Unnamed: 0 (int64, 0 – 16k) | text_prompt (string, lengths 110 – 62.1k) | code_prompt (string, lengths 37 – 152k)
---|---|---|
1,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example shape parameterisation
Step1: Parameterising shapes
Three options to parameterise shapes are given below; from raw coordinates, from an RT-DICOM file, or from a Monaco® 5.10 tel.1 file.
There is also a placeholder for importing directly from Eclipse™. Please let me know if someone with access to Eclipse™ achieves this.
From coordinates
Step2: From RT-DICOM
If you are using the online version of this notebook you will likely want to deidentify your dicom files. http
Step3: Directly from Monaco® 5.10 server
The following code is an example of what can be used to automatically pull and parameterise shapes from the server based off of patient ID. For use in other centres it will need adjustment. | Python Code:
import re
import numpy as np
import dicom
import matplotlib.pyplot as plt
%matplotlib inline
from electroninserts import (
parameterise_single_insert, display_parameterisation)
print("All modules and functions successfully imported.")
# !pip install --upgrade version_information
# %load_ext version_information
# %version_information dicom, electroninserts, re, numpy, matplotlib, version_information
Explanation: Example shape parameterisation
End of explanation
x = [0.99, -0.14, -1.0, -1.73, -2.56, -3.17, -3.49, -3.57, -3.17, -2.52, -1.76,
-1.04, -0.17, 0.77, 1.63, 2.36, 2.79, 2.91, 3.04, 3.22, 3.34, 3.37, 3.08, 2.54,
1.88, 1.02, 0.99]
y = [5.05, 4.98, 4.42, 3.24, 1.68, 0.6, -0.64, -1.48, -2.38, -3.77, -4.81,
-5.26, -5.51, -5.58, -5.23, -4.64, -3.77, -2.77, -1.68, -0.29, 1.23, 2.68, 3.8,
4.6, 5.01, 5.08, 5.05]
width, length, poi = parameterise_single_insert(x, y)
print("Width = {0:0.2f} cm\nLength = {1:0.2f} cm".format(width, length))
display_parameterisation(x, y, width, length, poi)
Explanation: Parameterising shapes
Three options to parameterise shapes are given below; from raw coordinates, from an RT-DICOM file, or from a Monaco® 5.10 tel.1 file.
There is also a placeholder for importing directly from Eclipse™. Please let me know if someone with access to Eclipse™ achieves this.
From coordinates
End of explanation
# Change this name to match the dicom file located in the same directory
# as this notebook.
dicom_filename = "example_dicom_file.dcm"
dcm = dicom.read_file(dicom_filename, force=True)
applicator_string = dcm.BeamSequence[0].ApplicatorSequence[0].ApplicatorID
energy_string = dcm.BeamSequence[0].ControlPointSequence[0].NominalBeamEnergy
ssd_string = dcm.BeamSequence[0].ControlPointSequence[0].SourceToSurfaceDistance
print("Applicator = {} (identifier name)".format(applicator_string))
print("Energy = {} (nominal)".format(energy_string))
print("SSD = {} (dicom units)\n".format(ssd_string))
block_data = np.array(dcm.BeamSequence[0].BlockSequence[0].BlockData)
x = np.array(block_data[0::2]).astype(float)/10
y = np.array(block_data[1::2]).astype(float)/10
width, length, poi = parameterise_single_insert(x, y)
print("Width = {0:0.2f} cm".format(width))
print("Length = {0:0.2f} cm".format(length))
display_parameterisation(x, y, width, length, poi)
Explanation: From RT-DICOM
If you are using the online version of this notebook you will likely want to deidentify your dicom files. http://www.dicompyler.com/ can be used to do this; however, I am not in a position to guarantee it will do this adequately. You need to check this yourself.
To upload the dicom file go to notebook home and click the "upload button" located at the top right of the dashboard.
End of explanation
# patientID = '00000'.zfill(6)
# string_search_pattern = r'\\MONACODA\FocalData\YOURDIRECTORYHERE\1~Clinical\*{}\plan\*\*tel.1'.format(patientID)
# string_search_pattern
# filepath_list = glob(string_search_pattern)
# filepath_list
telfilepath = "example_monaco510_telfile"
electronmodel_regex = "YourMachineName - \d+MeV" # \d+ stands for any positive integer
with open(telfilepath, "r") as file:
telfilecontents = np.array(file.read().splitlines())
electronmodel_index = []
for i, item in enumerate(telfilecontents):
if re.search(electronmodel_regex, item):
electronmodel_index += [i]
print("Located applicator and energy strings for plans within telfile:")
applicator_tel_string = [
telfilecontents[i+12] # applicator string is located 12 lines below electron model name
for i in electronmodel_index]
print(applicator_tel_string)
energy_tel_string = [
telfilecontents[i]
for i in electronmodel_index]
print(energy_tel_string)
for i, index in enumerate(electronmodel_index):
print("Applicator = {}".format(applicator_tel_string[i]))
print("Energy = {}\n".format(energy_tel_string[i]))
insert_inital_range = telfilecontents[
index + 51::] # coords start 51 lines after electron model name
insert_stop = np.where(
insert_inital_range=='0')[0][0] # coords stop right before a line containing 0
insert_coords_string = insert_inital_range[:insert_stop]
insert_coords = np.fromstring(','.join(insert_coords_string), sep=',')
x = insert_coords[0::2]/10
y = insert_coords[1::2]/10
width, length, poi = parameterise_single_insert(x, y)
print("Width = {0:0.2f} cm".format(width))
print("Length = {0:0.2f} cm".format(length))
display_parameterisation(x, y, width, length, poi)
Explanation: Directly from Monaco® 5.10 server
The following code is an example of what can be used to automatically pull and parameterise shapes from the server based off of patient ID. For use in other centres it will need adjustment.
End of explanation |
1,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallel computing using REBOUND and IPython/Jupyter
In this tutorial, we'll use IPython for parallel and distributed REBOUND simulations. With IPython, we can execute code on multi-core machines, both locally and remotely. This is particularly interesting for running many small simulations that do not have to communicate a lot with each other. Thus, a parameter space survey with REBOUND is ideally suited.
Note that this tutorial does not cover how to set up the IPython cluster profile; please refer to the IPython documentation for that.
First, we'll initialize the IPython cluster. In our case, we have set up IPython to run on two machines which together have 22 CPUs and can run a total of 44 threads in parallel. The machines are connected via ethernet and the login is handled via ssh. We'll also create a load_balanced_view, a scheduler that distributes the jobs to engines that are free, and import rebound on all engines.
Step1: Next, we import rebound on all of our engines. rc[
Step2: Next, we define a python function that runs one simulation. We add a star and two planets. The eccentricity and semi-major axis of the outer planet are passed to the function as a parameter.
We choose the whfast-nocor integrator, i.e. WHFast with no symplectic correctors, because we are not interested in the most accurate integration but rather want to have it run fast to get a quick overview of the parameter space. While we integrate, we check for escaping particles. If a particle escapes we know that the simulation is unstable and can terminate the integration.
Step3: Next, we're creating a 2D array of parameters to sample. Here, it's in the $a$/$e$ plane of parameter space and we create a 200x200 matrix. Then, we finally execute the simulations on the cluster using the map function on our load balanced view object.
Step4: Integrating these 40000 systems took only about 3 seconds on our 44 thread test cluster.
We can now make a plot to visualize our results. | Python Code:
from IPython.parallel import Client
rc = Client()
print "Cluster size: %d" % len(rc.ids)
lv = rc.load_balanced_view()
lv.block = True
Explanation: Parallel computing using REBOUND and IPython/Jupyter
In this tutorial, we'll use IPython for parallel and distributed REBOUND simulations. With IPython, we can execute code on multi-core machines, both locally and remotely. This is particularly interesting for running many small simulations that do not have to communicate a lot with each other. Thus, a parameter space survey with REBOUND is ideally suited.
Note that this tutorial does not cover how to set up the IPython cluster profile; please refer to the IPython documentation for that.
First, we'll initialize the IPython cluster. In our case, we have set up IPython to run on two machines which together have 22 CPUs and can run a total of 44 threads in parallel. The machines are connected via ethernet and the login is handled via ssh. We'll also create a load_balanced_view, a scheduler that distributes the jobs to engines that are free, and import rebound on all engines.
End of explanation
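Note: on recent IPython versions the parallel machinery has moved out of IPython.parallel into the separate ipyparallel package; the cell above would then look roughly like the following sketch (assuming an ipcluster is already running for the default profile):
import ipyparallel as ipp
rc = ipp.Client()
print("Cluster size: %d" % len(rc.ids))
lv = rc.load_balanced_view()  # scheduler that hands jobs to whichever engines are free
lv.block = True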
with rc[:].sync_imports():
import rebound
Explanation: Next, we import rebound on all of our engines. rc[:] is a "direct view" of all engines.
End of explanation
def simulation(par):
a, e = par # unpack parameters
rebound.reset()
rebound.integrator = "whfast-nocor"
rebound.dt = 5.
rebound.add(m=1.)
rebound.add(m=0.000954, a=5.204, anom=0.600, omega=0.257, e=0.048)
rebound.add(m=0.000285, a=a, anom=0.871, omega=1.616, e=e)
rebound.move_to_com()
rebound.init_megno(1e-16)
try:
rebound.integrate(5e2*2.*3.1415,maxR=20.) # integrator for 500 years
return rebound.calculate_megno()
except rebound.ParticleEscaping:
return 10. # At least one particle got ejected, returning large MEGNO.
Explanation: Next, we define a python function that runs one simulation. We add a star and two planets. The eccentricity and semi-major axis of the outer planet are passed to the function as a parameter.
We choose the whfast-nocor integrator, i.e. WHFast with no symplectic correctors, because we are not interested in the most accurate integration but rather want to have it run fast to get a quick overview of the parameter space. While we integrate, we check for escaping particles. If a particle escapes we know that the simulation is unstable and can terminate the integration.
End of explanation
import numpy as np
Ngrid = 200
parameters = np.swapaxes(np.meshgrid(np.linspace(7.,10.,Ngrid),np.linspace(0.,0.5,Ngrid)),0,2).reshape(-1,2)
results = lv.map(simulation,parameters,chunksize=20)
Explanation: Next, we're creating a 2D array of parameters to sample. Here, it's in the $a$/$e$ plane of parameter space and we create a 200x200 matrix. Then, we finally execute the simulations on the cluster using the map function on our load balanced view object.
End of explanation
results2d = np.array(results).reshape(Ngrid,Ngrid).T
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,8)); ax = plt.subplot(111)
extent = [parameters[:,0].min(),parameters[:,0].max(),parameters[:,1].min(),parameters[:,1].max()]
ax.set_xlim(extent[0],extent[1])
ax.set_xlabel("semi-major axis $a$")
ax.set_ylim(extent[2],extent[3])
ax.set_ylabel("eccentricity $e$")
im = ax.imshow(results2d, interpolation="none", vmin=1.9, vmax=4, cmap="RdYlGn_r", origin="lower", aspect='auto', extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.set_label("MEGNO $\\langle Y \\rangle$")
Explanation: Integrating these 40000 systems took only about 3 seconds on our 44 thread test cluster.
We can now make a plot to visualize our results.
End of explanation |
1,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Colaboratory
Before you start
When you open a new Colab from Github (like this one), you cannot save changes. So it's usually best to store the Colab in your personal drive "File > Save a copy in drive..." before you do anything else.
Introduction
Some important links to keep open during the workshop – open these tabs now!
Step1: You can also only execute one single statement in a cell.
Step2: What to do if you get stuck
If you should get stuck and the documentation doesn't help you consider using additional help.
Step3: Importing TensorFlow
We'll be using TensorFlow 2.1.0 in this workshop. This will soon be the default, but for the time being we still need to activate it with the Colab-specific %tensorflow_version magic.
Step4: Running shell commands
You can run shell commands directly in Colab
Step5: Autocompletion and docstrings
Jupyter shows possible completions of partially typed
commands.
Try it for yourself by displaying all available tf. methods that start with one.
Step6: In addition, you can also display docstrings to see the function signature and possible parameters.
Step7: Alternatively, you might also inspect function details with docstrings if available by appending a "?".
Step8: Note
Step9: Runtimes
As noted in the introduction above, Colab provides multiple runtimes with different hardware accelerators
Step10: As can be seen, the machine has been allocated just very recently for our purposes.
VM specifications
Step11: Plotting
The notebook environment also provides options to visualize and interact with data.
We'll take a short look at the plotting/visualization libraries Matplotlib and Altair.
Matplotlib
Matplotlib is one of the most famous Python plotting libraries and can be used to plot results within a cell's output (see Matplotlib Introduction).
Let's try to plot something with it.
Step12: Altair
Another declarative visualization library for Python is Altair (see Altair
Step13: Notebook Magics
The IPython and Colab environment support built-in magic commands called magics (see
Step14: Line magics
You can also make use of line magics which can be inserted anywhere at the beginning of a line inside a cell and need to be prefixed with %.
Examples include
Step15: Note
Step16: Data handling
There are multiple ways to provide data to a Colab's VM environment.
Note
Step17: List a subset of the contained files using the gsutil tool.
Step18: Conveniently, TensorFlow natively supports multiple file systems such as
Step19: Snippets
Finally, we can take a look at the snippets support in Colab.
If you're using Jupyter please see Jupyter contrib nbextensions - Snippets menu as this is not natively supported.
Snippets are a way to quickly "bookmark" pieces of code or text that you might want to insert into specific cells.
Step20: We have created some default snippets for this workshop in
Step21: Pro tip
Step22: Forms
You can simplify cells by hiding their code and displaying a form instead.
Note
Step23: Interactive debugging
An example of an IPython tool that you can utilize is the interactive debugger
provided inside an IPython environment like Colab.
For instance, by using %pdb on, you can automatically trigger the debugger on exceptions to further analyze the state.
Some useful debugger commands are | Python Code:
# YOUR ACTION REQUIRED:
# Execute this cell first using <CTRL-ENTER> and then using <SHIFT-ENTER>.
# Note the difference in which cell is selected after execution.
print('Hello world!')
Explanation: Colaboratory
Before you start
When you open a new Colab from Github (like this one), you cannot save changes. So it's usually best to store the Colab in your personal drive "File > Save a copy in drive..." before you do anything else.
Introduction
Some important links to keep open during the workshop – open these tabs now!:
TF documentation : Use the search box (top right) to get documentation on Tensorflow's rich API.
solutions/ : Every notebook in the exercises/ directory has a corresponding notebook in the solutions/ directory.
Colaboratory (Colab) is a Jupyter notebook environment which allows you to work with data and code in an interactive manner. You can decide where you want to run your code:
Using a hosted runtime provided by Google (default)
Locally using your own machine and resources
It supports Python 3 and comes with a set of pre-installed libraries like Tensorflow and Matplotlib but also gives you the option to install more libraries on demand. The resulting notebooks can be shared in a straightforward way.
Caveats:
The virtual machines used for the runtimes are ephemeral, so make sure to save your data in a persistent location, e.g. locally (downloading), in Google Cloud Storage, or in Google Drive.
The service is free to use, but the performance of the default runtimes can be insufficient for your purposes.
You have the option to select a runtime with GPU or TPU support.
"Colaboratory is intended for interactive use. Long-running background computations, particularly on GPUs, may be stopped. [...] We encourage users who wish to run continuous or long-running computations through Colaboratory’s UI to use a local runtime." - See Colaboratory FAQ
Getting started
Connect to a runtime now by clicking connect in the top right corner if you don't already see a green checkmark there.
To get a better overview you might want to activate the Table of contents by clicking on the arrow on the left.
Important shortcuts
Action | Colab Shortcut | Jupyter Shortcut
---|---|---
Executes current cell | <CTRL-ENTER> | <CTRL-ENTER>
Executes current cell and moves to next cell | <SHIFT-ENTER> | <SHIFT-ENTER>
Executes current selection | <CTRL-SHIFT-ENTER> | N/A
Insert cell above | <CTRL-M> <A> | <A>
Append cell below | <CTRL-M> <B> | <B>
Shows searchable command palette | <CTRL-SHIFT-P> | <CTRL-SHIFT-P>
Convert cell to code | <CTRL-M> <Y> | <Y>
Convert cell to Markdown | <CTRL-M> <M> | <M>
Autocomplete (on by default) | <CTRL+SPACE> | <TAB>
Goes from edit to "command" mode | <ESC> | <ESC>
Goes from "command" to edit mode | <ENTER> | <ENTER>
Show keyboard shortcuts | <CTRL-M> <H> | <H>
<p align="center"><b>Note:</b> On OS X you can use `<COMMAND>` instead of `<CTRL>`</p>
Give it a try!
End of explanation
# YOUR ACTION REQUIRED:
# Execute only the first print statement by selecting the first line and pressing
# <CTRL-SHIFT-ENTER>.
print('Only print this line.')
print('Avoid printing this line.')
Explanation: You can also only execute one single statement in a cell.
End of explanation
def xor_str(a, b):
return ''.join([chr(ord(a[i % len(a)]) ^ ord(b[i % len(b)]))
for i in range(max(len(a), len(b)))])
# YOUR ACTION REQUIRED:
# Try to find the correct value for the variable below.
workshop_secret = 'Tensorflow rocks' #workshop_secret = '(replace me!)'
xor_str(workshop_secret,
'\x03\x00\x02\x10\x00\x1f\x03L\x1b\x18\x00\x06\x07\x06K2\x19)*S;\x17\x08\x1f\x00\x05F\x1e\x00\x14K\x115\x16\x07\x10\x1cR1\x03\x1d\x1cS\x1a\x00\x13J')
# Hint: You might want to checkout the ../solutions directory
# (you should already have opened this directory in a browser tab :-)
Explanation: What to do if you get stuck
If you should get stuck and the documentation doesn't help you consider using additional help.
End of explanation
# We must call this "magic" before importing TensorFlow. We will explain
# further down what "magics" (starting with %) are.
%tensorflow_version 2.x
# Include basic dependencies and display the tensorflow version.
import tensorflow as tf
tf.__version__
Explanation: Importing TensorFlow
We'll be using TensorFlow 2.1.0 in this workshop. This will soon be the default, but for the time being we still need to activate it with the Colab-specific %tensorflow_version magic.
End of explanation
# Print the current working directory and list all files in it.
!pwd
!ls
# Especially useful: Installs new packages.
!pip install qrcode
import qrcode
qrcode.make('Colab rocks!')
Explanation: Running shell commands
You can run shell commands directly in Colab: simply prepend the command with a !.
End of explanation
# YOUR ACTION REQUIRED:
# Set the cursor to after tf.one and press <CTRL-SPACE>.
# On Mac, only <OPTION-ESCAPE> might work.
tf.one_hot #tf.one
Explanation: Autocompletion and docstrings
Jupyter shows possible completions of partially typed
commands.
Try it for yourself by displaying all available tf. methods that start with one.
End of explanation
# YOUR ACTION REQUIRED:
# Complete the command to `tf.maximum` and then add the opening bracket "(" to
# see the function documentation.
tf.maximum([1, 2, 3], [2, 2, 2]) #tf.maximu
Explanation: In addition, you can also display docstrings to see the function signature and possible parameters.
End of explanation
tf.maximum?
Explanation: Alternatively, you might also inspect function details with docstrings if available by appending a "?".
End of explanation
test_dict = {'key0': 'Tensor', 'key1': 'Flow'}
test_dict?
Explanation: Note: This also works for any other type of object as can be seen below.
End of explanation
# Display how long the system has been running.
# Note : this shows "0 users" because no user is logged in via SSH.
!uptime
Explanation: Runtimes
As noted in the introduction above, Colab provides multiple runtimes with different hardware accelerators:
CPU (default)
GPU
TPU
which can be selected by choosing "Runtime > Change runtime type" in the menu.
Please be aware that selecting a new runtime will assign a new virtual machine (VM).
In general, assume that any changes you make to the VM environment including data storage are ephemeral. Particularly, this might require to execute previous cells again as their content is unknown to a new runtime otherwise.
Let's take a closer look at one of such provided VMs.
Once we have been assigned a runtime we can inspect it further.
End of explanation
# Display available and used memory.
!free -h
print("-"*70)
# Display the CPU specification.
!lscpu
print("-"*70)
# Display the GPU specification (if available).
!(nvidia-smi | grep -q "has failed") && echo "No GPU found!" || nvidia-smi
Explanation: As can be seen, the machine has been allocated just very recently for our purposes.
VM specifications
End of explanation
# Display the Matplotlib outputs within a cell's output.
%matplotlib inline
import numpy as np
from matplotlib import pyplot
# Create a randomized scatterplot using matplotlib.
x = np.random.rand(100).astype(np.float32)
noise = np.random.normal(scale=0.3, size=len(x))
y = np.sin(x * 7) + noise
pyplot.scatter(x, y)
Explanation: Plotting
The notebook environment also provides options to visualize and interact with data.
We'll take a short look at the plotting/visualization libraries Matplotlib and Altair.
Matplotlib
Matplotlib is one of the most famous Python plotting libraries and can be used to plot results within a cell's output (see Matplotlib Introduction).
Let's try to plot something with it.
End of explanation
# Load an example dataset.
from vega_datasets import data
cars = data.cars()
# Plot the dataset, referencing dataframe column names.
import altair as alt
alt.Chart(cars).mark_point().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
tooltip=['Name', 'Origin', 'Horsepower', 'Miles_per_Gallon']
).interactive()
Explanation: Altair
Another declarative visualization library for Python is Altair (see Altair: Declarative Visualization in Python).
Try to zoom in/out and to hover over individual data points in the resulting plot below.
End of explanation
%%sh
echo "This is a shell script!"
# List all running VM processes.
ps -ef
echo "Done"
# Embed custom HTML directly into a cell's output.
%%html
<marquee>HTML rocks</marquee>
Explanation: Notebook Magics
The IPython and Colab environment support built-in magic commands called magics (see: IPython - Magics).
In addition to default Python, these commands might be handy for example when it comes to interacting directly with the VM or the Notebook itself.
Cell magics
Cell magics define a mode for a complete cell and are prefixed with %%.
Examples include:
%%bash or %%sh
%%html
%%javascript
End of explanation
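The %%javascript cell magic listed above was not demonstrated; a minimal sketch (the message is arbitrary):
%%javascript
// Runs in the browser that renders the notebook; check the JavaScript console for the output.
console.log("Hello from a %%javascript cell");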
n = 1000000
%time list1 = [i for i in range(n)]
print("")
%time list2 = [i for i in range(int(n/2))]
Explanation: Line magics
You can also make use of line magics which can be inserted anywhere at the beginning of a line inside a cell and need to be prefixed with %.
Examples include:
%time - display the required time to execute the current line
%cd - change the current working directory
%pdb - invoke an interactive Python debugger
%lsmagic - list all available line magic and cell magic functions
For example, if you want to find out how long one specific line requires to be executed you can just prepend %time.
End of explanation
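For completeness, a quick sketch of two of the other line magics listed above (the directory is the usual Colab default and is only an assumption):
%cd /content   # change the current working directory
%lsmagic       # list all available line and cell magics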
%%time
n = 1000000
list1 = [i for i in range(n)]
list2 = [i for i in range(int(n/2))]
Explanation: Note: Some line magics like %time can also be used for complete cells by writing %%time.
End of explanation
from google.colab import auth
auth.authenticate_user()
Explanation: Data handling
There are multiple ways to provide data to a Colab's VM environment.
Note: This section only applies to Colab.
Jupyter has a file explorer and other options for data handling.
The options include:
* Uploading files from the local file system.
* Connecting to Google Cloud Storage (explained below).
* Connecting to Google Drive (see: Snippets: Drive; will be used in the next Colabs).
Uploading files from the local file system
If you need to manually upload files to the VM, you can use the files tab on the left. The files tab also allows you to browse the contents of the VM and when you double click on a file you'll see a small text editor on the right.
Connecting to Google Cloud Storage
Google Cloud Storage (GCS) is a cloud file storage service with a RESTful API.
We can utilize it to store our own data or to access data provided by the following identifier:
gs://[BUCKET_NAME]/[OBJECT_NAME]
We'll use the data provided in gs://amld-datasets/zoo_img as can be seen below.
Before we can interact with the cloud environment, we need to grant permissions accordingly (also see External data: Cloud Storage).
End of explanation
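Minimal sketches of the first and third options listed above, using the google.colab helper modules (call patterns as usually documented for Colab):
# Upload files from the local file system (opens a file picker in the browser).
from google.colab import files
uploaded = files.upload()
# Mount Google Drive; its contents then appear under /content/drive.
from google.colab import drive
drive.mount('/content/drive')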
!gsutil ls gs://amld-datasets/zoo_img | head
Explanation: List a subset of the contained files using the gsutil tool.
End of explanation
# Note: This cell hangs if you forget to call auth.authenticate_user() above.
tf.io.gfile.glob('gs://amld-datasets/zoo_img/*')[:10]
Explanation: Conveniently, TensorFlow natively supports multiple file systems such as:
GCS - Google Cloud Storage
HDFS - Hadoop
S3 - Amazon Simple Storage
An example for the GCS filesystem can be seen below.
End of explanation
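As an additional sketch, a single object can be read through the same abstraction; the file name below is hypothetical and only the call pattern matters:
with tf.io.gfile.GFile('gs://amld-datasets/zoo_img/some_file', 'rb') as f:
    first_bytes = f.read(16)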
# YOUR ACTION REQUIRED:
# Explore existing snippets by going to the `Code snippets` section.
# Click on the <> button on the left sidebar to open the snippets.
# Alternatively, you can press `<CTRL><ALT><P>` (or `<COMMAND><OPTION><P>` for
# OS X).
Explanation: Snippets
Finally, we can take a look at the snippets support in Colab.
If you're using Jupyter please see Jupyter contrib nbextensions - Snippets menu as this is not natively supported.
Snippets are a way to quickly "bookmark" pieces of code or text that you might want to insert into specific cells.
End of explanation
from google.colab import snippets
# snippets.register('https://colab.research.google.com/drive/1OFSjEmqC-UC66xs-LR7-xmgkvxYTrAcN')
Explanation: We have created some default snippets for this workshop in:
https://colab.research.google.com/drive/1OFSjEmqC-UC66xs-LR7-xmgkvxYTrAcN
In order to use these snippets, you can:
Click on "Tools > Settings".
Copy the above url into "Custom snippet notebook URL" and press enter.
As soon as you update the settings, the snippets will then become available in every Colab. Search for "amld" to quickly find them.
Alternatively, you can also add snippets via the API (but this needs to be done for every Colab/kernel):
End of explanation
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def mymagic(line_content, cell_content=None):
print('line_content="%s" cell_content="%s"' % (line_content, cell_content))
%mymagic Howdy Alice!
%%mymagic simple question
Howdy Alice!
how are you?
Explanation: Pro tip : Maybe this is a good moment to create your own snippets and register them in settings. You can then start collecting often-used code and have it ready when you need it... In this Colab you'll need to have text cells with titles (like ### snippet name) preceeding the code cells.
----- Optional part -----
Custom line magic
You can also define your own line/cell magic in the following way.
End of explanation
#@title Execute me
# Hidden cell content.
print("Double click the cell to see its content.")
# Form example mostly taken from "Adding form fields" Snippet.
#@title Example form
#@markdown Specify some test data and execute this cell.
string_type = 'test_string' #@param {type: "string"}
slider_value = 145 #@param {type: "slider", min: 100, max: 200}
number = 1339 #@param {type: "number"}
date = '2019-01-26' #@param {type: "date"}
pick_me = "a" #@param ['a', 'b', 'c']
#@markdown ---
print("Submitted data:")
print(string_type, slider_value, number, date, pick_me)
Explanation: Forms
You can simplify cells by hiding their code and displaying a form instead.
Note: You can display or hide the code by double clicking the form which might be on the right side.
End of explanation
# YOUR ACTION REQUIRED:
# Execute this cell, print the variable contents of a, b and exit the debugger.
%pdb on
a = 67069 / 47 - 0x5a
b = a - 0x539
#c = a / b # Will throw an exception.
Explanation: Interactive debugging
An example of an IPython tool that you can utilize is the interactive debugger
provided inside an IPython environment like Colab.
For instance, by using %pdb on, you can automatically trigger the debugger on exceptions to further analyze the state.
Some useful debugger commands are:
Description | Command
---|---
h(elp) | Display available commands
p(rint) x | Show content of object x
w(here) | Show current instruction pointer position
q(uit) | Leave the debugger
End of explanation |
1,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup Software Environment
Step1: Download the Cincinnati 311 (Non-Emergency) Service Requests data
Dataset Description
Example of downloading a *.csv file programmatically using urllib2
Step2: Parse the 1st record
Step3: Implement a class that parses and cleans a Cincinnati 311 data record
This class forms the basis for mapper functions
This software applies the dateutil package parser function to parse date/time strings
from Cincinnati311CSVDataParser import Cincinnati311CSVDataParser
from csv import DictReader
import os
import re
import urllib2
Explanation: Setup Software Environment
End of explanation
data_dir = "./Data"
csv_file_path = os.path.join(data_dir, "cincinnati311.csv")
if not os.path.exists(csv_file_path):
if not os.path.exists(data_dir):
os.mkdir(data_dir)
url = 'https://data.cincinnati-oh.gov/api/views' +\
'/4cjh-bm8b/rows.csv?accessType=DOWNLOAD'
response = urllib2.urlopen(url)
html = response.read()
with open(csv_file_path, 'wb') as h_file:
h_file.write(html)
Explanation: Download the Cincinnati 311 (Non-Emergency) Service Requests data
Dataset Description
Example of downloading a *.csv file programmatically using urllib2
End of explanation
h_file = open("./Data/cincinnati311.csv", "r")
fieldnames = [re.sub("_", "", elem.lower())\
for elem in h_file.readline().rstrip().split(',')]
readerobj = DictReader(h_file, fieldnames)
print readerobj.next()
h_file.close()
Explanation: Parse the 1st record
End of explanation
# head -n 3 cincinnati311.csv > sample.csv
h_file = open("./Data/sample.csv", "r")
parserobj = Cincinnati311CSVDataParser(h_file)
for record in parserobj:
print record
h_file.close()
Explanation: Implement a class that parses and cleans a Cincinnati 311 data record
This class forms the basis for mapper functions
This software applies the dateutil package parser function to parse date/time strings
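The actual implementation ships alongside the notebook and is not reproduced here; a rough, hypothetical sketch of what such a parser class could look like (the column name and cleaning rules are assumptions, not the author's code):
from dateutil import parser as dateutil_parser

class Cincinnati311CSVDataParserSketch(object):
    """Iterate over cleaned Cincinnati 311 records read from an open CSV file handle."""
    def __init__(self, file_handle):
        fieldnames = [re.sub("_", "", elem.lower())
                      for elem in file_handle.readline().rstrip().split(',')]
        self.reader = DictReader(file_handle, fieldnames)
    def __iter__(self):
        return self
    def next(self):  # Python 2 iterator protocol, matching the notebook
        record = self.reader.next()
        # Parse the date/time string into a datetime object (field name is an assumption).
        if record.get('requesteddatetime'):
            record['requesteddatetime'] = dateutil_parser.parse(record['requesteddatetime'])
        return record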
End of explanation |
1,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Higgs Boson Analysis with CMS Open Data
This is an example analysis of the Higgs boson detection via the decay channel H → ZZ* → 4l
From the decay products measured at the CMS experiment and provided as open data, you will be able to produce a histogram, and from there you can infer the invariant mass of the Higgs boson.
Code
Step1: H → ZZ* → 4$\mu$ - cuts and plot, using Monte Carlo signal data
(this is a step of the broader analysis)
Step4: Apply cuts
More details on the cuts (filters applied to the event data) in the reference CMS paper on the discovery of the Higgs boson
Step5: Compute the invariant mass
This computes the 4-vectors sum for the 4-lepton system
using formulas from special relativity.
See also http
Step7: Note on sparkhistogram
Use this to define the computeHistogram function if you cannot pip install sparkhistogram | Python Code:
# Run this if you need to install Apache Spark (PySpark)
# !pip install pyspark
# Install sparkhistogram
# Note: if you cannot install the package, create the computeHistogram
# function as detailed at the end of this notebook.
!pip install sparkhistogram
# Run this to download the dataset
# See further details at https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
!wget https://sparkdltrigger.web.cern.ch/sparkdltrigger/CMS_Higgs_opendata/SMHiggsToZZTo4L.parquet
Explanation: Higgs Boson Analysis with CMS Open Data
This is an example analysis of the Higgs boson detection via the decay channel H → ZZ* → 4l
From the decay products measured at the CMS experiment and provided as open data, you will be able to produce a histogram, and from there you can infer the invariant mass of the Higgs boson.
Code: it is based on the original work on cms opendata notebooks and this notebook with RDataFrame implementation
Reference: link to the original article with CMS Higgs boson discovery
See also: https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
Author and contact: [email protected]
April, 2022
End of explanation
# Start the Spark Session
# This uses local mode for simplicity
# the use of findspark is optional
import findspark
findspark.init("/home/luca/Spark/spark-3.3.0-bin-hadoop3")
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName("H_ZZ_4Lep")
.master("local[*]")
.config("spark.driver.memory", "8g")
.config("spark.sql.parquet.enableNestedColumnVectorizedReader", "true")
.getOrCreate()
)
# Read data with the candidate events
# Only Muon events for this reduced-scope notebook
path = "./"
df_MC_events_signal = spark.read.parquet(path + "SMHiggsToZZTo4L.parquet")
df_MC_events_signal.printSchema()
# Count the number of events before cuts (filter)
print(f"Number of events, MC signal: {df_MC_events_signal.count()}")
Explanation: H → ZZ* → 4$\mu$ - cuts and plot, using Monte Carlo signal data
(this is a step of the broader analysis)
End of explanation
df_events = df_MC_events_signal.selectExpr("""arrays_zip(Muon_charge, Muon_mass, Muon_pt, Muon_phi, Muon_eta,
                                                          Muon_dxy, Muon_dz, Muon_dxyErr, Muon_dzErr, Muon_pfRelIso04_all) as Muon""",
                                            "nMuon")
df_events.printSchema()
# Apply filters to the input data
# Keep only events with at least 4 muons
df_events = df_events.filter("nMuon >= 4")
# Filter Muon arrays
# Filters are detailed in the CMS Higgs boson paper
# See notebook with RDataFrame implementation at https://root.cern/doc/master/df103__NanoAODHiggsAnalysis_8py.html
# Article with the CMS Higgs boson discovery: https://inspirehep.net/record/1124338
df_events_filtered = df_events.selectExpr("""filter(Muon, m ->
    abs(m.Muon_pfRelIso04_all) < 0.40 -- Require good isolation
    and m.Muon_pt > 5 -- Good muon kinematics
    and abs(m.Muon_eta) < 2.4
    -- Track close to primary vertex with small uncertainty
    and (m.Muon_dxy * m.Muon_dxy + m.Muon_dz * m.Muon_dz) / (m.Muon_dxyErr * m.Muon_dxyErr + m.Muon_dzErr*m.Muon_dzErr) < 16
    and abs(m.Muon_dxy) < 0.5
    and abs(m.Muon_dz) < 1.0
    ) as Muon""")
# only events with exactly 4 Muons left after the previous cuts
df_events_filtered = df_events_filtered.filter("size(Muon) == 4")
# cut on lepton charge
# paper: "selecting two pairs of isolated leptons, each of which is comprised of two leptons with the same flavour and opposite charge"
df_events_4muons = df_events_filtered.filter("Muon.Muon_charge[0] + Muon.Muon_charge[1] + Muon.Muon_charge[2] + Muon.Muon_charge[3] == 0")
print(f"Number of events after applying cuts: {df_events_4muons.count()}")
Explanation: Apply cuts
More details on the cuts (filters applied to the event data) in the reference CMS paper on the discovery of the Higgs boson
End of explanation
# This computes the 4-vectors sum for the 4-muon system
# convert to cartesian coordinates
df_4lep = df_events_4muons.selectExpr(
"Muon.Muon_pt[0] * cos(Muon.Muon_phi[0]) P0x", "Muon.Muon_pt[1] * cos(Muon.Muon_phi[1]) P1x", "Muon.Muon_pt[2] * cos(Muon.Muon_phi[2]) P2x", "Muon.Muon_pt[3] * cos(Muon.Muon_phi[3]) P3x",
"Muon.Muon_pt[0] * sin(Muon.Muon_phi[0]) P0y", "Muon.Muon_pt[1] * sin(Muon.Muon_phi[1]) P1y", "Muon.Muon_pt[2] * sin(Muon.Muon_phi[2]) P2y", "Muon.Muon_pt[3] * sin(Muon.Muon_phi[3]) P3y",
"Muon.Muon_pt[0] * sinh(Muon.Muon_eta[0]) P0z", "Muon.Muon_pt[1] * sinh(Muon.Muon_eta[1]) P1z", "Muon.Muon_pt[2] * sinh(Muon.Muon_eta[2]) P2z", "Muon.Muon_pt[3] * sinh(Muon.Muon_eta[3]) P3z",
"Muon.Muon_mass[0] as Mass"
)
# compute energy for each muon
df_4lep = df_4lep.selectExpr(
"P0x", "P0y", "P0z", "sqrt(Mass* Mass + P0x*P0x + P0y*P0y + P0z*P0z) as E0",
"P1x", "P1y", "P1z", "sqrt(Mass* Mass + P1x*P1x + P1y*P1y + P1z*P1z) as E1",
"P2x", "P2y", "P2z", "sqrt(Mass* Mass + P2x*P2x + P2y*P2y + P2z*P2z) as E2",
"P3x", "P3y", "P3z", "sqrt(Mass* Mass + P3x*P3x + P3y*P3y + P3z*P3z) as E3"
)
# sum energy and momenta over the 4 muons
df_4lep = df_4lep.selectExpr(
"P0x + P1x + P2x + P3x as Px",
"P0y + P1y + P2y + P3y as Py",
"P0z + P1z + P2z + P3z as Pz",
"E0 + E1 + E2 + E3 as E"
)
df_4lep.show(5)
# This computes the invariant mass for the 4-muon system
df_4lep_invmass = df_4lep.selectExpr("sqrt(E * E - ( Px * Px + Py * Py + Pz * Pz)) as invmass_GeV")
df_4lep_invmass.show(5)
# This defines the DataFrame transformation to compute the histogram of the invariant mass
# The result is a histogram with (energy) bin values and event counts foreach bin
# Requires sparkhistogram
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
from sparkhistogram import computeHistogram
# histogram parameters
min_val = 80
max_val = 250
step = 3.0
num_bins = (max_val - min_val) / step
# use the helper function computeHistogram in the package sparkhistogram
histogram_data = computeHistogram(df_4lep_invmass, "invmass_GeV", min_val, max_val, num_bins)
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
# This plots the data histogram with error bars
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
x = histogram_data_pandas["value"]
y = histogram_data_pandas["count"]
# scatter plot
#ax.plot(x, y, marker='o', color='red', linewidth=0)
#ax.errorbar(x, y, err, fmt = 'ro')
# histogram with error bars
ax.bar(x, y, width = 5.0, capsize = 5, linewidth = 0.5, ecolor='blue', fill=True)
ax.set_xlim(min_val-2, max_val)
ax.set_xlabel("$m_{4\mu}$ (GeV)")
ax.set_ylabel(f"Number of Events / bucket_size = {step} GeV")
ax.set_title("Distribution of the 4-Muon Invariant Mass")
# Label for the Z ang Higgs spectrum peaks
txt_opts = {'horizontalalignment': 'left',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.text(0.48, 0.71, "Higgs boson, mass = 125 GeV", **txt_opts)
# Add energy and luminosity
plt.text(0.60, 0.92, "CMS open data, for education", **txt_opts)
plt.text(0.60, 0.87, '$\sqrt{s}$=13 TeV, Monte Carlo data', **txt_opts)
plt.show()
spark.stop()
Explanation: Compute the invariant mass
This computes the 4-vectors sum for the 4-lepton system
using formulas from special relativity.
See also http://edu.itp.phys.ethz.ch/hs10/ppp1/2010_11_02.pdf
and https://en.wikipedia.org/wiki/Invariant_mass
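For reference, the formulas implemented in the cells above (natural units, $c=1$): each muon has energy $E_i = \sqrt{m_i^2 + p_{ix}^2 + p_{iy}^2 + p_{iz}^2}$ with $p_x = p_T\cos\phi$, $p_y = p_T\sin\phi$, $p_z = p_T\sinh\eta$, and the invariant mass of the 4-muon system is $m_{4\mu} = \sqrt{\left(\sum_i E_i\right)^2 - \left|\sum_i \vec{p}_i\right|^2}$.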
End of explanation
def computeHistogram(df: "DataFrame", value_col: str, min: float, max: float, bins: int) -> "DataFrame":
    """This is a dataframe function to compute the count/frequency histogram of a column
    Parameters
    ----------
    df: the dataframe with the data to compute
    value_col: column name on which to compute the histogram
    min: minimum value in the histogram
    max: maximum value in the histogram
    bins: number of histogram buckets to compute
    Output DataFrame
    ----------------
    bucket: the bucket number, range from 1 to bins (included)
    value: midpoint value of the given bucket
    count: number of values in the bucket
    """
    step = (max - min) / bins
    # this will be used to fill in for missing buckets, i.e. buckets with no corresponding values
    df_buckets = spark.sql(f"select id+1 as bucket from range({bins})")
    histdf = (df
        .selectExpr(f"width_bucket({value_col}, {min}, {max}, {bins}) as bucket")
        .groupBy("bucket")
        .count()
        .join(df_buckets, "bucket", "right_outer") # add missing buckets and remove buckets out of range
        .selectExpr("bucket", f"{min} + (bucket - 1/2) * {step} as value", # use center value of the buckets
                    "nvl(count, 0) as count") # buckets with no values will have a count of 0
        .orderBy("bucket")
    )
    return histdf
Explanation: Note on sparkhistogram
Use this to define the computeHistogram function if you cannot pip install sparkhistogram
End of explanation |
1,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
entities
Step1: posting to twitter
Step2: getting access tokens for yourself | Python Code:
response=twitter.search(q="data journalism",result_type="recent",count=20)
first=response['statuses'][0]
first.keys()
first['entities']
for item in first['entities']['urls']:
print(item['expanded_url'])
for item in first['entities']['user_mentions']:
print(item['screen_name'])
cursor = twitter.cursor(twitter.search,q='"kevin durant"-filter:retweets',count=100)
all_urls = list()
for tweet in cursor:
print(tweet['entities'])
for item in tweet['entities']['urls']:
all_urls.append(item['expanded_url'])
if len(all_urls)>1000:
break
all_urls
url_count=Counter(all_urls)
for item in url_count.most_common(10):
print(item)
cursor = twitter.cursor(twitter.search,q='"kevin durant"-filter:retweets',count=100)
all_media_urls=list()
for tweet in cursor:
if 'media' in tweet['entities']:
for item in tweet['entities']['media']:
all_media_urls.append(item['media_url'])
if len(all_media_urls)>1000:
break
fh=open("preview.html","w")
for item in all_media_urls:
fh.write('<img src="{}" width="100">'.format(item))
fh.close()
Explanation: entities
End of explanation
twitter.update_status(
status="I'm teaching a class right now on how to post to twitter with python.")
Explanation: posting to twitter
End of explanation
twitter= twython.Twython(api_key,api_secret) #create a Twython object
auth= twitter.get_authentication_tokens()
print("Log into Twitter as the user you want to authorize")
print("\t" +auth['auth_url']) #auth_url predetermined by the
Explanation: getting access tokens for yourself:
created an app
authorized yourself to use that app using
getting access tokens for someone else:
create the app
have the other user authorize the application <- complicated process!
authorizing an app
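The notebook stops here; completing the flow with Twython typically looks like the sketch below (the verifier is the PIN Twitter shows after the user approves the app — treat this as illustrative, not the exact classroom code):
temp_twitter = twython.Twython(api_key, api_secret,
                               auth['oauth_token'], auth['oauth_token_secret'])
final = temp_twitter.get_authorized_tokens(oauth_verifier)  # oauth_verifier entered by the user
# final['oauth_token'] and final['oauth_token_secret'] are the user's access tokens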
End of explanation |
1,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chem 30324, Spring 2020, Homework 1
Due on January 22, 2020
Problem 1
Step1: 1. How many different 5-card hands are there? (Remember, in poker the order in which the cards are received does not matter.)
Step2: 2. What is the probability of being dealt four of a kind (a card of the same rank from each suit)?
Step3: 3. What is the probability of being dealt a flush (five cards of the same suit)?
Step4: Problem 2
Step5: 2. What is the most probable value of $x$?
Step6: 3. What is the expectation value of $x$?
Step7: 4. What is the variance of $x$?
Step8: Problem 3
Step9: 2. What is the probability that the person won't have traveled any net distance at all after 20 steps?
Step10: 3. What is the probability that the person has traveled half the maximum distance after 20 steps?
Step11: 4. Plot the probability of traveling a given distance vs distance. Does the probability distribution look familiar? You'll see it again when we talk about diffusion.
Step12: Problem 4
Step13: 2. What is the expectation value of the kinetic energy $K$ of a particle? How does your answer depend on the particle mass? On temperature?
$\int_{-\infty}^{\infty} Ce^{-\frac{mv^2}{2k_B T}} dv = C(\frac{2k_B T\pi}{m})^\frac{1}{2} $
$\int_{-\infty}^{\infty} \frac{mv^2}{2}Ce^{-\frac{mv^2}{2k_B T}} dv = \frac{Ck_B T}{2}(\frac{2k_B T\pi}{m})^\frac{1}{2} $
$K = \frac{\int_{-\infty}^{\infty} \frac{mv^2}{2}Ce^{-\frac{mv^2}{2k_B T}} dv}{\int_{-\infty}^{\infty} Ce^{-\frac{mv^2}{2k_B T}} dv}
=\frac{k_B T}{2}$
K will increase when temperature increases (linear relationship), but is unrelated to the particle mass.
Hint | Python Code:
import numpy as np
from scipy import linalg #contains certain operators you may need for class
import matplotlib.pyplot as plt #contains everything you need to create plots
import sympy as sy
from scipy.integrate import quad
Explanation: Chem 30324, Spring 2020, Homework 1
Due on January 22, 2020
Problem 1: Discrete, probably
In five card study, a poker player is dealt five cards from a standard deck of 52 cards.
End of explanation
import math
total=math.factorial(52)/math.factorial(52-5)/math.factorial(5)
print('Different 5-card hands =\t',total) # Pick 5 cards from 52 cards 5C52
Explanation: 1. How many different 5-card hands are there? (Remember, in poker the order in which the cards are received does not matter.)
End of explanation
print('The probability of being dealt four of a kind =\t',round(13*(52-4)/total,9))
# First pick a kind (1C13), then one card from the remaining 48 cards (1C48)
# round() returns x rounded to n digits from the decimal point
Explanation: 2. What is the probability of being dealt four of a kind (a card of the same rank from each suit)?
End of explanation
print('The probability of being dealt a flush =\t',round(4*math.factorial(13)/math.factorial(13-5)/math.factorial(5)/total,9))
#4 suites * 5C13 (Pick 5 cards from 13 cards)
Explanation: 3. What is the probability of being dealt a flush (five cards of the same suit)?
End of explanation
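As a hand check on the three cells above: $\binom{52}{5} = 2{,}598{,}960$ hands, four of a kind occurs with probability $\frac{13\cdot48}{\binom{52}{5}} \approx 2.40\times10^{-4}$, and a flush (including straight flushes, as in the code) with probability $\frac{4\binom{13}{5}}{\binom{52}{5}} = \frac{5148}{2598960} \approx 1.98\times10^{-3}$.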
# First define a function that you want to integrate
def integrand(x):
return x*math.exp(-2*x) #Return Probability distribution
I = quad(integrand,0,np.inf)
print(I)
# I has two values, the first value is the estimation of the integration, the second value is the upper bound on the error.
# Notice that the upper bound on the error is extremely small, this is a good estimation.
X = np.linspace(0,10,1000)
Y=[]
for i in range(np.size(X)):
x=X[i]
y=integrand(x)/I[0]
Y.append(y)
plt.plot(X,Y)
plt.xlabel('x');
plt.ylabel('Normalized P(x)');
plt.title('Normalized P(x)')
plt.show()
Explanation: Problem 2: Continuous, probably
The probability distribution function for a random variable $x$ is given by
$P(x)=x e^{-2x}, 0\le x < \infty$.
1. Is $P(x)$ normalized? If not, normalize it. Plot the normalized $P(x)$.
End of explanation
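As a hand check on the numerics: $\int_0^\infty xe^{-2x}dx = \tfrac14$, so $P(x)$ is not normalized and the normalized density is $4xe^{-2x}$; its maximum is at $x=\tfrac12$, $\langle x\rangle = 1$, and the variance is $\langle x^2\rangle - \langle x\rangle^2 = \tfrac32 - 1 = \tfrac12$, consistent with the values computed in the following cells.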
X[np.argmax(Y)]
Explanation: 2. What is the most probable value of $x$?
End of explanation
def integrand1(x):
return x*integrand(x)/I[0] # Return nomalized probability distribution * v
I1 = quad(integrand1,0,np.inf)
print(I1)
Explanation: 3. What is the expectation value of $x$?
End of explanation
def integrand2(x):
return x*x*integrand(x)/I[0] # Return nomalized probability distribution * v^2
I2 = quad(integrand2,0,np.inf)
var=I2[0]-I1[0]**2 # Variance can be calculated as <P(x)^2>-<P(x)>^2
print(var)
Explanation: 4. What is the variance of $x$?
End of explanation
n=20
print('The furthest distance the person could travel after 20 steps = \t',n)
Explanation: Problem 3: One rough night
It's late on a Friday night and people are stumbling up Notre Dame Ave. to their dorms. You observe one particularly impaired individual who is taking steps of equal length 1m to the north or south (i.e., in one dimension), with equal probability.
1. What is the furthest distance the person could travel after 20 steps?
End of explanation
print('The probability that the person will not have traveled any net distance at all after 20 steps = \t',math.factorial(20)/math.factorial(10)/math.factorial(10)/2**20,)
# Going nowhere - 10 steps south + 10 steps north 10C20
# Total 2^20
Explanation: 2. What is the probability that the person won't have traveled any net distance at all after 20 steps?
End of explanation
print('The probability that the person has traveled half the maximum distance after 20 steps = \t',2*math.factorial(20)/math.factorial(5)/math.factorial(15)/2**20,)
# Going half the maximum distance - 15 steps south + 5 steps north or 15 steps north + 5 steps south
# Total 2^20
Explanation: 3. What is the probability that the person has traveled half the maximum distance after 20 steps?
End of explanation
X=[]
Y=[]
for x in range(21): # x steps going south
y=math.factorial(20)/math.factorial(x)/math.factorial(20-x)/2**20 # Pick x steps from 20 steps xC20 / Total
X.append(x-(20-x))
# X means the final distance: the steps going south - the steps going north. If x=0, X=-20 ... If x=20, X=20. Positive means south
Y.append(y)
plt.bar(X,Y)
plt.xlabel('distance');
plt.ylabel('P(x)');
plt.title('Probability distribution')
plt.show()
Explanation: 4. Plot the probability of traveling a given distance vs distance. Does the probability distribution look familiar? You'll see it again when we talk about diffusion.
End of explanation
from sympy import *
v = Symbol('x')
C=Symbol('C',positive=True)
m=Symbol('m',positive=True)
kB=Symbol('kB',positive=True)
T=Symbol('T',positive=True)
#Next create a function
function = integrate(C*exp(-m*v**2/2/kB/T),(v,-oo,+oo)) #Denominator
function2 = integrate(v*C*exp(-m*v**2/2/kB/T)/function,(v,-oo,+oo)) #Numerator
print('The expectation value of velocity is',function2/function)
Explanation: Problem 4: Now this is what I call equilibrium
The Boltzmann distribution tells us that, at thermal equilibrium, the probability of a particle having an energy $E$ is proportional to $\exp(-E/k_\text{B}T)$, where $k_\text{B}$ is the Boltzmann constant. Suppose a bunch of gas particles of mass $m$ are in thermal equilibrium at temperature $T$ and are traveling back and forth in one dimension with various velocities $v$ and kinetic energies $K=mv^2/2$.
1. What is the expectation value of the velocity $v$ of a particle?
$P = Ce^{-\frac{mv^2}{2k_B T}},-\infty\le v < \infty$.
$\int_{-\infty} ^{\infty} vCe^{-\frac{mv^2}{2k_B T}} dv = 0$
$V = \frac{\int_{-\infty}^{\infty} vCe^{-\frac{mv^2}{2k_B T}} dv}{\int_{-\infty}^{\infty} Ce^{-\frac{mv^2}{2k_B T}} dv}
=0$
End of explanation
function3 = integrate(m*v**2/2*C*exp(-m*v**2/2/kB/T)/function,(v,-oo,+oo)) #Numerator
print('The expectation value of kinetic energy K is',function3)
Explanation: 2. What is the expectation value of the kinetic energy $K$ of a particle? How does your answer depend on the particle mass? On temperature?
$\int_{-\infty}^{\infty} Ce^{-\frac{mv^2}{2k_B T}} dv = C(\frac{2k_B T\pi}{m})^\frac{1}{2} $
$\int_{-\infty}^{\infty} \frac{mv^2}{2}Ce^{-\frac{mv^2}{2k_B T}} dv = \frac{Ck_B T}{2}(\frac{2k_B T\pi}{m})^\frac{1}{2} $
$K = \frac{\int_{-\infty}^{\infty} \frac{mv^2}{2}Ce^{-\frac{mv^2}{2k_B T}} dv}{\int_{-\infty}^{\infty} Ce^{-\frac{mv^2}{2k_B T}} dv}
=\frac{k_B T}{2}$
K will increase when temperature increases (linear relationship), but is unrelated to the particle mass.
Hint:
$\int_{0}^{\infty} e^{-ax^2} dx = \frac{1}{2}(\frac{\pi}{a})^\frac{1}{2} $
$\int_{0}^{\infty} xe^{-ax^2} dx = \frac{1}{2a}$
$\int_{0}^{\infty} x^2e^{-ax^2} dx = \frac{1}{4a}(\frac{\pi}{a})^\frac{1}{2} $
End of explanation |
1,307 | Given the following text description, write Python code to implement the functionality described.
Description:
Rotate a Linked List
Linked list node ; This function rotates a linked list counter-clockwise and updates the head. The function assumes that k is smaller than the size of the linked list. ; Let us understand the below code for example k = 4 and list = 10->20->30->40->50->60 ; Traverse till the end ; Traverse the linked list to the (k-1)-th position, which will be the last element of the rotated list ; Update head_ref and set the last element's pointer to None ; Function to push a node ; Allocate node ; Put in the data ; Link the old list off the new node ; Move the head to point to the new node ; Function to print the linked list ; Driver code ; Start with the empty list ; Create a list 10->20->30->40->50->60
| Python Code:
class Node:
    def __init__(self):
        self.data = 0
        self.next = None

def rotate(head_ref, k):
    if k == 0:
        return head_ref
    # Make the list circular, then walk to the (k-1)-th node, which becomes the new tail.
    current = head_ref
    while current.next is not None:
        current = current.next
    current.next = head_ref
    current = head_ref
    for i in range(k - 1):
        current = current.next
    head_ref = current.next   # the k-th node becomes the new head
    current.next = None       # break the circle
    return head_ref

def push(head_ref, new_data):
    new_node = Node()
    new_node.data = new_data
    new_node.next = head_ref
    head_ref = new_node
    return head_ref

def printList(node):
    while node is not None:
        print(node.data, end=' ')
        node = node.next
    print()

if __name__ == '__main__':
    head = None
    for i in range(60, 0, -10):
        head = push(head, i)
    print("Given linked list")
    printList(head)
    head = rotate(head, 4)
    print("Rotated linked list")
    printList(head)
|
1,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hello, LSTM!
In this project we'd like to explore the basic usage of LSTM (Long Short-Term Memory) which is a flavor of RNN (Recurrent Neural Network).
A nice theoretical tutorial is Understanding LSTM Networks.
Keras docs
Step1: Basic problems
Prediction of the next value of sequence
sequence of (110)+
Just a repeated pattern
Step2: Basic usage of LTSM layers in Keras
Notes
Step3: LSTM weight meanings | Python Code:
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
mpl.rc('image', interpolation='nearest', cmap='gray')
mpl.rc('figure', figsize=(20,10))
Explanation: Hello, LSTM!
In this project we'd like to explore the basic usage of LSTM (Long Short-Term Memory) which is a flavor of RNN (Recurrent Neural Network).
A nice theoretical tutorial is Understanding LSTM Networks.
Keras docs: http://keras.io/layers/recurrent/
Keras examples: https://github.com/fchollet/keras/tree/master/examples
https://github.com/fchollet/keras/blob/master/examples/imdb_bidirectional_lstm.py
https://github.com/fchollet/keras/blob/master/examples/imdb_cnn_lstm.py
https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py
https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py
The goals
Define the problem that LSTM can solve.
Show a basic working example of LSTM usage in Keras.
Try to learn some basic patterns in simple sequences.
Setup
Install keras, tensorflow and the basic ML/Data Science libs (numpy/matplotlib/etc.).
Set TensorFlow as the keras backend in ~/.keras/keras.json:
json
{"epsilon": 1e-07, "floatx": "float32", "backend": "tensorflow"}
End of explanation
X = np.array([[[1],[1],[0]], [[1],[0],[1]], [[0],[1],[1]]])
y = np.array([[1], [1], [0]])
# X = np.array([[[1],[0],[0]], [[0],[1],[0]], [[0],[0],[1]]])
# y = np.array([[1], [0], [0]])
# input: 3 samples of 3-step sequences with 1 feature
# input: 3 samples with 1 feature
X.shape, y.shape
Explanation: Basic problems
Prediction of the next value of sequence
sequence of (110)+
Just a repeated pattern:
110110110110110...
Classification of sequences
The inputs/outputs must be tensors of shape (samples, time_steps, features).
In this case (1, len(X), 1).
For simplicity we have a single training example and no test test.
Predict one step ahead:
(A, B, C, [D, E]) -> D
End of explanation
from keras.models import Sequential
from keras.layers.core import Dense, Activation, TimeDistributedDense
from keras.layers.recurrent import LSTM
# model = Sequential()
# # return_sequences=False
# model.add(LSTM(output_dim=1, input_shape=(3, 1)))
# # since the LSTM layer has only one output after activation we can directly use as model output
# model.add(Activation('sigmoid'))
# model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')
# This model is probably too easy and it is not able to overfit on the training dataset.
# For LSTM output dim 3 it works ok (after a few hundred epochs).
model = Sequential()
model.add(LSTM(output_dim=3, input_shape=(3, 1)))
# Since the LSTM layer has multiple outputs and model has single one
# we need to add another Dense layer with single output.
# In case the LSTM would return sequences we would use TimeDistributedDense layer.
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')
model.count_params()
model.fit(X, y, nb_epoch=500, show_accuracy=True)
plt.plot(model.predict_proba(X).flatten(), 'rx')
plt.plot(model.predict_classes(X).flatten(), 'ro')
plt.plot(y.flatten(), 'g.')
plt.xlim(-0.1, 2.1)
plt.ylim(-0.1, 1.1)
model.predict_proba(X)
model.predict_classes(X)
# del model
Explanation: Basic usage of LSTM layers in Keras
Notes:
the first layer must specify the input shape
TensorFlow needs explicit length of series, so input_shape or batch_input_shape must be used, not just input_dim
when specifying batch_input_shape in LSTM, we need to explicitly add batch_size to model.fit()
End of explanation
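A hedged sketch of the batch_input_shape variant mentioned in the last note (illustrative only; the batch size of 1 is an assumption, and the same fixed batch size then has to be passed to model.fit):
batch_model = Sequential()
batch_model.add(LSTM(output_dim=3, batch_input_shape=(1, 3, 1)))
batch_model.add(Dense(1))
batch_model.add(Activation('sigmoid'))
batch_model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')
# batch_model.fit(X, y, batch_size=1, nb_epoch=10)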
weight_names = ['W_i', 'U_i', 'b_i',
'W_c', 'U_c', 'b_c',
'W_f', 'U_f', 'b_f',
'W_o', 'U_o', 'b_o']
weight_shapes = [w.shape for w in model.get_weights()]
# for n, w in zip(weight_names, weight_shapes):
# print(n, ':', w)
print(weight_shapes)
def pad_vector_shape(s):
return (s[0], 1) if len(s) == 1 else s
all_shapes = np.array([pad_vector_shape(s) for s in weight_shapes])
all_shapes
for w in model.get_weights():
print(w)
all_weights = np.zeros((all_shapes[:,0].sum(axis=0), all_shapes[:,1].max(axis=0)))
def add_weights(src, target):
target[0] = src[0]
target[1:4] = src[1]
target[4:7,0] = src[2]
for i in range(4):
add_weights(model.get_weights()[i*3:(i+1)*3], all_weights[i*7:(i+1)*7])
all_weights[28:31,0] = model.get_weights()[12].T
all_weights[31,0] = model.get_weights()[13]
plt.imshow(all_weights.T)
from matplotlib.patches import Rectangle
ax = plt.gca()
ax.add_patch(Rectangle([-.4, -0.4], 28-0.2, 3-0.2, fc='none', ec='r', lw=2, alpha=0.75))
ax.add_patch(Rectangle([28 - .4, -0.4], 3-0.2, 3-0.2, fc='none', ec='g', lw=2, alpha=0.75))
ax.add_patch(Rectangle([31 - .4, -0.4], 1-0.2, 3-0.2, fc='none', ec='b', lw=2, alpha=0.75))
plt.savefig('weights_110.png')
Explanation: LSTM weight meanings:
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
source code LSTM in recurrent.py
[W_i, U_i, b_i,
W_c, U_c, b_c,
W_f, U_f, b_f,
W_o, U_o, b_o]
Type of weights:
- W - weight matrix - from input to output
- b - bias vector - from input to output
- U - weight matrix - from hidden to output (it has no companion biases)
Usage of weights:
- i - input - to control whether to modify the cell state
- c - candidate - a new value of cell state
- f - forget - to remove the previous cell state
- o - output - to control whether to output something
Inputs and outputs of a LSTM unit:
- value, cell state, hidden state
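As a quick sanity check on these shapes (an added aside, not from the original notebook): an LSTM layer has 4 * (input_dim * units + units**2 + units) parameters, one W, U, b triple for each of the i, c, f, o parts, so the counts above can be reproduced by hand:
lstm_params = 4 * (1 * 3 + 3 * 3 + 3)  # 60 for input_dim=1, units=3
dense_params = 3 * 1 + 1               # 4 for the final Dense(1)
lstm_params + dense_params             # 64, which should agree with model.count_params()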
End of explanation |
1,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Serializing STIX Objects
The string representation of all STIX classes is a valid STIX JSON object.
Step1: New in 3.0.0
Step2: If you need performance but also need human-readable output, you can pass the indent keyword argument to serialize() | Python Code:
from stix2 import Indicator
indicator = Indicator(name="File hash for malware variant",
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
print(indicator.serialize(pretty=True))
Explanation: Serializing STIX Objects
The string representation of all STIX classes is a valid STIX JSON object.
End of explanation
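Since the serialized form is plain JSON, it can be parsed back with the standard json module (a small added check, not part of the original notebook):
import json
parsed = json.loads(indicator.serialize())
parsed["type"]  # 'indicator'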
print(indicator.serialize())
Explanation: New in 3.0.0:
Calling str() on a STIX object will call serialize() without any formatting options. The change was made to address the performance penalty induced by unknowingly calling with the pretty formatted option. As shown above, to get the same effect as str() had in past versions of the library, use the method directly and pass in the pretty argument serialize(pretty=True).
However, the pretty formatted string representation can be slow, as it sorts properties to be in a more readable order. If you need performance and don't care about the human-readability of the output, use the object's serialize() function to pass in any arguments json.dump() would understand:
End of explanation
print(indicator.serialize(indent=4))
Explanation: If you need performance but also need human-readable output, you can pass the indent keyword argument to serialize():
End of explanation |
1,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example Credit Data - Exercise - Classification
Step1: https
Step2: Why should you run <b>%matplotlib inline</b>? Do some research!<br>
Insert<br>
<b>%matplotlib inline</b> <br>
into the next cell and run it
Step3: Why should warnings be imported? <br>
https
Step4: To read a directory we need the os library <br>
with <br>
import os <br>
the library is loaded <br>
os.listdir("./data") then reads out the data directory <br>
The data directory must be a subfolder of the current directory
e.g.
Step5: <h1>Step
Step6: <h1>Step
Step7: Enter the following code into the next cell
Step8: Enter the following code into the next cell
Step9: For practice, apply some of the exercises from the pandas notebook /grundlagen to this dataframe stored in the variable training. | Python Code:
# insert your code here and run it
Explanation: Example Credit Data - Exercise - Classification:
AI, Machine Learning & Data Science
Author list: Ramon Rank
The classification of the credit data carried out in KNIME is to be reproduced with Python.
The cells for the classification itself are already prepared.
You have to work out the preparatory steps, i.e. carry out the preprocessing. <br>
To do this, load the data credit_data.csv from github into a directory /data or directly into the working directory <br>
<h1>Step: Load libraries - Prepare plotting - Suppress warnings - List the contents of a directory</h1>
As preparation we need a few libraries, which we load under alias names:<br>
<b>import numpy as np <br>
import pandas as pd </b><br>
<b>import seaborn as sns <br>
from matplotlib import pyplot as plt </b><br>
Insert the code above into the next cell.<br>
End of explanation
# insert your code here and run it
Explanation: https://seaborn.pydata.org/tutorial/aesthetics.html <br>
There are five preset Seaborn themes: darkgrid, whitegrid, dark, white, and ticks <br>
Insert the code <br>
<b>sns.set_style("whitegrid") </b><br>into the following cell
End of explanation
# insert your code here and run it
Explanation: Why should you run <b>%matplotlib inline</b>? Do some research!<br>
Insert<br>
<b>%matplotlib inline</b> <br>
into the next cell and run it
End of explanation
# insert your code here and run it
Explanation: Why should warnings be imported? <br>
https://docs.python.org/3/library/warnings.html <br>
Insert the code:<br>
<b>import warnings <br>
warnings.filterwarnings("ignore") </b><br>
into the next cell.
End of explanation
# insert your code here and run it
import os
print(os.listdir("./data"))
Explanation: To read a directory we need the os library <br>
with <br>
import os <br>
the library is loaded <br>
os.listdir("./data") then reads out the data directory <br>
The data directory must be a subfolder of the current directory
e.g.:<br>
/aktuell <br>
/aktuell/data <br>
Enter the code:<br>
<b>import os <br>
print(os.listdir("./data")) </b><br>
into the next cell. </b>
End of explanation
# insert your code here and run it
# If you did not get an error message, the file was loaded successfully
# the data was then loaded into the variable training as a pandas DataFrame
Explanation: <h1>Step: Reading the data as a pandas DataFrame</h1>
The data is located in the directory /data <br>
The command pd.read_csv("") is used to read csv data<br>
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html <br>
If the data is to be read from the same directory, then: <br>
pd.read_csv("credit_data.csv") <br>
otherwise the relative path has to be given:
pd.read_csv("./data/credit_data.csv") <br>
Note for Windows: backslashes (\) may appear in paths instead of /.
Enter the code:<br>
<b>training = pd.read_csv("./data/credit_data.csv") </b><br>
into the next cell and run the cell </b>
End of explanation
# insert your code here and run it
Explanation: <h1>Step: Exploring the data</h1>
First of all you should take a look at the data <br>
shape, head(), tail(), describe() are good ways to get an overview <br>
Enter the code:<br>
<b>training.shape</b><br>
into the next cell and run the cell </b>
End of explanation
# insert your code here and run it
Explanation: Enter the code:<br>
<b>training.head()</b><br>
into the next cell and run the cell </b>
End of explanation
# insert your code here and run it
# Do you see the difference to training.head() ? <br>
# Simply pass a number to training.head() or training.tail(), e.g. training.tail(25)
Explanation: Enter the code:<br>
<b>training.tail()</b><br>
into the next cell and run the cell </b>
End of explanation
# Save the data from the column CLAGE of the DataFrame training in the variable spalte1
# Further exercises
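# One possible solution sketch (added for illustration, assuming the column really is named "CLAGE"):
# spalte1 = training["CLAGE"]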
Explanation: For practice, apply some of the exercises from the pandas notebook /grundlagen to this dataframe stored in the variable training.
End of explanation |
1,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First BERT Experiments
In this notebook we do some first experiments with BERT
Step1: Data
We use the same data as for all our previous experiments. Here we load the training, development and test data for a particular prompt.
Step3: Next, we build the label vocabulary, which maps every label in the training data to an index.
Step4: Model
We load the pretrained model and put it on a GPU if one is available. We also put the model in "training" mode, so that we can correctly update its internal parameters on the basis of our data sets.
Step7: Preprocessing
We preprocess the data by turning every example into an InputFeatures item. This item has all the attributes we need for finetuning BERT
Step8: Next, we initialize data loaders for each of our data sets. These data loaders present the data for training (for example, by grouping them into batches).
Step9: Evaluation
Our evaluation method takes a pretrained model and a dataloader. It has the model predict the labels for the items in the data loader, and returns the loss, the correct labels, and the predicted labels.
Step10: Training
Let's prepare the training. We set the training parameters and choose an optimizer and learning rate scheduler.
Step11: Now we do the actual training. In each epoch, we present the model with all training data and compute the loss on the training set and the development set. We save the model whenever the development loss improves. We end training when we haven't seen an improvement of the development loss for a specific number of epochs (the patience).
Optionally, we use gradient accumulation to accumulate the gradient for several training steps. This is useful when we want to use a larger batch size than our current GPU allows us to do.
Step12: Results
We load the pretrained model, set it to evaluation mode and compute its performance on the training, development and test set. We print out an evaluation report for the test set.
Note that different runs will give slightly different results. | Python Code:
import torch
from pytorch_transformers.tokenization_bert import BertTokenizer
from pytorch_transformers.modeling_bert import BertForSequenceClassification
BERT_MODEL = 'bert-base-uncased'
BATCH_SIZE = 16 if "base" in BERT_MODEL else 2
GRADIENT_ACCUMULATION_STEPS = 1 if "base" in BERT_MODEL else 8
tokenizer = BertTokenizer.from_pretrained(BERT_MODEL)
Explanation: First BERT Experiments
In this notebook we do some first experiments with BERT: we finetune a BERT model+classifier on each of our datasets separately and compute the accuracy of the resulting classifier on the test data.
For these experiments we use the pytorch_transformers package. It contains a variety of neural network architectures for transfer learning and pretrained models, including BERT and XLNET.
Two different BERT models are relevant for our experiments:
BERT-base-uncased: a relatively small BERT model that should already give reasonable results,
BERT-large-uncased: a larger model for real state-of-the-art results.
End of explanation
import ndjson
import glob
prefix = "junkfood_but"
train_file = f"../data/interim/{prefix}_train_withprompt_diverse200.ndjson"
synth_files = glob.glob(f"../data/interim/{prefix}_train_withprompt_*.ndjson")
dev_file = f"../data/interim/{prefix}_dev_withprompt.ndjson"
test_file = f"../data/interim/{prefix}_test_withprompt.ndjson"
with open(train_file) as i:
train_data = ndjson.load(i)
synth_data = []
for f in synth_files:
if "allsynth" in f:
continue
with open(f) as i:
synth_data += ndjson.load(i)
with open(dev_file) as i:
dev_data = ndjson.load(i)
with open(test_file) as i:
test_data = ndjson.load(i)
Explanation: Data
We use the same data as for all our previous experiments. Here we load the training, development and test data for a particular prompt.
End of explanation
label2idx = {}
idx2label = {}
target_names = []
for item in train_data:
if item["label"] not in label2idx:
target_names.append(item["label"])
idx = len(label2idx)
label2idx[item["label"]] = idx
idx2label[idx] = item["label"]
print(label2idx)
print(idx2label)
import random
def sample(train_data, synth_data, label2idx, number):
"""Sample a fixed number of items from every label from
the training data and test data."""
new_train_data = []
for label in label2idx:
data_for_label = [i for i in train_data if i["label"] == label]
# If there is more training data than the required number,
# take a random sample of n examples from the training data.
if len(data_for_label) >= number:
random.shuffle(data_for_label)
new_train_data += data_for_label[:number]
# If there is less training data than the required number,
# combine training data with synthetic data.
elif len(data_for_label) < number:
# Automatically add all training data
new_train_data += data_for_label
# Compute the required number of additional data
rest = number-len(data_for_label)
# Collect the synthetic data for the label
synth_data_for_label = [i for i in synth_data if i["label"] == label]
# If there is more synthetic data than required,
# take a random sample from the synthetic data.
if len(synth_data_for_label) > rest:
random.shuffle(synth_data_for_label)
new_train_data += synth_data_for_label[:rest]
# If there is less synthetic data than required,
# sample with replacement from this data until we have
# the required number.
else:
new_train_data += random.choices(synth_data_for_label, k=rest)
return new_train_data
def random_sample(train_data, train_size):
random.shuffle(train_data)
train_data = train_data[:train_size]
return train_data
#train_data = train_data + synth_data
#train_data = sample(train_data, synth_data, label2idx, 200)
#train_data = random_sample(train_data, 200)
print("Train data size:", len(train_data))
Explanation: Next, we build the label vocabulary, which maps every label in the training data to an index.
End of explanation
model = BertForSequenceClassification.from_pretrained(BERT_MODEL, num_labels=len(label2idx))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train()
Explanation: Model
We load the pretrained model and put it on a GPU if one is available. We also put the model in "training" mode, so that we can correctly update its internal parameters on the basis of our data sets.
End of explanation
import logging
import numpy as np
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
MAX_SEQ_LENGTH=100
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, input_ids, input_mask, segment_ids, label_id):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
def convert_examples_to_features(examples, label2idx, max_seq_length, tokenizer, verbose=0):
"""Loads a data file into a list of `InputBatch`s."""
features = []
for (ex_index, ex) in enumerate(examples):
# TODO: should deal better with sentences > max tok length
input_ids = tokenizer.encode("[CLS] " + ex["text"] + " [SEP]")
segment_ids = [0] * len(input_ids)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
padding = [0] * (max_seq_length - len(input_ids))
input_ids += padding
input_mask += padding
segment_ids += padding
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
label_id = label2idx[ex["label"]]
if verbose and ex_index == 0:
logger.info("*** Example ***")
logger.info("text: %s" % ex["text"])
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
logger.info("label:" + str(ex["label"]) + " id: " + str(label_id))
features.append(
InputFeatures(input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
label_id=label_id))
return features
train_features = convert_examples_to_features(train_data, label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=0)
dev_features = convert_examples_to_features(dev_data, label2idx, MAX_SEQ_LENGTH, tokenizer)
test_features = convert_examples_to_features(test_data, label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=1)
Explanation: Preprocessing
We preprocess the data by turning every example into an InputFeatures item. This item has all the attributes we need for finetuning BERT:
input ids: the ids of the tokens in the text
input mask: tells BERT what part of the input it should not look at (such as padding tokens)
segment ids: tells BERT what segment every token belongs to. BERT can take two different segments as input
label id: the id of this item's label
End of explanation
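The TODO inside convert_examples_to_features above is never resolved: a text longer than MAX_SEQ_LENGTH would make the padding list negative in length and trip the assertions. One hedged way to guard against this (an illustration, not the original authors' fix) is a small helper that clips the encoded ids while keeping the trailing [SEP] id, called right after tokenizer.encode(...):
def truncate_input_ids(input_ids, max_seq_length):
    # Keep at most max_seq_length ids; if we have to cut, preserve the final [SEP] id.
    if len(input_ids) <= max_seq_length:
        return input_ids
    return input_ids[:max_seq_length - 1] + [input_ids[-1]]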
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler
def get_data_loader(features, max_seq_length, batch_size, shuffle=True):
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
dataloader = DataLoader(data, shuffle=shuffle, batch_size=batch_size)
return dataloader
train_dataloader = get_data_loader(train_features, MAX_SEQ_LENGTH, BATCH_SIZE)
dev_dataloader = get_data_loader(dev_features, MAX_SEQ_LENGTH, BATCH_SIZE)
test_dataloader = get_data_loader(test_features, MAX_SEQ_LENGTH, BATCH_SIZE, shuffle=False)
Explanation: Next, we initialize data loaders for each of our data sets. These data loaders present the data for training (for example, by grouping them into batches).
End of explanation
def evaluate(model, dataloader, verbose=False):
eval_loss = 0
nb_eval_steps = 0
predicted_labels, correct_labels = [], []
for step, batch in enumerate(tqdm(dataloader, desc="Evaluation iteration")):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, label_ids = batch
with torch.no_grad():
tmp_eval_loss, logits = model(input_ids, segment_ids, input_mask, label_ids)
outputs = np.argmax(logits.to('cpu'), axis=1)
label_ids = label_ids.to('cpu').numpy()
predicted_labels += list(outputs)
correct_labels += list(label_ids)
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
correct_labels = np.array(correct_labels)
predicted_labels = np.array(predicted_labels)
return eval_loss, correct_labels, predicted_labels
Explanation: Evaluation
Our evaluation method takes a pretrained model and a dataloader. It has the model predict the labels for the items in the data loader, and returns the loss, the correct labels, and the predicted labels.
End of explanation
from pytorch_transformers.optimization import AdamW, WarmupLinearSchedule
NUM_TRAIN_EPOCHS = 20
LEARNING_RATE = 1e-5
WARMUP_PROPORTION = 0.1
def warmup_linear(x, warmup=0.002):
if x < warmup:
return x/warmup
return 1.0 - x
num_train_steps = int(len(train_data) / BATCH_SIZE / GRADIENT_ACCUMULATION_STEPS * NUM_TRAIN_EPOCHS)
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=LEARNING_RATE, correct_bias=False)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=100, t_total=num_train_steps)
Explanation: Training
Let's prepare the training. We set the training parameters and choose an optimizer and learning rate scheduler.
End of explanation
import os
from tqdm import trange
from tqdm import tqdm_notebook as tqdm
from sklearn.metrics import classification_report, precision_recall_fscore_support
OUTPUT_DIR = "/tmp/"
MODEL_FILE_NAME = "pytorch_model.bin"
PATIENCE = 5
global_step = 0
model.train()
loss_history = []
best_epoch = 0
for epoch in trange(int(NUM_TRAIN_EPOCHS), desc="Epoch"):
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(tqdm(train_dataloader, desc="Training iteration")):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, label_ids = batch
outputs = model(input_ids, segment_ids, input_mask, label_ids)
loss = outputs[0]
if GRADIENT_ACCUMULATION_STEPS > 1:
loss = loss / GRADIENT_ACCUMULATION_STEPS
loss.backward()
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
if (step + 1) % GRADIENT_ACCUMULATION_STEPS == 0:
lr_this_step = LEARNING_RATE * warmup_linear(global_step/num_train_steps, WARMUP_PROPORTION)
for param_group in optimizer.param_groups:
param_group['lr'] = lr_this_step
optimizer.step()
optimizer.zero_grad()
global_step += 1
dev_loss, _, _ = evaluate(model, dev_dataloader)
print("Loss history:", loss_history)
print("Dev loss:", dev_loss)
if len(loss_history) == 0 or dev_loss < min(loss_history):
model_to_save = model.module if hasattr(model, 'module') else model
output_model_file = os.path.join(OUTPUT_DIR, MODEL_FILE_NAME)
torch.save(model_to_save.state_dict(), output_model_file)
best_epoch = epoch
if epoch-best_epoch >= PATIENCE:
print("No improvement on development set. Finish training.")
break
loss_history.append(dev_loss)
Explanation: Now we do the actual training. In each epoch, we present the model with all training data and compute the loss on the training set and the development set. We save the model whenever the development loss improves. We end training when we haven't seen an improvement of the development loss for a specific number of epochs (the patience).
Optionally, we use gradient accumulation to accumulate the gradient for several training steps. This is useful when we want to use a larger batch size than our current GPU allows us to do.
End of explanation
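For reference, gradient accumulation makes the effective batch size the product of the two settings defined earlier (a small added note, not from the original notebook):
effective_batch_size = BATCH_SIZE * GRADIENT_ACCUMULATION_STEPS
print("Effective batch size:", effective_batch_size)  # 16 * 1 for bert-base, 2 * 8 for bert-large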
print("Loading model from", output_model_file)
device="cpu"
model_state_dict = torch.load(output_model_file, map_location=lambda storage, loc: storage)
model = BertForSequenceClassification.from_pretrained(BERT_MODEL, state_dict=model_state_dict, num_labels=len(label2idx))
model.to(device)
model.eval()
#_, train_correct, train_predicted = evaluate(model, train_dataloader)
#_, dev_correct, dev_predicted = evaluate(model, dev_dataloader)
_, test_correct, test_predicted = evaluate(model, test_dataloader, verbose=True)
#print("Training performance:", precision_recall_fscore_support(train_correct, train_predicted, average="micro"))
#print("Development performance:", precision_recall_fscore_support(dev_correct, dev_predicted, average="micro"))
print("Test performance:", precision_recall_fscore_support(test_correct, test_predicted, average="micro"))
print(classification_report(test_correct, test_predicted, target_names=target_names))
c = 0
for item, predicted, correct in zip(test_data, test_predicted, test_correct):
assert item["label"] == idx2label[correct]
c += (item["label"] == idx2label[predicted])
print("{}#{}#{}".format(item["text"], idx2label[correct], idx2label[predicted]))
print(c)
print(c/len(test_data))
Explanation: Results
We load the pretrained model, set it to evaluation mode and compute its performance on the training, development and test set. We print out an evaluation report for the test set.
Note that different runs will give slightly different results.
End of explanation |
1,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First neural network
We will build a simple feed forward neural network with Keras. We will start with a two layer neural network for simplicity.
Import all necessary python packages
Step1: Load some data
The dataset in this experiment is a publicly available pulsar dataset from Rob Lyon's paper. It is in a simple ASCII format delimited by commas. There are 8 statistical features that represent different measures of the de-dispersed pulse profile of pulsar and non-pulsar candidates. The last column is a label column where '1' represents a pulsar and '0' represents a non-pulsar candidate.
Step2: Split the data into training and testing data
Step3: Show some info about the split
Step4: Construct the model
Step5: Print the model summary
This step makes sure that our model is correctly defined and there is no error in the model definition.
It will also show the sizes of each layer
Step6: Compile the model
This step defines the parameters for training
Step7: Train the model
In this step we will train the network and also define the number of epochs and batch size for training. | Python Code:
# For simple array operations
import numpy as np
# To construct the model
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD
# Some utility for splitting data and printing the classification report
from sklearn.cross_validation import train_test_split
from sklearn.metrics import classification_report
from sklearn.utils import shuffle
Explanation: First neural network
We will build a simple feed forward neural network with Keras. We will start with a two layer neural network for simplicity.
Import all necessary python packages
End of explanation
dataset = np.loadtxt('../Data/HTRU_2.csv',delimiter=',')
print 'The dataset has %d rows and %d features' %(dataset.shape[0],dataset.shape[1]-1)
# Split into features and labels
for i in range(0,10):
dataset = shuffle(dataset)
features = dataset[:,0:-1]
labels = dataset[:,-1]
Explanation: Load some data
The dataset in this experiment is a publicly available pulsar dataset from Rob Lyon's paper. It is in a simple ASCII format delimited by commas. There are 8 statistical features that represent different measures of the de-dispersed pulse profile of pulsar and non-pulsar candidates. The last column is a label column where '1' represents a pulsar and '0' represents a non-pulsar candidate.
End of explanation
traindata,testdata,trainlabels,testlabels = train_test_split(features,labels,test_size=0.3)
trainlabels = trainlabels.astype('int')
testlabels = testlabels.astype('int')
Explanation: Split the data into training and testing data
End of explanation
print 'Number of training samples : %d'%(traindata.shape[0])
print 'Number of test samples : %d'%(testdata.shape[0])
Explanation: Show some info about the split
End of explanation
model = Sequential() # Our model is a simple feedforward model
model.add(Dense(64,input_shape=(8,))) # The first layer takes the input, which in our case has 8 features.
model.add(Activation('relu')) # First activation layer is rectified linear unit (RELU)
model.add(Dense(256)) # Second layer has 256 neurons
model.add(Activation('relu')) # Second RELU activation
model.add(Dense(1)) # Third layer has 1 neuron because we have only one outcome - pulsar or non pulsar
model.add(Activation('sigmoid')) # The scoring layer squashes the output to a probability (softmax on a single unit would always output 1)
Explanation: Construct the model
End of explanation
model.summary()
Explanation: Print the model summary
This step makes sure that our model is correctly defined and there is no error in the model definition.
It will also show the sizes of each layer
End of explanation
model.compile(loss='binary_crossentropy', # Loss function for binary classification
optimizer=SGD(), # Optimizer for learning, in this case Stochastic Gradient Descent (SGD)
metrics=['accuracy']) # Evaluation function
Explanation: Compile the model
This step defines the parameters for training
End of explanation
batch_size = 100
n_epochs = 10
training = model.fit(traindata,trainlabels,
nb_epoch=n_epochs,
batch_size=batch_size,
validation_data=(testdata, testlabels),
verbose=1)
Explanation: Train the model
In this step we will train the network and also define the number of epochs and batch size for training.
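A possible follow-up evaluation sketch (added for illustration; it reuses the classification_report import from the first cell and assumes the Keras 1.x predict_classes API used by this notebook):
predictions = model.predict_classes(testdata, batch_size=batch_size, verbose=0).ravel()
print(classification_report(testlabels, predictions))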
End of explanation |
1,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
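A purely illustrative example of the call described above (the name and email are placeholders, not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")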
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
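A purely illustrative way to fill such an ENUM property with values from the listed choices (the chosen values are placeholders, not a statement about the SANDBOX-3 model; the spelling "troposhere" is kept exactly as it appears in the controlled vocabulary above):
# DOC.set_value("troposhere")
# DOC.set_value("stratosphere")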
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
1,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
http
Step1: Prova arrays
Step2: Riproduco cose fatte con numpy
Inizializzazioni matrici costanti
Step3: Inizializzazioni ranges e reshaping
Step4: Operazioni matriciali elementwise (somma, prodotto)
Step5: Altre operazioni matematiche
Step6: Tutte le operazioni matriciali
Step7: Prova map con passaggio di più variabili
Step8: Prova convolution
Step9: ## Prove raw to float e viceversa
Step10: Documentazione utile
Lezioni su TF
Parte su placeholders forse utile in hough
Doc ufficiale
Cose utili in doc uff
Guide ufficiali
Tensori costanti (generalizzazione di numpy zeros/ones))
Molto interessante, generalizzazione di matrice trasposta
Tensori sparsi
Fourier
Broadcasting IMPORTANTE
cose utili o importanti
https | Python Code:
#basic python
x = 35
y = x + 5
print(y)
#basic TF
#x = tf.random_uniform([1, 2], -1.0, 1.0)
x = tf.constant(35, name = 'x')
y = tf.Variable(x+5, name = 'y')
model = tf.global_variables_initializer()
sess = tf.Session()
sess.run(model)
print(sess.run(y))
#per scrivere il grafo
#writer = tf.summary.FileWriter("output", sess.graph)
print(sess.run(y))
#writer.close
a = tf.add(1, 2,)
b = tf.multiply(a, 3)
c = tf.add(4, 5,)
d = tf.multiply(c, 6,)
e = tf.multiply(4, 5,)
f = tf.div(c, 6,)
g = tf.add(b, d)
h = tf.multiply(g, f)
primo = tf.constant(3, name = 'primo')
secondo = tf.constant(5, name = 'secondo')
somma1 = primo + secondo
somma2 = tf.add(primo, secondo)
sess = tf.Session()
#writer = tf.summary.FileWriter("output", sess.graph)
print(sess.run(h))
%time print(sess.run(somma1))
%time print(sess.run(somma2))
#writer.close
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
Explanation: http://localhost:8888/notebooks/Documenti/TESI/thesis/codici/Prove%20TF.ipynb#Prova-cose-base
http://localhost:8888/notebooks/Documenti/TESI/thesis/codici/Prove%20TF.ipynb#Prova-arrays
http://localhost:8888/notebooks/Documenti/TESI/thesis/codici/Prove%20TF.ipynb#Riproduco-cose-fatte-con-numpy
http://localhost:8888/notebooks/Documenti/TESI/thesis/codici/Prove%20TF.ipynb#Prove-assegnazione-e-indexing
http://localhost:8888/notebooks/Documenti/TESI/thesis/codici/Prove%20TF.ipynb#Prova-convolution
Prova cose base
End of explanation
primo = tf.constant([[10,20,30], [100,200,300]], name = 'primo')
righe = tf.constant([1,2,3], name = 'secondo1')
colonne = tf.constant([[1],[2]], name = 'secondo2')
somma1 = primo + righe
somma2 = tf.add(primo, colonne)
sessione = tf.Session()
#writer = tf.summary.FileWriter("output", sess.graph)
print(sessione.run(somma1))
print(sessione.run(somma2))
print("dimensioni dei tre tensori")
print(primo.shape,
righe.shape,
colonne.shape)
print(primo)
# First, load the image
filename = "MarshOrchid.jpg"
img = tf.constant(image.imread(filename))
# Print out its shape
sessione = tf.Session()
numpimg = sessione.run(img)
pyplot.imshow(numpimg)
pyplot.show()
print(numpimg.size)
# immagine è resa come un array (non è chiaro se di numpy o di python)
alterazione = tf.constant([5,5,0], name='blur')
tensoreImg = tf.Variable(img+alterazione, name='x')
#print(tensoreImg)
model = tf.global_variables_initializer()
sess = tf.Session()
sess.run(model)
sess.run(tensoreImg)
img = tensoreImg.eval(session = sess)
img = img.astype(float)
#print(img)
pyplot.imshow(img)
#pyplot.show()
Explanation: Prova arrays
End of explanation
# non ho ben capito cosa è Variable
#unitensor = tf.Variable(tf.ones((10,10)))
unitensor = tf.Variable(tf.ones((10,10)))
unitensor2 = tf.ones((10,10))
unitensor3 = tf.constant(1, shape=(10,10))
tritensor = tf.constant(3, shape=(10,10))
tritensor2 = tf.Variable(unitensor3*3)
init = tf.global_variables_initializer()
sessione = tf.Session()
sessione.run(init)
#print(sessione.run(unitensor))
#print(sessione.run(unitensor2))
print(sessione.run(unitensor3))
print(sessione.run(tritensor))
print(sessione.run(tritensor2))
Explanation: Riproduco cose fatte con numpy
Inizializzazioni matrici costanti
End of explanation
rangetensor = tf.range(0, limit = 9, delta = 1)
rangeMatrTensor = tf.reshape(rangetensor, (3,3))
#transposeTensor = tf.transpose(rangetensor, perm=[1])
reshapeTensor = tf.reshape(rangetensor,(9,1))
init = tf.global_variables_initializer()
sessione = tf.Session()
sessione.run(init)
print(sessione.run(rangetensor))
print(sessione.run(rangeMatrTensor))
#print(sessione.run(transposeTensor))
print(sessione.run(reshapeTensor))
print(sessione.run(tf.ones(10, dtype = tf.int32)))
Explanation: Inizializzazioni ranges e reshaping
End of explanation
#tf.add #addizioni
#tf.subtract #sottrazioni
#tf.multiply #moltiplizazioni
#tf.scalar_mul(scalar, tensor) #aggiunge uno scalare a tutto il tensore
#tf.div #divisioni WARNING FORSE tf.divide È DIVERSO!
#tf.truediv #divisioni restituendo sempre float
unitensor = tf.ones((3,3))
duitensor = tf.constant(2.0, shape=(3,3))
sommatensor1 = tf.add(unitensor,duitensor)
sommatensor2 = tf.Variable(unitensor+duitensor)
init = tf.global_variables_initializer()
sessione = tf.Session()
sessione.run(init)
print(sessione.run(sommatensor1))
print(sessione.run(sommatensor2))
rangetensor = tf.range(0.0, limit = 9, delta = 1)
rangetensor = tf.reshape(rangetensor, (3,3))
prodottotensor1 = tf.multiply(rangetensor,duitensor)
# con variabile, non è esattamente la stessa cosa
prodottotensor2 = tf.Variable(rangetensor*duitensor)
init = tf.global_variables_initializer()
sessione = tf.Session()
sessione.run(init)
print(sessione.run(prodottotensor1))
print(sessione.run(prodottotensor2))
# le operazioni + e * lavorano elementwise come numpy, ma * può lavorare come prodotto scalare
Explanation: Operazioni matriciali elementwise (somma, prodotto)
End of explanation
#faccio prodotto vettoriale tra due vettori per ottenere matrice 2d, e poi faccio hack per ottenere matrice 3d
prodotto1 = tf.Variable(rangetensor*reshapeTensor)
#prodotto1 = tf.reshape(prodotto1, (81,1))
prodotto1bis = tf.multiply(rangetensor,reshapeTensor)
prodotto2 = tf.Variable(reshapeTensor*rangetensor)
prodotto2bis = tf.multiply(reshapeTensor, rangetensor)
#prodotto3d = tf.multiply(rangetensor,prodotto1) #che output dà questo comando?
#prodotto3d = tf.multiply(prodotto1, reshapeTensor) #è commutativo e dà stesso risultato sia che vettore sia
#verticale che orizzontale!
#prodotto3d = tf.multiply(rangetensor, prodotto1)
#prodotto3d = tf.reshape(prodotto3d, (9,9,9))
init = tf.global_variables_initializer()
sessione = tf.Session()
sessione.run(init)
def outer3d(vettore, matrice):
shape = tf.shape(matrice)
matrice = tf.reshape(matrice, (tf.size(matrice),1))
prodotto3d = tf.multiply(vettore, matrice)
return tf.reshape(prodotto3d, (shape[0],shape[1],tf.size(vettore)))
prodottoFunzione = outer3d(rangetensor,prodotto1)
#print(sessione.run(prodotto3d))
print(sessione.run(prodottoFunzione))
#prodotti matriciali
unitensor = tf.ones((3,3))
rangetensor = tf.range(0.0, limit = 9)
rangetensor = tf.reshape(rangetensor, (3,3))
tensorMatrProd = tf.matmul(rangetensor, rangetensor)
tensorProd = tf.tensordot(rangetensor,rangetensor, 1)
# sono equivalenti, se si fa il tensordot con asse 2 esce
# uno scalare che non capisco
sessione = tf.Session()
print(sessione.run(tensorMatrProd))
print(sessione.run(tensorProd))
print(sessione.run(tf.transpose(tensorProd)))
#tf.transpose #trasposta
#tf.reshape(rangetensor,(10,1)) #vettore trasposto
#tf.matrix_transpose #traposto di ultime due dimensioni un tensore di rango >=2
#tf.matrix_inverse #matrice inversa di quadrata, invertibile
tensoruni = tf.ones(10.0)
tensorzeri = tf.zeros(10.0)
tensorscala = tf.range(10.0)
colonne = tf.constant(10)
#prodotto scalare
tensorScalar = tf.tensordot(tensoruni,tensorscala, 1)
#trasposto
tensorTrasposto = tf.reshape(tensorscala,(10,1))
#outer: NB tensorFlow broadcasta automaticamente
tensorOuter = tensoruni*tensorTrasposto
sessione = tf.Session()
print(sessione.run(tensoruni), sessione.run(tensorzeri))
print(sessione.run(tf.zeros([colonne])))
print(sessione.run(tensorScalar))
print(sessione.run(tensorscala))
print(sessione.run(tensorTrasposto))
print(sessione.run(tensorOuter))
Explanation: Altre operazioni matematiche:
https://www.tensorflow.org/api_guides/python/math_ops#Arithmetic_Operators
https://www.tensorflow.org/api_guides/python/math_ops#Basic_Math_Functions
Operazioni matriciali: prodotto esterno, kronecker, righe x colonne, inversa, trasposta
End of explanation
array = tf.Variable(tf.range(10,20))
indici = tf.constant([1,3,5])
updati = tf.constant([100,90,4050])
slicearray = tf.gather(array,indici)
updarray = tf.scatter_update(array,indici,updati)
init = tf.global_variables_initializer()
sessione = tf.Session()
sessione.run(init)
print(sessione.run(array[0:4]))
print(sessione.run(slicearray))
print(sessione.run(array))
print(sessione.run(updarray))
# selezione nonzero elements
#vettore = tf.constant([1,0,0,2,0], dtype=tf.int64)
ravettore = tf.random_uniform((1,100000000),0,2,dtype = tf.int32)
ravettore = ravettore[0]
where = tf.not_equal(ravettore, 0)
indici = tf.where(where)
nonzeri = tf.gather(ravettore,indici)
#OPPURE
#sparso = tf.SparseTensor(indici, nonzeri, dense_shape=vettore.get_shape())
sessione = tf.Session(config=tf.ConfigProto(log_device_placement=True))
%time sessione.run(nonzeri)
#print(shape,sessione.run(shape))
#print(sessione.run(ravettore))
#%time print(sessione.run(indici))
#%time print(sessione.run(nonzeri))
#%time sessione.run(ravettore)
#%time sessione.run(indici)
#print(sessione.run(sparso))
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
nomi = get_available_gpus()
print(nomi)
# prova map
sessione = tf.Session()
moltiplicatore = tf.range(10)
addizionatore = tf.range(100,110)
def mappalo(stepIesimo):
uni = tf.range(10)
moltiplicato = tf.multiply(moltiplicatore[stepIesimo],uni)
addizionato = moltiplicato + addizionatore
return addizionato
image = tf.map_fn(mappalo, tf.range(0, 10), dtype=tf.int32)
print(sessione.run(image))
#prova map con prodotto scalare
import numpy
from scipy import sparse
from matplotlib import pyplot
import tensorflow as tf
from tensorflow.python.client import timeline
import time
nRows = 10
def mapfunc(ithStep):
matrix1 = tf.zeros([1000,1000], dtype = tf.float32)
matrix2 = tf.ones([1000,1000], dtype = tf.float32)
matrix1 = tf.add(matrix1,ithStep)
prodotto = tf.matmul(matrix1,matrix2)
return prodotto
sessione = tf.Session(config=tf.ConfigProto(log_device_placement=True))
imageMapped = tf.map_fn(mapfunc, tf.range(0,nRows), dtype = tf.float32)
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
start = time.time()
image = sessione.run(imageMapped, options=run_options, run_metadata=run_metadata)
stop = time.time()
print(stop-start)
# Create the Timeline object, and write it to a json
tl = timeline.Timeline(run_metadata.step_stats)
ctf = tl.generate_chrome_trace_format()
with open('timelineDB.json', 'w') as f:
f.write(ctf)
#prova prodotto scalare
import numpy
import tensorflow as tf
from tensorflow.python.client import timeline
matrix1 = tf.zeros([5000,5000], dtype = tf.int32)
matrix2 = tf.ones([5000,5000], dtype = tf.int32)
matrix1 = tf.add(matrix1,2)
product = tf.matmul(matrix1,matrix2)
session = tf.Session(config=tf.ConfigProto(log_device_placement=True))
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
image = session.run(product, options=run_options, run_metadata=run_metadata)
# Create the Timeline object, and write it to a json
tl = timeline.Timeline(run_metadata.step_stats)
ctf = tl.generate_chrome_trace_format()
with open('timelineDB.json', 'w') as f:
f.write(ctf)
#prova histogram fixed
import numpy
import tensorflow as tf
from tensorflow.python.client import timeline
matrix1 = tf.random_uniform((5000,5000),0,2,dtype = tf.int32)
matrix2 = tf.ones([5000,5000], dtype = tf.int32)
matrix1 = tf.add(matrix1,2)
product = tf.matmul(matrix1,matrix2)
session = tf.Session(config=tf.ConfigProto(log_device_placement=True))
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
image = session.run(product, options=run_options, run_metadata=run_metadata)
# Create the Timeline object, and write it to a json
tl = timeline.Timeline(run_metadata.step_stats)
ctf = tl.generate_chrome_trace_format()
with open('timelineDB.json', 'w') as f:
f.write(ctf)
Explanation: Tutte le operazioni matriciali:
https://www.tensorflow.org/api_guides/python/math_ops#Matrix_Math_Functions
https://www.tensorflow.org/api_docs/python/tf/tensordot (prodotto per contrazione di un indice)
Prove assegnazione e indexing
End of explanation
import numpy
import tensorflow as tf
sessione = tf.Session()
array = tf.range(0.0,100.0)
cosaImportante1 = tf.range(0.0,2.0)
cosaImportante2 = tf.constant([2.0])
tutto = tf.concat((cosaImportante1, cosaImportante2, array),0)
def funsione(i):
j = i+3
funsionalo = tutto[2] + tutto[j]
return funsionalo
mappa = tf.map_fn(funsione, tf.range(0,tf.size(tutto)-3), dtype=tf.float32)
print(sessione.run(tf.size(mappa)))
Explanation: Prova map con passaggio di più variabili
End of explanation
import numpy
import tensorflow as tf
sessione = tf.Session()
array = tf.range(0.0,8160000.0)
array = tf.reshape(array, (85,96000))
kernel = tf.constant([[-1.0,0.0,0.0,1.0]])
array = tf.reshape(array,(1,85,96000,1))
kernel = tf.reshape(kernel, (1,4,1,1))
somma = tf.nn.conv2d(input=array,filter=kernel,strides=[1,1,1,1],padding ='SAME')
somma = tf.reshape(somma, (85,96000))
#somma = tf.reshape(somma, (85,95997))
#print(sessione.run(kernel))
#print(sessione.run(array))
print(sessione.run(somma))
array = tf.range(0.0,8160000.0)
array = tf.reshape(array, (85,96000))
larghezza = tf.constant(3)
colonne = tf.size(array[0])
#houghInt = houghDiff[:,semiLarghezza*2:nColumns]-houghDiff[:,0:nColumns - semiLarghezza*2]
#houghInt = tf.concat([houghDiff[:,0:semiLarghezza*2],houghInt],1)
arrayInt = array[:,larghezza:colonne]-array[:,0:colonne-larghezza]
print(sessione.run(arrayInt))
print(sessione.run(tf.shape(arrayInt)))
enhancement = 10
kernel = tf.concat(([-1.0],tf.zeros(enhancement,dtype=tf.float32),[1.0]),0)
print(sessione.run(kernel))
Explanation: Prova convolution
End of explanation
import numpy
import tensorflow as tf
sess = tf.Session()
array = numpy.array([0.1, 0.2, 0.4, 0.8, 0.9, 1.1]).astype(numpy.float32)
print(array.tobytes())
print(numpy.fromstring(array.tobytes()))
tensoraw = tf.constant(array.tobytes())
print(sess.run(tensoraw))
print(sess.run(tf.decode_raw(tensoraw, tf.float32)))
rawArray = sess.run(tensoraw)
decodedArray = sess.run(tf.decode_raw(tensoraw, tf.float32))
print(numpy.fromstring(rawArray))
print(numpy.fromstring(decodedArray))
Explanation: ## Prove raw to float e viceversa
End of explanation
#formalizzato in maniera generale come fare prodotto vettoriale tra due vettori e prodotto esterno vettore colonna-matrice
matrice = tf.reshape(tf.range(0,50), (10,5))
vettore = tf.range(0,4)
vettore1 = tf.range(1,6)
vettore2 = tf.range(100,106)
shape = tf.shape(matrice)
matrice = tf.reshape(matrice, (1, tf.size(matrice)))
vettore = tf.reshape(vettore, (tf.size(vettore),1))
prodotto3d = tf.multiply(vettore, matrice)
prodotto3d = tf.reshape(prodotto3d, (tf.size(vettore), shape[1],shape[0]))
vettore2 = tf.reshape(vettore2, (tf.size(vettore2),1))
prodottoX = tf.multiply(vettore1,vettore2)
sessione = tf.Session()
print(sessione.run(prodottoX))
#print(sessione.run(prodotto3d))
#print(sessione.run(shape))
# alcune altre prove su somme
vettore = tf.range(0,4)
vettore2 = tf.range(10,14)
sommaVettori = tf.add(vettore,vettore2)
vettoreSomma = vettore +2
vettoreSomma2 = tf.add(vettore,2)
vettoreSomma3 = tf.Variable(vettore+2)
init = tf.global_variables_initializer()
sessione = tf.Session()
sessione.run(init)
print(vettore, vettoreSomma, vettoreSomma2, vettoreSomma3)
print(sessione.run((vettore, vettoreSomma,vettoreSomma2,vettoreSomma3)))
print(sessione.run(sommaVettori))
# prova stack
vettore = tf.range(0,4)
vettore2 = tf.range(10,14)
vettore = tf.reshape(vettore,(1,4))
vettore2 = tf.reshape(vettore2,(1,4))
staccato = tf.stack([vettore[0],vettore2[0]])
sessione = tf.Session()
print(sessione.run(staccato))
# prova somma elementi con stesse coordinate in matrice sparsa
indices = tf.constant([[1, 1], [1, 2], [1, 2], [1, 6]])
values = tf.constant([1, 2, 3, 4])
# Linearize the indices. If the dimensions of original array are
# [N_{k}, N_{k-1}, ... N_0], then simply matrix multiply the indices
# by [..., N_1 * N_0, N_0, 1]^T. For example, if the sparse tensor
# has dimensions [10, 6, 4, 5], then multiply by [120, 20, 5, 1]^T
# In your case, the dimensions are [10, 10], so multiply by [10, 1]^T
linearized = tf.matmul(indices, [[10], [1]])
# Get the unique indices, and their positions in the array
y, idx = tf.unique(tf.squeeze(linearized))
# Use the positions of the unique values as the segment ids to
# get the unique values
values = tf.segment_sum(values, idx)
# Go back to N-D indices
y = tf.expand_dims(y, 1)
righe = tf.cast(y/10,tf.int32)
colonne = y%10
indices = tf.concat([righe, colonne],1)
tf.InteractiveSession()
print(indices.eval())
print(values.eval())
print(linearized.eval())
print(sessione.run((righe,colonne)))
# qui provo fully vectorial
sessione = tf.Session()
matrix = tf.random_uniform((10,10), 0,2, dtype= tf.int32)
coordinates = tf.where(tf.not_equal(matrix,0))
x = coordinates[:,0]
x = tf.cast(x, tf.float32)
times = coordinates[:,1]
times = tf.cast(times, tf.float32)
xSize = tf.shape(x)[0]
weights = tf.random_uniform((1,xSize),0,1,dtype = tf.float32)
nStepsY=5.0
y = tf.range(1.0,nStepsY+1)
#y = tf.reshape(y,(tf.size(y),1))
nRows = 5
nColumns = 80
image = tf.zeros((nRows, nColumns))
y = tf.reshape(y, (tf.size(y),1))
print(y[0])
yTimed = tf.multiply(y,times)
appoggio = tf.ones([nRows])
appoggio = tf.reshape(appoggio, (tf.size(appoggio),1))
#print(sessione.run(tf.shape(appoggio)))
#print(sessione.run(tf.shape(x)))
x3d = tf.multiply(appoggio,x)
weights3d = tf.multiply(appoggio,weights)
positions = tf.round(x3d-yTimed)
positions = tf.add(positions,50)
positions = tf.cast(positions, dtype=tf.int64)
riappoggio = tf.ones([xSize], dtype = tf.int64)
y = tf.cast(y, tf.int64)
y3d = tf.multiply(y, riappoggio)
y3d = tf.reshape(y3d, (1,tf.size(y3d)))
weights3d = tf.reshape(weights3d, (1,tf.size(weights3d)))
positions = tf.reshape(positions, (1,tf.size(positions)))
righe = y3d[0]
colonne = positions[0]
pesi = weights3d[0]
#VALUTARE DI FARE PARALLEL STACK
coordinate = tf.stack([righe,colonne],1)
shape = [6,80]
matrice = tf.SparseTensor(coordinate, pesi, shape)
matrice = tf.sparse_reorder(matrice)
coordinate = tf.cast(matrice.indices, tf.int32)
linearized = tf.matmul(coordinate, [[100], [1]])
coo, idx = tf.unique(tf.squeeze(linearized))
values = tf.segment_sum(matrice.values, idx)
# Go back to N-D indices
coo = tf.expand_dims(coo, 1)
indices = tf.concat([tf.cast(coo/100,tf.int32), coo%100],1)
#print(sessione.run((indices)))
#matrice = tf.SparseTensor(indices, pesi, shape)
immagine = tf.sparse_to_dense(indices, shape, values)
#print(sessione.run((tf.shape(coordinate), tf.shape(pesi), tf.shape(shape))))
#print(sessione.run((tf.shape(x3d), tf.shape(y3d),tf.shape(positions))))
#print(sessione.run(indices))
plottala = sessione.run(immagine)
%matplotlib inline
a = pyplot.imshow(plottala, aspect = 10)
#pyplot.show()
# qui provo mappando
sessione = tf.Session()
matrix = tf.random_uniform((10,10), 0,2, dtype= tf.int32)
coordinates = tf.where(tf.not_equal(matrix,0))
x = coordinates[:,0]
x = tf.cast(x, tf.float32)
times = coordinates[:,1]
times = tf.cast(times, tf.float32)
xSize = tf.shape(x)[0]
weights = tf.random_uniform((1,xSize),0,1,dtype = tf.float32)
nStepsY=5.0
y = tf.range(1.0,nStepsY+1)
#y = tf.reshape(y,(tf.size(y),1))
nRows = 5
nColumns = 80
weights = tf.reshape(weights, (1,tf.size(weights)))
pesi = weights[0]
def funmap(stepIesimo):
yTimed = tf.multiply(y[stepIesimo],times)
positions = tf.round(x-yTimed)
positions = tf.add(positions,50)
positions = tf.cast(positions, dtype=tf.int64)
positions = tf.reshape(positions, (1,tf.size(positions)))
riga= tf.ones([tf.size(x)])
riga = tf.reshape(riga, (1,tf.size(riga)))
righe = riga[0]
colonne = positions[0]
coordinate = tf.stack([tf.cast(righe,dtype=tf.int64),tf.cast(colonne,dtype=tf.int64)],1)
shape = [1,80]
matrice = tf.SparseTensor(coordinate, pesi, shape)
#matrice = tf.sparse_reorder(matrice)
coordinate = tf.cast(matrice.indices, tf.int32)
coo, idx = tf.unique(coordinate[:,1])
values = tf.segment_sum(matrice.values, idx)
immagine = tf.sparse_to_dense(coo, [nColumns], values)
#immagine = tf.cast(immagine, dtype=tf.float32)
return immagine
hough = tf.map_fn(funmap, tf.range(0,5),dtype=tf.float32)
plottala = sessione.run(hough)
print(numpy.size(plottala))
#imm = [plottala,plottala]
%matplotlib inline
a = pyplot.imshow(plottala, aspect = 10)
pyplot.show()
# qui provo con tf map o scan (con bincount)
sessione = tf.Session()
matrix = tf.random_uniform((10,10), 0,2, dtype= tf.int32)
coordinates = tf.where(tf.not_equal(matrix,0))
x = coordinates[:,0]
x = tf.cast(x, tf.float32)
times = coordinates[:,1]
times = tf.cast(times, tf.float32)
xSize = tf.shape(x)[0]
weights = tf.random_uniform((1,xSize),0,1,dtype = tf.float32)
nStepsY=5.0
y = tf.range(1.0,nStepsY+1)
#y = tf.reshape(y,(tf.size(y),1))
nRows = 5
nColumns = 80
y = tf.reshape(y, (tf.size(y),1))
def mapIt(ithStep):
image = tf.zeros(nColumns)
yTimed = y[ithStep]*times
positions = tf.round(x-yTimed+50, dtype=tf.int32)
values = tf.bincount(positions,weights)
values = values[numpy.nonzero(values)]
positions = numpy.unique(positions)
image[positions] = values
return image
%time imageMapped = list(map(mapIt, range(nStepsY)))
imageMapped = numpy.array(imageMapped)
%matplotlib inline
a = pyplot.imshow(imageMapped, aspect = 10)
import scipy.io
import numpy
percorsoFile = "/home/protoss/matlabbo.mat"
#percorsoFile = "matlabbo/miaimgSenzacumsum.mat"
scalareMatlabbo = scipy.io.loadmat(percorsoFile)['scalarevero']
scalareMatlabbo
ncolumn = tf.constant(10)
matrice = tf.zeros((0,ncolumn))
print(sessione.run(matrice))
import tensorflow as tf
import numpy
sessione = tf.Session()
matricia = tf.random_uniform((9,828360),0,1,dtype = tf.float32)
matricia = sessione.run(matricia)
%time matricia = numpy.transpose(matricia)
print(matricia.shape)
Explanation: Documentazione utile
Lezioni su TF
Parte su placeholders forse utile in hough
Doc ufficiale
Cose utili in doc uff
Guide ufficiali
Tensori costanti (generalizzazione di numpy zeros/ones))
Molto interessante, generalizzazione di matrice trasposta
Tensori sparsi
Fourier
Broadcasting IMPORTANTE
cose utili o importanti
https://stackoverflow.com/questions/39219414/in-tensorflow-how-can-i-get-nonzero-values-and-their-indices-from-a-tensor-with
https://www.google.it/search?client=ubuntu&channel=fs&q=tf+scatter+update&ie=utf-8&oe=utf-8&gfe_rd=cr&ei=JkYvWduzO-nv8AfympmoAQ
https://stackoverflow.com/questions/34685947/adjust-single-value-within-tensor-tensorflow
https://www.tensorflow.org/versions/r0.11/api_docs/python/state_ops/sparse_variable_updates
https://www.tensorflow.org/api_docs/python/tf/scatter_add
https://www.tensorflow.org/api_docs/python/tf/scatter_update
https://stackoverflow.com/questions/34935464/update-a-subset-of-weights-in-tensorflow
https://stackoverflow.com/questions/39859516/how-to-update-a-subset-of-2d-tensor-in-tensorflow
End of explanation |
1,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing Los Angeles County's precinct-level results from the 2014 general election.
Step1: Load the PDF in PDFPlumber
Step2: Let's look at the first 15 characters on the first page of the PDF
Step3: Extract the precint ID
The corresponding characters are about 37–44 pixels from the top, and on the left half of the page.
Step4: We can do the same for the number of ballots cast
Step5: ... and for the number of registered voters in each precinct
Step6: Getting the results for each race is a bit trickier
The data representation isn't truly tabular, but it's structured enough to allow us to create tabular data from it. This function divides the first column of the result-listings into columns (explicitly defined, in pixels) and rows (separated by gutters of whitespace).
Step7: Let's restructure that slightly, so that each row contains information about the relevant race
Step8: From there, we can start to do some calculations | Python Code:
import pandas as pd
import pdfplumber
import re
Explanation: Parsing Los Angeles County's precinct-level results from the 2014 general election.
End of explanation
pdf = pdfplumber.open("2014-bulletin-first-10-pages.pdf")
print(len(pdf.pages))
Explanation: Load the PDF in PDFPlumber:
End of explanation
first_page = pdf.pages[0]
chars = pd.DataFrame(first_page.chars)
chars.head(15)
Explanation: Let's look at the first 15 characters on the first page of the PDF:
End of explanation
pd.DataFrame(first_page.crop((0, 37, first_page.width / 2, 44 )).chars)
def get_precinct_id(page):
cropped = page.crop((0, 37, page.width / 2, 44 ))
text = "".join((c["text"] for c in cropped.chars))
trimmed = re.sub(r" +", "|", text)
return trimmed
for page in pdf.pages:
print(get_precinct_id(page))
Explanation: Extract the precint ID
The corresponding characters are about 37–44 pixels from the top, and on the left half of the page.
End of explanation
def get_ballots_cast(page):
cropped = page.crop((0, 48, page.width / 3, 60))
text = "".join((c["text"] for c in cropped.chars))
count = int(text.split(" ")[0])
return count
for page in pdf.pages:
print(get_ballots_cast(page))
Explanation: We can do the same for the number of ballots cast
End of explanation
def get_registered_voters(page):
cropped = page.crop((0, 62, page.width / 3, 74))
text = "".join((c["text"] for c in cropped.chars))
count = int(text.split(" ")[0])
return count
for page in pdf.pages:
print(get_registered_voters(page))
Explanation: ... and for the number of registered voters in each precinct
End of explanation
def get_results_rows(page):
first_col = page.crop((0, 77, 212, page.height))
table = first_col.extract_table(
v=(0, 158, 180, 212),
h="gutters",
x_tolerance=1)
return table
get_results_rows(first_page)
Explanation: Getting the results for each race is a bit trickier
The data representation isn't truly tabular, but it's structured enough to allow us to create tabular data from it. This function divides the first column of the result-listings into columns (explicitly defined, in pixels) and rows (separated by gutters of whitespace).
End of explanation
def get_results_table(page):
rows = get_results_rows(page)
results = []
race = None
for row in rows:
name, affil, votes = row
if name == "VOTER NOMINATED": continue
if votes == None:
race = name
else:
results.append((race, name, affil, int(votes)))
results_df = pd.DataFrame(results, columns=[ "race", "name", "party", "votes" ])
return results_df
get_results_table(first_page)
Explanation: Let's restructure that slightly, so that each row contains information about the relevant race:
End of explanation
def get_jerry_brown_pct(page):
table = get_results_table(page)
brown_votes = table[table["name"] == "EDMUND G BROWN"]["votes"].iloc[0]
kashkari_votes = table[table["name"] == "NEEL KASHKARI"]["votes"].iloc[0]
brown_prop = float(brown_votes) / (kashkari_votes + brown_votes)
return (100 * brown_prop).round(1)
for page in pdf.pages:
precinct_id = get_precinct_id(page)
brown = get_jerry_brown_pct(page)
print("{0}: {1}%".format(precinct_id, brown))
Explanation: From there, we can start to do some calculations:
End of explanation |
1,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Parse Table for a Shift-Reduce Parser
This notebook contains the parse table that is needed for a shift reduce parser that parses the following grammar
Step1: Next, we define the action table as a dictionary.
Step2: Below is the definition of the goto table.
Step3: Finally, we define the state table. This is table is only used for pretty printing. | Python Code:
r1 = ('E', ('E', '+', 'P'))
r2 = ('E', ('E', '-', 'P'))
r3 = ('E', ('P'))
r4 = ('P', ('P', '*', 'F'))
r5 = ('P', ('P', '/', 'F'))
r6 = ('P', ('F'))
r7 = ('F', ('(', 'E', ')'))
r8 = ('F', ('NUMBER',))
Explanation: A Parse Table for a Shift-Reduce Parser
This notebook contains the parse table that is needed for a shift reduce parser that parses the following grammar:
$$
\begin{eqnarray}
\mathrm{expr} & \rightarrow & \mathrm{expr}\;\;\texttt{'+'}\;\;\mathrm{product} \
& \mid & \mathrm{expr}\;\;\texttt{'-'}\;\;\mathrm{product} \
& \mid & \mathrm{product} \[0.2cm]
\mathrm{product} & \rightarrow & \mathrm{product}\;\;\texttt{''}\;\;\mathrm{factor} \
& \mid & \mathrm{product}\;\;\texttt{'/'}\;\;\mathrm{factor} \
& \mid & \mathrm{factor} \[0.2cm]
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \
& \mid & \texttt{NUMBER}
\end{eqnarray*}
$$
Below, we define the grammar rules.
End of explanation
actionTable = {}
actionTable['s0', '(' ] = ('shift', 's5')
actionTable['s0', 'NUMBER'] = ('shift', 's2')
actionTable['s1', 'EOF'] = ('reduce', r6)
actionTable['s1', '+' ] = ('reduce', r6)
actionTable['s1', '-' ] = ('reduce', r6)
actionTable['s1', '*' ] = ('reduce', r6)
actionTable['s1', '/' ] = ('reduce', r6)
actionTable['s1', ')' ] = ('reduce', r6)
actionTable['s2', 'EOF'] = ('reduce', r8)
actionTable['s2', '+' ] = ('reduce', r8)
actionTable['s2', '-' ] = ('reduce', r8)
actionTable['s2', '*' ] = ('reduce', r8)
actionTable['s2', '/' ] = ('reduce', r8)
actionTable['s2', ')' ] = ('reduce', r8)
actionTable['s3', 'EOF'] = ('reduce', r3)
actionTable['s3', '+' ] = ('reduce', r3)
actionTable['s3', '-' ] = ('reduce', r3)
actionTable['s3', '*' ] = ('shift', 's12')
actionTable['s3', '/' ] = ('shift', 's11')
actionTable['s3', ')' ] = ('reduce', r3)
actionTable['s4', 'EOF'] = 'accept'
actionTable['s4', '+' ] = ('shift', 's8')
actionTable['s4', '-' ] = ('shift', 's9')
actionTable['s5', '(' ] = ('shift', 's5')
actionTable['s5', 'NUMBER'] = ('shift', 's2')
actionTable['s6', '+' ] = ('shift', 's8')
actionTable['s6', '-' ] = ('shift', 's9')
actionTable['s6', ')' ] = ('shift', 's7')
actionTable['s7', 'EOF'] = ('reduce', r7)
actionTable['s7', '+' ] = ('reduce', r7)
actionTable['s7', '-' ] = ('reduce', r7)
actionTable['s7', '*' ] = ('reduce', r7)
actionTable['s7', '/' ] = ('reduce', r7)
actionTable['s7', ')' ] = ('reduce', r7)
actionTable['s8', '(' ] = ('shift', 's5')
actionTable['s8', 'NUMBER'] = ('shift', 's2')
actionTable['s9', '(' ] = ('shift', 's5')
actionTable['s9', 'NUMBER'] = ('shift', 's2')
actionTable['s10', 'EOF'] = ('reduce', r2)
actionTable['s10', '+' ] = ('reduce', r2)
actionTable['s10', '-' ] = ('reduce', r2)
actionTable['s10', '*' ] = ('shift', 's12')
actionTable['s10', '/' ] = ('shift', 's11')
actionTable['s10', ')' ] = ('reduce', r2)
actionTable['s11', '(' ] = ('shift', 's5')
actionTable['s11', 'NUMBER'] = ('shift', 's2')
actionTable['s12', '(' ] = ('shift', 's5')
actionTable['s12', 'NUMBER'] = ('shift', 's2')
actionTable['s13', 'EOF'] = ('reduce', r4)
actionTable['s13', '+' ] = ('reduce', r4)
actionTable['s13', '-' ] = ('reduce', r4)
actionTable['s13', '*' ] = ('reduce', r4)
actionTable['s13', '/' ] = ('reduce', r4)
actionTable['s13', ')' ] = ('reduce', r4)
actionTable['s14', 'EOF'] = ('reduce', r5)
actionTable['s14', '+' ] = ('reduce', r5)
actionTable['s14', '-' ] = ('reduce', r5)
actionTable['s14', '*' ] = ('reduce', r5)
actionTable['s14', '/' ] = ('reduce', r5)
actionTable['s14', ')' ] = ('reduce', r5)
actionTable['s15', 'EOF'] = ('reduce', r1)
actionTable['s15', '+' ] = ('reduce', r1)
actionTable['s15', '-' ] = ('reduce', r1)
actionTable['s15', '*' ] = ('shift', 's12')
actionTable['s15', '/' ] = ('shift', 's11')
actionTable['s15', ')' ] = ('reduce', r1)
Explanation: Next, we define the action table as a dictionary.
End of explanation
gotoTable = {}
gotoTable['s0', 'E'] = 's4'
gotoTable['s0', 'P'] = 's3'
gotoTable['s0', 'F'] = 's1'
gotoTable['s5', 'E'] = 's6'
gotoTable['s5', 'P'] = 's3'
gotoTable['s5', 'F'] = 's1'
gotoTable['s8', 'P'] = 's15'
gotoTable['s8', 'F'] = 's1'
gotoTable['s9', 'P'] = 's10'
gotoTable['s9', 'F'] = 's1'
gotoTable['s11', 'F'] = 's14'
gotoTable['s12', 'F'] = 's13'
Explanation: Below is the definition of the goto table.
End of explanation
stateTable = {}
stateTable['s0'] = { 'S -> • E',
'E -> • E "+" P', 'E -> • E "-" P', 'E -> • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s1'] = { 'P -> F •' }
stateTable['s2'] = { 'F -> NUMBER •' }
stateTable['s3'] = { 'P -> P • "*" F', 'P -> P • "/" F', 'E -> P •' }
stateTable['s4'] = { 'S -> E •', 'E -> E • "+" P', 'E -> E • "-" P' }
stateTable['s5'] = { 'F -> "(" • E ")"',
'E -> • E "+" P', 'E -> • E "-" P', 'E -> • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s6'] = { 'F -> "(" E • ")"', 'E -> E • "+" P', 'E -> E • "-" P' }
stateTable['s7'] = { 'F -> "(" E ")" •' }
stateTable['s8'] = { 'E -> E "+" • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s9' ] = { 'E -> E "-" • P',
'P -> • P "*" F', 'P -> • P "/" F', 'P -> • F',
'F -> • "(" E ")"', 'F -> • NUMBER'
}
stateTable['s10'] = { 'E -> E "-" P •', 'P -> P • "*" F', 'P -> P • "/" F' }
stateTable['s11'] = { 'P -> P "/" • F', 'F -> • "(" E ")"', 'F -> • NUMBER' }
stateTable['s12'] = { 'P -> P "*" • F', 'F -> • "(" E ")"', 'F -> • NUMBER' }
stateTable['s13'] = { 'P -> P "*" F •' }
stateTable['s14'] = { 'P -> P "/" F •' }
stateTable['s15'] = { 'E -> E "+" P •', 'P -> P • "*" F', 'P -> P • "/" F' }
Explanation: Finally, we define the state table. This is table is only used for pretty printing.
End of explanation |
1,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wrangling OpenStreetMap Data with MongoDB
by Duc Vu in fulfillment of Udacity’s Data Analyst Nanodegree, Project 3
OpenStreetMap is an open project that lets eveyone use and create a free editable map of the world.
1. Chosen Map Area
In this project, I choose to analyze data from Boston, Massachusetts want to show you to fix one type of error, that is the address of the street. And not only that, I also will show you how to put the data that has been audited into MongoDB instance. We also use MongoDB's Agregation Framework to get overview and analysis of the data
Step1: The dataset is here https
Step2: I used the Overpass API to download the OpenStreetMap XML for the corresponding bounding box
Step3: Before processing the data and add it into MongoDB, I should check the "k" value for each 'tag' and see if they can be valid keys in MongoDB, as well as see if there are any other potential problems.
I have built 3 regular expressions to check for certain patterns in the tags to change the data model
and expand the "addr
Step4: Now I will redefine process_map to build a set of unique userid's found within the XML. I will then output the length of this set, representing the number of unique users making edits in the chosen map area.
Step5: 3. Problems Encountered in the Map
3.1 Street name
The majority of this project will be devoted to auditing and cleaning street names in the OSM XML file by changing the variable 'mapping' to reflect the changes needed to fix the unexpected or abbreviated street types to the appropriate ones in the expected list. I will find these abbreviations and replace them with their full text form.
Step6: Let's define a function that not only audits tag elements where k="addr
Step7: The function is_street_name determines if an element contains an attribute k="addr
Step8: Now print the output of audit
Step9: From the results of the audit, I will create a dictionary to map abbreviations to their full, clean representations.
Step10: The first result of audit gives me a list of some abbreviated street types (as well as unexpected clean street types, cardinal directions, and highway numbers). So I need to build an update_name function to replace these abbreviated street types.
Step11: Let's see how this update_name works.
Step12: It seems that all the abbreviated street types updated as expected.
3.2 Cardinal direction
But I can see there is still an issue
Step13: Here is the result of audit the cardinal directions with this new regex 'cardinal_dir_re'
Step14: I will create a dictionary to map abbreviations (N, S, E and W) to their full representations of cardinal directions.
Step15: Look like all expected cardinal directions have been replaced.
Step16: 3.3 Postal codes
Let's exam the postal codes, we can see that there are still some invalid postal codes, so we also need to clean postal codes. I will use regular expressions to identify invalid postal codes and return standardized results. For example, if postal codes like 'MA 02131-4931' and '02131-2460' should be mapped to '02131'.
Step17: 3.4 The total number of nodes and ways
Then I will count the total number of nodes and ways that contain a tag child with k="addr
Step18: 4. Preparing for MongoDB
Before importing the XML data into MongoDB, I have to transform the shape of data into json documents structured (a list of dictionaries) like this
Here are the rules
Step19: It's time to parse the XML, shape the elements, and write to a json file
Step20: 5. Data Overview
Check the size of XML and JSON files
Step21: Execute mongod to run MongoDB
Use the subprocess module to run shell commands.
Step22: http
Step23: When we have to import a large amounts of data, mongoimport is recommended.
First build a mongoimport command, then use subprocess.call to execute
Step24: Get the collection from the database
Step25: Number of Documents
Step26: Number of Unique Users
Step27: Number of Nodes and Ways
Step28: Number of Nodes
Step29: Number of Ways
Step30: Top Contributing User
Step31: Number of users having only 1 post
Step32: Number of Documents Containing a Street Address
Step33: Zip codes
Step34: Top 5 Most Common Cities
Step35: Top 10 Amenities
Step36: Most common building types
Step37: Top Religions with Denominations
Step38: Top 10 Leisures
Step39: Top 15 Universities
Step40: Top 10 Schools
Step41: Top Prisons
Step42: Top 10 Hospitals
Step43: Most popular cuisines in fast foods
fast_food = boston_db.aggregate([
{"$match"
Step44: Most popular banks
Step45: Most popular restaurants
Step46: 6. Additional Ideas
Analyzing the data of Boston I found out that not all nodes or ways include this information since its geographical position is represented within regions of a city. What could be done in this case, is check if each node or way belongs to a city based on the latitude and longitude and ensure that the property "address.city" is properly informed. By doing so, we could get statistics related to cities in a much more reliable way. In fact, I think this is the biggest benefit to anticipate problems and implement improvements to the data you want to analyze. Real world data are very susceptible to being incomplete, noisy and inconsistent which means that if you have low-quality of data the results of their analysis will also be of poor quality.
I think that extending this open source project to include data such as user reviews of establishments, subjective areas of what bound a good and bad neighborhood, housing price data, school reviews, walkability/bikeability, quality of mass transit, and on would form a solid foundation of robust recommender systems. These recommender systems could aid users in anything from finding a new home or apartment to helping a user decide where to spend a weekend afternoon.
Another alternative to help in the absence of information in the region would be the use of gamification or crowdsource information to make more people help in the map contribution. Something like the mobile apps similar to Waze and Minutely have already done to make the users responsible for improving the app and social network around the app.
A different application of this project is that it can be helpful on the problem of how the city boundaries well-defined. The transportation networks (street networks), the built environment can be good indicators of metropolitan area and combining an elementary clustering technique, we consider two street intersections to belong to the same cluster if they have a distance below a given distance threshold (in metres). The geospatial information gives us a good definition of city boundaries through spatial urban networks.
An interesting fact that we can use the geospatial coordinates information to find out country/city name (search OSM data by name and address and to generate synthetic addresses of OSM points). This problem is called reverse geocoding which maps geospatial coordinates to location name. And the <a href="http
Step47: However, potential problems associated with reverse geocoding is that it may give us weird results near the poles and the international date line or for cities within cities, for example certain locations in Rome may return "Vatican City" - depending on the lat/lon specified in the database for each
For example
Step48: Despite the many issues with the reverse coding, I think another benefits of this project that it can be applied in disease mapping which facilitates us use the longitudes and latitudes information to find the plaintext addresses of patients for identifying patterns, correlates, and predictors of disease in academia, government and private sector with the widespread availability of geographic information.
7. Conclusion
This review of the data is cursory, but it seems that the Boston area is incomplete, though I believe it has been well cleaned and represented after this project.
8. References
<a href="http | Python Code:
from IPython.display import HTML
HTML('<iframe width="425" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" \
src="http://www.openstreetmap.org/export/embed.html?bbox=-71.442,42.1858,-70.6984,42.4918&layer=mapnik"></iframe><br/>')
Explanation: Wrangling OpenStreetMap Data with MongoDB
by Duc Vu in fulfillment of Udacity’s Data Analyst Nanodegree, Project 3
OpenStreetMap is an open project that lets eveyone use and create a free editable map of the world.
1. Chosen Map Area
In this project, I choose to analyze data from Boston, Massachusetts want to show you to fix one type of error, that is the address of the street. And not only that, I also will show you how to put the data that has been audited into MongoDB instance. We also use MongoDB's Agregation Framework to get overview and analysis of the data
End of explanation
filename = 'boston_massachusetts.osm'
Explanation: The dataset is here https://s3.amazonaws.com/metro-extracts.mapzen.com/boston_massachusetts.osm.bz2
End of explanation
import xml.etree.cElementTree as ET
import pprint
def count_tags(filename):
'''
this function will return a dictionary with the tag name as the key
and number of times this tag can be encountered in the map as value.
'''
tags = {}
for event, elem in ET.iterparse(filename):
if elem.tag in tags:
tags[elem.tag] +=1
else:
tags[elem.tag]= 1
return tags
tags = count_tags(filename)
pprint.pprint(tags)
Explanation: I used the Overpass API to download the OpenStreetMap XML for the corresponding bounding box:
2. Auditing the Data
In this project, I will parse through the downloaded OSM XML file with ElementTree and find the number of each type of element since the XML file are too large to work with in memory.
End of explanation
import re
lower = re.compile(r'^([a-z]|_)*$')
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
def key_type(element, keys):
'''
this function counts number of times the unusual tag element can be encountered in the map.
Args:
element(string): tag element in the map.
keys(int): number of that encountered tag in the map
'''
if element.tag == "tag":
if lower.search(element.attrib['k']):
keys["lower"] += 1
elif lower_colon.search(element.attrib['k']):
keys["lower_colon"] += 1
elif problemchars.search(element.attrib['k']):
keys["problemchars"] +=1
else:
keys["other"] +=1
return keys
def process_map(filename):
'''
this function will return a dictionary with the unexpexted tag element as the key
and number of times this string can be encountered in the map as value.
Args:
filename(osm): openstreetmap file.
'''
keys = {"lower": 0, "lower_colon": 0, "problemchars": 0, "other": 0}
for _, element in ET.iterparse(filename):
keys = key_type(element, keys)
return keys
'''
#Below unit testing runs process_map with file example.osm
def test():
keys = process_map('example.osm')
pprint.pprint(keys)
assert keys == {'lower': 5, 'lower_colon': 0, 'other': 1, 'problemchars': 1}
if __name__ == "__main__":
test()
'''
keys = process_map(filename)
pprint.pprint(keys)
Explanation: Before processing the data and add it into MongoDB, I should check the "k" value for each 'tag' and see if they can be valid keys in MongoDB, as well as see if there are any other potential problems.
I have built 3 regular expressions to check for certain patterns in the tags to change the data model
and expand the "addr:street" type of keys to a dictionary like this:
Here are three regular expressions: lower, lower_colon, and problemchars.
lower: matches strings containing lower case characters
lower_colon: matches strings containing lower case characters and a single colon within the string
problemchars: matches characters that cannot be used within keys in MongoDB
example: {"address": {"street": "Some value"}}
So, we have to see if we have such tags, and if we have any tags with problematic characters.
Please complete the function 'key_type'.
End of explanation
def process_map(filename):
'''
This function will return a set of unique user IDs ("uid")
making edits in the chosen map area (i.e Boston area).
Args:
filename(osm): openstreetmap file.
'''
users = set()
for _, element in ET.iterparse(filename):
#print element.attrib
try:
users.add(element.attrib['uid'])
except KeyError:
continue
'''
if "uid" in element.attrib:
users.add(element.attrib['uid'])
'''
return users
'''
#Below unit testing runs process_map with file example.osm
def test():
users = process_map('example.osm')
pprint.pprint(users)
assert len(users) == 6
if __name__ == "__main__":
test()
'''
users = process_map(filename)
#pprint.pprint(users)
print len(users)
Explanation: Now I will redefine process_map to build a set of unique userid's found within the XML. I will then output the length of this set, representing the number of unique users making edits in the chosen map area.
End of explanation
from collections import defaultdict
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
expected = ["Street", "Avenue", "Boulevard", "Drive", "Court", "Place", "Square", "Lane", "Road",
"Trail", "Parkway", "Commons"]
def audit_street_type(street_types, street_name, rex):
'''
This function will take in the dictionary of street types, a string of street name to audit,
a regex to match against that string, and the list of expected street types.
Args:
street_types(dictionary): dictionary of street types.
street_name(string): a string of street name to audit.
rex(regex): a compiled regular expression to match against the street_name.
'''
#m = street_type_re.search(street_name)
m = rex.search(street_name)
#print m
#print m.group()
if m:
street_type = m.group()
if street_type not in expected:
street_types[street_type].add(street_name)
Explanation: 3. Problems Encountered in the Map
3.1 Street name
The majority of this project will be devoted to auditing and cleaning street names in the OSM XML file by changing the variable 'mapping' to reflect the changes needed to fix the unexpected or abbreviated street types to the appropriate ones in the expected list. I will find these abbreviations and replace them with their full text form.
End of explanation
def audit(osmfile,rex):
'''
This function changes the variable 'mapping' to reflect the changes needed to fix
the unexpected street types to the appropriate ones in the expected list.
Args:
filename(osm): openstreetmap file.
rex(regex): a compiled regular expression to match against the street_name.
'''
osm_file = open(osmfile, "r")
street_types = defaultdict(set)
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_name(tag):
audit_street_type(street_types, tag.attrib['v'],rex)
return street_types
Explanation: Let's define a function that not only audits tag elements where k="addr:street", but whichever tag elements match the is_street_name function. The audit function also takes in a regex and the list of expected matches
End of explanation
def is_street_name(elem):
return (elem.attrib['k'] == "addr:street")
Explanation: The function is_street_name determines if an element contains an attribute k="addr:street". I will use is_street_name as the tag filter when I call the audit function to audit street names.
End of explanation
st_types = audit(filename, rex = street_type_re)
pprint.pprint(dict(st_types))
Explanation: Now print the output of audit
End of explanation
# UPDATE THIS VARIABLE
mapping = { "ave" : "Avenue",
"Ave" : "Avenue",
"Ave.": "Avenue",
"Ct" : "Court",
"HIghway": "Highway",
"Hwy": "Highway",
"LEVEL": "Level",
"Pkwy": "Parkway",
"Pl": "Place",
"rd." : "Road",
"Rd" : "Road",
"Rd." : "Road",
"Sq." : "Square",
"st": "Street",
"St": "Street",
"St.": "Street",
"St,": "Street",
"ST": "Street",
"Street." : "Street",
}
Explanation: From the results of the audit, I will create a dictionary to map abbreviations to their full, clean representations.
End of explanation
def update_name(name, mapping,rex):
'''
This function takes a string with street name as an argument
and replace these abbreviated street types with the fixed name.
Args:
name(string): street name to update.
mapping(dictionary): a mapping dictionary.
rex(regex): a compiled regular expression to match against the street_name.
'''
#m = street_type_re.search(name)
m = rex.search(name)
if m:
street_type = m.group()
new_street_type = mapping[street_type]
name = re.sub(rex, new_street_type, name) # re.sub(old_pattern, new_pattern, file)
#name = street_type_re.sub(new_street_type, name)
return name
Explanation: The first result of audit gives me a list of some abbreviated street types (as well as unexpected clean street types, cardinal directions, and highway numbers). So I need to build an update_name function to replace these abbreviated street types.
End of explanation
for st_type, ways in st_types.iteritems():
if st_type in mapping:
for name in ways:
better_name = update_name(name, mapping, rex = street_type_re)
print name, "=>", better_name
Explanation: Let's see how this update_name works.
End of explanation
cardinal_dir_re = re.compile(r'^[NSEW]\b\.?', re.IGNORECASE)
Explanation: It seems that all the abbreviated street types were updated as expected.
3.2 Cardinal direction
There is still an issue, however: cardinal directions (North, South, East, and West) appear to be universally abbreviated. Therefore, I will traverse the cardinal_directions mapping and apply updates for both street type and cardinal direction.
End of explanation
dir_st_types = audit(filename, rex = cardinal_dir_re)
pprint.pprint(dict(dir_st_types))
Explanation: Here is the result of auditing the cardinal directions with the new regex 'cardinal_dir_re'.
End of explanation
cardinal_directions_mapping = \
{
"E" : "East",
"N" : "North",
"S" : "South",
"W" : "West"
}
Explanation: I will create a dictionary to map abbreviations (N, S, E and W) to their full representations of cardinal directions.
End of explanation
for st_type, ways in dir_st_types.iteritems():
if st_type in cardinal_directions_mapping:
for name in ways:
better_name = update_name(name, cardinal_directions_mapping, rex = cardinal_dir_re)
print name, "=>", better_name
Explanation: It looks like all expected cardinal directions have been replaced.
End of explanation
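As an illustrative aside (not part of the original notebook), both fixes can be chained on a single name; the helper below is hypothetical and simply reuses update_name with the two mappings defined above.
def clean_street_name(name):
    # Expand an abbreviated street type first, then an abbreviated cardinal direction.
    m = street_type_re.search(name)
    if m and m.group() in mapping:
        name = update_name(name, mapping, rex=street_type_re)
    m = cardinal_dir_re.search(name)
    if m and m.group() in cardinal_directions_mapping:
        name = update_name(name, cardinal_directions_mapping, rex=cardinal_dir_re)
    return name

# clean_street_name("N Beacon St") -> "North Beacon Street"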
badZipCode = ["MA", "Mass Ave"]
# 5 digits at the end of the string ($), optionally followed by a dash and 4 more digits;
# group(1) captures the 5-digit part and the (-\d{4})? clause makes the extension optional.
zip_code_re = re.compile(r"(\d{5})(-\d{4})?$")
# find the zipcodes
def get_postcode(element):
if (element.attrib['k'] == "addr:postcode"):
postcode = element.attrib['v']
return postcode
# update zipcodes
def update_postal(postcode, rex):
'''
This function takes a string with zip code as an argument
and replace these wrong zip code with the fixed zip code.
Args:
postcode(string): zip code to update.
rex(regex): a compiled regular expression to match against the zip code.
'''
if postcode is not None:
zip_code = re.search(rex,postcode)
if zip_code:
postcode = zip_code.group(1)
return postcode
def audit(osmfile):
'''
This function return a dictionary with the key is the zip code
and the value is the number of that zip code in osm file.
Args:
filename(osm): openstreetmap file.
'''
osm_file = open(osmfile, "r")
data_dict = defaultdict(int)
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if get_postcode(tag):
postcode = get_postcode(tag)
data_dict[postcode] += 1
return data_dict
zip_code_types = audit(filename)
pprint.pprint(dict(zip_code_types))
for raw_zip_code in zip_code_types:
if raw_zip_code not in badZipCode:
better_zip_code = update_postal(raw_zip_code, rex = zip_code_re)
print raw_zip_code, "=>", better_zip_code
Explanation: 3.3 Postal codes
Let's examine the postal codes: there are still some invalid ones, so they also need to be cleaned. I will use regular expressions to identify invalid postal codes and return standardized results. For example, postal codes like 'MA 02131-4931' and '02131-2460' should be mapped to '02131'.
End of explanation
osm_file = open(filename, "r")
address_count = 0
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_name(tag):
address_count += 1
address_count
Explanation: 3.4 The total number of nodes and ways
Then I will count the total number of nodes and ways that contain a tag child with k="addr:street"
End of explanation
CREATED = [ "version", "changeset", "timestamp", "user", "uid"]
def shape_element(element):
'''
This function will parse the map file and return a dictionary,
containing the shaped data for that element.
Args:
element(string): element in the map.
'''
node = {}
# create an address dictionary
address = {}
if element.tag == "node" or element.tag == "way" :
# YOUR CODE HERE
node["type"] = element.tag
#for key in element.attrib.keys()
for key in element.attrib:
#print key
if key in CREATED:
if "created" not in node:
node["created"] = {}
node["created"][key] = element.attrib[key]
elif key in ["lat","lon"]:
if "pos" not in node:
node["pos"] = [None, None]
if key == "lat":
node["pos"][0] = float(element.attrib[key])
elif key == "lon":
node["pos"][1] = float(element.attrib[key])
else:
node[key] = element.attrib[key]
for tag in element.iter("tag"):
tag_key = tag.attrib["k"] # key
tag_value = tag.attrib["v"] # value
if not problemchars.match(tag_key):
if tag_key.startswith("addr:"):# Single colon beginning with addr
if "address" not in node:
node["address"] = {}
sub_addr = tag_key[len("addr:"):]
if not lower_colon.match(sub_addr): # Tags with no colon
address[sub_addr] = tag_value
node["address"] = address
#node["address"][sub_addr] = tag_value
elif lower_colon.match(tag_key): # Single colon not beginnning with "addr:"
node[tag_key] = tag_value
else:
node[tag_key] = tag_value # Tags with no colon, not beginnning with "addr:"
for nd in element.iter("nd"):
if "node_refs" not in node:
node["node_refs"] = []
node["node_refs"].append(nd.attrib["ref"])
return node
else:
return None
Explanation: 4. Preparing for MongoDB
Before importing the XML data into MongoDB, I have to transform the shape of the data into JSON documents (a list of dictionaries) structured like the reconstructed example shown after this list.
Here are the rules:
- process only 2 types of top level tags: "node" and "way"
- all attributes of "node" and "way" should be turned into regular key/value pairs, except:
  - attributes in the CREATED array should be added under a key "created"
  - attributes for latitude and longitude should be added to a "pos" array, for use in geospatial indexing. Make sure the values inside the "pos" array are floats and not strings.
- if a second-level tag "k" value contains problematic characters, it should be ignored
- if a second-level tag "k" value starts with "addr:", it should be added to a dictionary "address"
- if a second-level tag "k" value does not start with "addr:" but contains ":", you can process it the same as any other tag
- if there is a second ":" that separates the type/direction of a street (a key such as "addr:street:name"), the tag should be ignored
End of explanation
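The example document that originally followed "like this" did not survive in this extract; reconstructed from the shape_element logic above, a shaped document looks roughly like the following (all field values are purely illustrative).
example_shaped_node = {
    "id": "261114295",
    "type": "node",
    "pos": [42.3573717, -71.0586213],
    "created": {
        "version": "2",
        "changeset": "11129782",
        "timestamp": "2012-03-28T18:31:23Z",
        "user": "someUser",
        "uid": "451048"
    },
    "address": {
        "housenumber": "5158",
        "street": "North Lincoln Avenue",
        "postcode": "02134"
    }
}
# "way" documents additionally carry a "node_refs" list of referenced node ids.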
import codecs
import json
def process_map(file_in, pretty = False):
# You do not need to change this file
file_out = "{0}.json".format(file_in)
data = []
with codecs.open(file_out, "w") as fo:
for _, element in ET.iterparse(file_in):
el = shape_element(element)
if el:
data.append(el)
if pretty:
fo.write(json.dumps(el, indent=2)+"\n")
else:
fo.write(json.dumps(el) + "\n")
return data
process_map(filename)
Explanation: It's time to parse the XML, shape the elements, and write to a json file
End of explanation
import os
print "The downloaded XML file is {} MB".format(os.path.getsize(filename)/1.0e6) # convert from bytes to megabytes
print "The json file is {} MB".format(os.path.getsize(filename + ".json")/1.0e6) # convert from bytes to megabytes
Explanation: 5. Data Overview
Check the size of XML and JSON files
End of explanation
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen("mongod", preexec_fn = os.setsid)
Explanation: Execute mongod to run MongoDB
Use the subprocess module to run shell commands.
End of explanation
from pymongo import MongoClient
db_name = "osm"
client = MongoClient('localhost:27017')
db = client[db_name]
Explanation: http://sharats.me/the-ever-useful-and-neat-subprocess-module.html
Connect database with pymongo
End of explanation
# Build mongoimport command
collection = filename[:filename.find(".")]
#print collection
working_directory = "/Users/ducvu/Documents/ud032-master/final_project/"
json_file = filename + ".json"
#print json_file
mongoimport_cmd = "mongoimport --db " + db_name + \
" --collection " + collection + \
" --file " + working_directory + json_file
#print mongoimport_cmd
# Before importing, drop collection if it exists
if collection in db.collection_names():
print "dropping collection"
db[collection].drop()
# Execute the command
print "Executing: " + mongoimport_cmd
subprocess.call(mongoimport_cmd.split())
Explanation: When we have to import large amounts of data, mongoimport is recommended.
First build the mongoimport command, then use subprocess.call to execute it.
End of explanation
boston_db = db[collection]
Explanation: Get the collection from the database
End of explanation
boston_db.find().count()
Explanation: Number of Documents
End of explanation
len(boston_db.distinct('created.user'))
Explanation: Number of Unique Users
End of explanation
node_way = boston_db.aggregate([
{"$group" : {"_id" : "$type", "count" : {"$sum" : 1}}}])
pprint.pprint(list(node_way))
Explanation: Number of Nodes and Ways
End of explanation
boston_db.find({"type":"node"}).count()
Explanation: Number of Nodes
End of explanation
boston_db.find({"type":"way"}).count()
Explanation: Number of Ways
End of explanation
top_user = boston_db.aggregate([
{"$match":{"type":"node"}},
{"$group":{"_id":"$created.user","count":{"$sum":1}}},
{"$sort":{"count":-1}},
{"$limit":1}
])
#print(list(top_user))
pprint.pprint(list(top_user))
Explanation: Top Contributing User
End of explanation
type_buildings = boston_db.aggregate([
{"$group":{"_id":"$created.user","count":{"$sum":1}}},
{"$group":{"_id":{"postcount":"$count"},"num_users":{"$sum":1}}},
{"$project":{"_id":0,"postcount":"$_id.postcount","num_users":1}},
{"$sort":{"postcount":1}},
{"$limit":1}
])
pprint.pprint(list(type_buildings))
Explanation: Number of users having only 1 post
End of explanation
boston_db.find({"address.street" : {"$exists" : 1}}).count()
Explanation: Number of Documents Containing a Street Address
End of explanation
zipcodes = boston_db.aggregate([
{"$match" : {"address.postcode" : {"$exists" : 1}}}, \
{"$group" : {"_id" : "$address.postcode", "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}])
#for document in zipcodes:
# print(document)
pprint.pprint(list(zipcodes))
Explanation: Zip codes
End of explanation
cities = boston_db.aggregate([{"$match" : {"address.city" : {"$exists" : 1}}}, \
{"$group" : {"_id" : "$address.city", "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}, \
{"$limit" : 5}])
#for city in cities :
# print city
pprint.pprint(list(cities))
Explanation: Top 5 Most Common Cities
End of explanation
amenities = boston_db.aggregate([
{"$match" : {"amenity" : {"$exists" : 1}}}, \
{"$group" : {"_id" : "$amenity", "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}, \
{"$limit" : 10}])
#for document in amenities:
# print document
pprint.pprint(list(amenities))
amenities = boston_db.aggregate([
{"$match":{"amenity":{"$exists":1},"type":"node"}},
{"$group":{"_id":"$amenity","count":{"$sum":1}}},
{"$sort":{"count":-1}},
{"$limit":10}
])
pprint.pprint(list(amenities))
Explanation: Top 10 Amenities
End of explanation
type_buildings = boston_db.aggregate([
{'$match': {'building': {'$exists': 1}}},
{'$group': { '_id': '$building','count': {'$sum': 1}}},
{'$sort': {'count': -1}}, {'$limit': 20}
])
pprint.pprint(list(type_buildings))
Explanation: Most common building types
End of explanation
religions = boston_db.aggregate([
{"$match" : {"amenity" : "place_of_worship"}}, \
{"$group" : {"_id" : {"religion" : "$religion", "denomination" : "$denomination"}, "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}])
#for document in religions:
# print document
pprint.pprint(list(religions))
Explanation: Top Religions with Denominations
End of explanation
leisures = boston_db.aggregate([{"$match" : {"leisure" : {"$exists" : 1}}}, \
{"$group" : {"_id" : "$leisure", "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}, \
{"$limit" : 10}])
#for document in leisures:
# print document
pprint.pprint(list(leisures))
Explanation: Top 10 Leisures
End of explanation
universities = boston_db.aggregate([
{"$match" : {"amenity" : "university"}}, \
{"$group" : {"_id" : {"name" : "$name"}, "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}},
{"$limit":15}
])
pprint.pprint(list(universities))
Explanation: Top 15 Universities
End of explanation
schools = boston_db.aggregate([
{"$match" : {"amenity" : "school"}}, \
{"$group" : {"_id" : {"name" : "$name"}, "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}},
{"$limit":10}
])
pprint.pprint(list(schools))
Explanation: Top 10 Schools
End of explanation
prisons = boston_db.aggregate([
{"$match" : {"amenity" : "prison"}}, \
{"$group" : {"_id" : {"name" : "$name"}, "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}}])
pprint.pprint(list(prisons))
Explanation: Top Prisons
End of explanation
hospitals = boston_db.aggregate([
{"$match" : {"amenity" : "hospital"}}, \
{"$group" : {"_id" : {"name" : "$name"}, "count" : {"$sum" : 1}}}, \
{"$sort" : {"count" : -1}},
{"$limit":10}
])
pprint.pprint(list(hospitals))
Explanation: Top 10 Hospitals
End of explanation
gas_station_brands = boston_db.aggregate([
{"$match":{"brand":{"$exists":1},"amenity":"fuel"}},
{"$group":{"_id":"$brand","count":{"$sum":1}}},
{"$sort":{"count":-1}},
{"$limit":10}
])
pprint.pprint(list(gas_station_brands))
Explanation: Most popular gas station brands
End of explanation
fast_food = boston_db.aggregate([
    {"$match":{"cuisine":{"$exists":1},"amenity":"fast_food"}},
    {"$group":{"_id":"$cuisine","count":{"$sum":1}}},
    {"$sort":{"count":-1}},
    {"$limit":10}
])
pprint.pprint(list(fast_food))
Explanation: Most popular cuisines in fast foods
End of explanation
banks = boston_db.aggregate([
{"$match":{"name":{"$exists":1},"amenity":"bank"}},
{"$group":{"_id":"$name","count":{"$sum":1}}},
{"$sort":{"count":-1}},
{"$limit":10}
])
pprint.pprint(list(banks))
Explanation: Most popular banks
End of explanation
restaurants = boston_db.aggregate([
{"$match":{"name":{"$exists":1},"amenity":"restaurant"}},
{"$group":{"_id":"$name","count":{"$sum":1}}},
{"$sort":{"count":-1}},
{"$limit":10}
])
pprint.pprint(list(restaurants))
Explanation: Most popular restaurants
End of explanation
from geopy.geocoders import Nominatim
geolocator = Nominatim()
location = geolocator.reverse("42.3725677, -71.1193068")
print(location.address)
Explanation: 6. Additional Ideas
Analyzing the Boston data, I found that not all nodes or ways include city information, since their geographical position is only represented by coordinates within regions of a city. What could be done in this case is to check which city each node or way belongs to based on its latitude and longitude, and ensure that the "address.city" property is properly populated. By doing so, we could compute statistics related to cities in a much more reliable way. This, I think, is the biggest benefit of anticipating problems and implementing improvements to the data you want to analyze: real-world data are very susceptible to being incomplete, noisy, and inconsistent, and low-quality data lead to poor-quality analysis.
Extending this open-source project to include data such as user reviews of establishments, subjective boundaries of good and bad neighborhoods, housing prices, school reviews, walkability/bikeability, quality of mass transit, and so on would form a solid foundation for robust recommender systems. These could aid users in anything from finding a new home or apartment to deciding where to spend a weekend afternoon.
Another way to address missing information in the region would be to use gamification or crowdsourcing to encourage more people to contribute to the map, much as mobile apps such as Waze and Minutely have already done by making users responsible for improving the app and the social network around it.
A different application of this project is the problem of defining city boundaries. Transportation networks (street networks) and the built environment can be good indicators of a metropolitan area; combining them with an elementary clustering technique, we can consider two street intersections to belong to the same cluster if the distance between them is below a given threshold (in metres). The geospatial information then gives a good definition of city boundaries through spatial urban networks.
Interestingly, we can also use the geospatial coordinates to find the country/city name (searching OSM data by name and address and generating synthetic addresses for OSM points). This problem is called reverse geocoding, which maps geospatial coordinates to a location name, and <a href="http://wiki.openstreetmap.org/wiki/Nominatim#Reverse_Geocoding">Nominatim</a> from OpenStreetMap enables us to do that.
End of explanation
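As a rough sketch of the distance-threshold clustering idea mentioned above (the helper names, the 300 m threshold, and the sample coordinates are hypothetical, not taken from the Boston data):
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    # Great-circle distance between two (lat, lon) points, in metres.
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def cluster_intersections(points, threshold_m=300):
    # Naive O(n^2) single-linkage clustering: two intersections join the same
    # cluster when they are closer than threshold_m metres.
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if haversine_m(points[i], points[j]) < threshold_m:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())

# Hypothetical example: three nearby intersections and one far away -> 2 clusters.
demo = [(42.3601, -71.0589), (42.3605, -71.0592), (42.3610, -71.0580), (42.45, -71.2)]
print(len(cluster_intersections(demo)))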
from geopy.geocoders import Nominatim
geolocator = Nominatim()
vatican=(41.89888433, 12.45376451)
location = geolocator.reverse(vatican)
print(location.address)
from geopy.geocoders import Nominatim
geolocator = Nominatim()
artic=(-86.06303611,6.81517107)
location = geolocator.reverse(artic)
print(location.address)
Explanation: However, a potential problem with reverse geocoding is that it may give odd results near the poles and the international date line, or for cities within cities: certain locations in Rome, for example, may return "Vatican City", depending on the lat/lon stored in the database for each place.
For example, the Pontificio Collegio Teutonico di Santa Maria in Campo Santo (Collegio Teutonico) is located in Vatican City, but the result for the given set of coordinates places it in Roma, Italia.
End of explanation
os.killpg(pro.pid, signal.SIGTERM) # Send the signal to all the process groups, killing the MongoDB instance
Explanation: Despite the many issues with reverse geocoding, another benefit of this project is that it can be applied to disease mapping: latitude and longitude information can be mapped to the plaintext addresses of patients, helping to identify patterns, correlates, and predictors of disease in academia, government, and the private sector, given the widespread availability of geographic information.
7. Conclusion
This review of the data is cursory, but it suggests that the Boston area map is still somewhat incomplete, though I believe the data have been well cleaned and represented over the course of this project.
8. References
<a href="http://wiki.openstreetmap.org/wiki/Main_Page">OpenStreetMap Wiki Page</a>
<a href="https://wiki.openstreetmap.org/wiki/OSM_XML">OpenStreetMap Wiki Page - OSM XML
</a>
<a href="http://wiki.openstreetmap.org/wiki/Map_Features">OpenStreetMap Map Features</a>
<a href="https://docs.python.org/2/library/re.html#search-vs-match">Python Regular Expressions</a>
<a href="https://docs.mongodb.org/v2.4/reference/operator/">MongoDB Operators</a>
<a href="http://www.choskim.me/how-to-install-mongodb-on-apples-mac-os-x/">Install MongoDB on Apple's Mac OS X</a>
<a href="https://books.google.com/books?id=_VkrAQAAQBAJ&pg=PA241&lpg=PA241&dq=execute+mongodb+command+in+ipython&source=bl&ots=JqnwlwRvkN&sig=h-TrwspKAmHt1g1ELItnWkDmRHs&hl=en&sa=X&ved=0ahUKEwiJnaiikIrLAhUM8CYKHZ8mBrcQ6AEILzAD#v=onepage&q=execute%20mongodb%20command%20in%20ipython&f=false/">Install MongoDB</a>
<a href="http://michaelcrump.net/how-to-run-html-files-in-your-browser-from-github/"> Run HTML files in your Browser from GitHub </a>
<a href="http://eberlitz.github.io/2015/09/18/data-wrangle-openstreetmaps-data/">Data Wrangling OpenStreetMap 1</a>
<a href="https://htmlpreview.github.io/?https://github.com/jdamiani27/Data-Wrangling-with-MongoDB/blob/master/Final_Project/OSM_wrangling.html#Top-Contributing-User">Data Wrangling OpenStreetMap 2</a>
<a href="http://stackoverflow.com/questions/6159074/given-the-lat-long-coordinates-how-can-we-find-out-the-city-country">Find the city from lat-long coordinates</a>
<a href="http://ij-healthgeographics.biomedcentral.com/articles/10.1186/1476-072X-5-56">Disease Mapping</a>
<a href="https://myadventuresincoding.wordpress.com/2011/10/02/mongodb-geospatial-queries/">Mongodb geospatial queries</a>
<a href="http://tugdualgrall.blogspot.com/2014/08/introduction-to-mongodb-geospatial.html">Intro to mongodb geospatial</a>
<a href="http://www.longitude-latitude-maps.com/city/231_0,Vatican+City,Vatican+City">Long Lat Maps</a>
<a href="http://www.spatialcomplexity.info/files/2015/10/BATTY-JRSInterface-2015.pdf">City Boundaries</a>
<a href="http://www.innovation-cities.com/how-do-you-define-a-city-4-definitions-of-city-boundaries/1314">City Boundaries 2</a>
End of explanation |
1,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian analysis of the Curtis Flowers trials
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
Step1: On September 5, 2020, prosecutors in Mississippi dropped charges against Curtis Flowers, freeing him after 23 years of incarceration.
Flowers had been tried six times for a 1996 multiple murder. Two trials ended in a mistrial due to a hung jury; four trials ended in convictions.
According to this NPR report
After each conviction, a higher court struck down the initial ruling. The latest ruling invalidating Flowers' conviction, and death sentence, came from the U.S. Supreme Court in June of last year. The justices noted the Mississippi Supreme Court had found that in three prior convictions the prosecution had misrepresented evidence and deliberately eliminated Black jurors.
Since the racial composition of the juries was the noted reason the last conviction was invalidated, the purpose of this article is to explore the relationship between the composition of the juries and the outcome of the trials.
Flowers' trials were the subject of the In the Dark podcast, which reported the racial composition of the juries and the outcomes
Step2: To prepare for the updates, I'll form a joint distribution of the two probabilities.
Step3: Here's how we compute the update.
Assuming that a guilty verdict must be unanimous, the probability of conviction is
$ p = p_1^{n_1} ~ p_2^{n_2}$
where
$p_1$ is the probability a white juror votes to convict
$p_2$ is the probability a black juror votes to convict
$n_1$ is the number of white jurors
$n_2$ is the number of black jurors
The probability of an acquittal or hung jury is the complement of $p$.
The following function performs a Bayesian update given the composition of the jury and the outcome, either 'guilty' or 'hung'. We could also do an update for an acquittal, but since that didn't happen, I didn't implement it.
Step4: I'll use the following function to plot the marginal posterior distributions after each update.
Step5: Here's the update for the first trial.
Step6: Since there were no black jurors for the first trial, we learn nothing about their probability of conviction, so the posterior distribution is the same as the prior.
The posterior distribution for white voters reflects the data that 12 of them voted to convict.
Here are the posterior distributions after the second trial.
Step7: And the third.
Step8: Since the first three verdicts were guilty, we infer that all 36 jurors voted to convict, so the estimated probabilities for both groups are high.
The fourth trials ended in a mistrial due to a hung jury, which implies that at least one juror refused to vote to convict. That decreases the estimated probabilities for both juror pools, but it has a bigger effect on the estimate for black jurors because the total prior data pertaining to black jurors is less, so the same amount of new data moves the needle more.
Step9: The effect of the fifth trial is similar; it decreases the estimates for both pools, but the effect on the estimate for black jurors is greater.
Step10: Finally, here are the posterior distributions after all six trials.
Step11: The posterior distributions for the two pools are substantially different. Here are the posterior means.
Step12: Based on the outcomes of all six trials, we estimate that the probability is 98% that a white juror would vote to convict, and the probability is 68% that a black juror would vote to convict.
Again, those results are based on the modeling simplifications that
All six juries saw essentially the same evidence,
The probabilities we're estimating did not change over the period of the trials, and
Interactions between jurors did not have substantial effects on their votes.
Prediction
Now we can use the joint posterior distribution to estimate the probability of conviction as a function of the composition of the jury.
I'll draw a sample from the joint posterior distribution.
Step13: Here's the probability that white jurors were more likely to convict.
Step14: The following function takes this sample and a hypothetical composition and returns the posterior predictive distribution for the probability of conviction.
Step15: According to Wikipedia
Step16: And with 6 white and 6 black jurors.
Step17: With a jury that represents the population of Montgomery County, the probability Flowers would be convicted is 14-15%.
However, notice that the credible intervals for these estimates are quite wide. Based on the data, the actual probabilities could be in the range from near 0 to 50%.
The following figure shows the probability of conviction as a function of the number of black jurors.
The probability of conviction is highest with an all-white jury, and drops quickly if there are a few black jurors. After that, the addition of more black jurors has a relatively small effect.
These results suggest that all-white juries have a substantially higher probability of convicting a defendant, compared to a jury with even a few non-white jurors.
Step18: Double Check
Let's compute the results a different way to double check.
For the four guilty verdicts, we don't need to make or update the joint distribution; we can update the distributions for the two pools separately.
Step19: We can use the posteriors from those updates as priors and update them based on the two trials that resulted in a hung jury.
Step20: The posterior marginals look the same.
Step21: And yield the same posterior means.
Step22: Here's the probability that a fair jury would convict four times out of six. | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf
from utils import decorate, savefig
Explanation: Bayesian analysis of the Curtis Flowers trials
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
ps = np.linspace(0, 1, 101)
prior_p1 = Pmf(1.0, ps)
prior_p1.index.name = 'p1'
prior_p2 = Pmf(1.0, ps)
prior_p2.index.name = 'p2'
Explanation: On September 5, 2020, prosecutors in Mississippi dropped charges against Curtis Flowers, freeing him after 23 years of incarceration.
Flowers had been tried six times for a 1996 multiple murder. Two trials ended in a mistrial due to a hung jury; four trials ended in convictions.
According to this NPR report
After each conviction, a higher court struck down the initial ruling. The latest ruling invalidating Flowers' conviction, and death sentence, came from the U.S. Supreme Court in June of last year. The justices noted the Mississippi Supreme Court had found that in three prior convictions the prosecution had misrepresented evidence and deliberately eliminated Black jurors.
Since the racial composition of the juries was the noted reason the last conviction was invalidated, the purpose of this article is to explore the relationship between the composition of the juries and the outcome of the trials.
Flowers' trials were the subject of the In the Dark podcast, which reported the racial composition of the juries and the outcomes:
Trial   Jury                 Outcome
1       All white            Guilty
2       11 white, 1 black    Guilty
3       11 white, 1 black    Guilty
4       7 white, 5 black     Hung jury
5       9 white, 3 black     Hung jury
6       11 white, 1 black    Guilty
We can use this data to estimate the probability that white and black jurors would vote to convict, and then use those estimates to compute the probability of a guilty verdict.
As a modeling simplification, I'll assume:
The six juries were presented with essentially the same evidence, prosecution case, and defense;
The probabilities of conviction did not change over the years of the trials (from 1997 to 2010); and
Each juror votes independently of the others; that is, I ignore interactions between jurors.
I'll use the same prior distribution for white and black jurors, a uniform distribution from 0 to 1.
End of explanation
from utils import make_joint
joint = make_joint(prior_p2, prior_p1)
prior_pmf = Pmf(joint.stack())
prior_pmf.head()
Explanation: To prepare for the updates, I'll form a joint distribution of the two probabilities.
End of explanation
def update(prior, data):
n1, n2, outcome = data
likelihood = prior.copy()
for p1, p2 in prior.index:
like = p1**n1 * p2**n2
if outcome == 'guilty':
likelihood.loc[p1, p2] = like
elif outcome == 'hung':
likelihood.loc[p1, p2] = 1-like
else:
raise ValueError()
posterior = prior * likelihood
posterior.normalize()
return posterior
Explanation: Here's how we compute the update.
Assuming that a guilty verdict must be unanimous, the probability of conviction is
$ p = p_1^{n_1} ~ p_2^{n_2}$
where
$p_1$ is the probability a white juror votes to convict
$p_2$ is the probability a black juror votes to convict
$n_1$ is the number of white jurors
$n_2$ is the number of black jurors
The probability of an acquittal or hung jury is the complement of $p$.
The following function performs a Bayesian update given the composition of the jury and the outcome, either 'guilty' or 'hung'. We could also do an update for an acquittal, but since that didn't happen, I didn't implement it.
End of explanation
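As a quick numeric illustration of this likelihood (the values below are chosen purely for illustration):
p1, p2, n1, n2 = 0.9, 0.7, 11, 1
p_guilty = p1**n1 * p2**n2      # ~0.22: probability of a unanimous guilty verdict
p_hung = 1 - p_guilty           # ~0.78: probability of an acquittal or hung jury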
from utils import pmf_marginal
def plot_marginals(posterior):
marginal0 = pmf_marginal(posterior, 0)
marginal0.plot(label='white')
marginal1 = pmf_marginal(posterior, 1)
marginal1.plot(label='black')
decorate(xlabel='Probability of voting to convict',
ylabel='PDF',
title='Marginal posterior distributions')
Explanation: I'll use the following function to plot the marginal posterior distributions after each update.
End of explanation
data1 = 12, 0, 'guilty'
posterior1 = update(prior_pmf, data1)
plot_marginals(posterior1)
Explanation: Here's the update for the first trial.
End of explanation
data2 = 11, 1, 'guilty'
posterior2 = update(posterior1, data2)
plot_marginals(posterior2)
Explanation: Since there were no black jurors for the first trial, we learn nothing about their probability of conviction, so the posterior distribution is the same as the prior.
The posterior distribution for white voters reflects the data that 12 of them voted to convict.
Here are the posterior distributions after the second trial.
End of explanation
data3 = 11, 1, 'guilty'
posterior3 = update(posterior2, data3)
plot_marginals(posterior3)
Explanation: And the third.
End of explanation
data4 = 7, 5, 'hung'
posterior4 = update(posterior3, data4)
plot_marginals(posterior4)
Explanation: Since the first three verdicts were guilty, we infer that all 36 jurors voted to convict, so the estimated probabilities for both groups are high.
The fourth trial ended in a mistrial due to a hung jury, which implies that at least one juror refused to vote to convict. That decreases the estimated probabilities for both juror pools, but it has a bigger effect on the estimate for black jurors because the total prior data pertaining to black jurors is less, so the same amount of new data moves the needle more.
End of explanation
data5 = 9, 3, 'hung'
posterior5 = update(posterior4, data5)
plot_marginals(posterior5)
Explanation: The effect of the fifth trial is similar; it decreases the estimates for both pools, but the effect on the estimate for black jurors is greater.
End of explanation
data6 = 11, 1, 'guilty'
posterior6 = update(posterior5, data6)
plot_marginals(posterior6)
Explanation: Finally, here are the posterior distributions after all six trials.
End of explanation
marginal_p1 = pmf_marginal(posterior6, 0)
marginal_p2 = pmf_marginal(posterior6, 1)
marginal_p1.mean(), marginal_p2.mean(),
Explanation: The posterior distributions for the two pools are substantially different. Here are the posterior means.
End of explanation
sample = posterior6.sample(1000)
Explanation: Based on the outcomes of all six trials, we estimate that the probability is 98% that a white juror would vote to convict, and the probability is 68% that a black juror would vote to convict.
Again, those results are based on the modeling simplifications that
All six juries saw essentially the same evidence,
The probabilities we're estimating did not change over the period of the trials, and
Interactions between jurors did not have substantial effects on their votes.
Prediction
Now we can use the joint posterior distribution to estimate the probability of conviction as a function of the composition of the jury.
I'll draw a sample from the joint posterior distribution.
End of explanation
np.mean([p1 > p2 for p1, p2 in sample])
Explanation: Here's the probability that white jurors were more likely to convict.
End of explanation
def prob_guilty(sample, n1, n2):
ps = [p1**n1 * p2**n2 for p1, p2 in sample]
return Pmf.from_seq(ps)
Explanation: The following function takes this sample and a hypothetical composition and returns the posterior predictive distribution for the probability of conviction.
End of explanation
pmf = prob_guilty(sample, 7, 5)
pmf.mean(), pmf.credible_interval(0.9)
Explanation: According to Wikipedia:
As of the 2010 United States Census, there were 10,925 people living in the county. 53.0% were White, 45.5% Black or African American, 0.4% Asian, 0.1% Native American, 0.5% of some other race and 0.5% of two or more races. 0.9% were Hispanic or Latino (of any race).
A jury drawn at random from the population of Montgomery County would be expected to have 5 or 6 black jurors.
Here's the probability of conviction with a panel of 7 white and 5 black jurors.
End of explanation
pmf = prob_guilty(sample, 6, 6)
pmf.mean(), pmf.credible_interval(0.9)
Explanation: And with 6 white and 6 black jurors.
End of explanation
pmf_seq = []
n2s = range(0, 13)
for n2 in n2s:
n1 = 12 - n2
pmf = prob_guilty(sample, n1, n2)
pmf_seq.append(pmf)
means = [pmf.mean() for pmf in pmf_seq]
lows = [pmf.quantile(0.05) for pmf in pmf_seq]
highs = [pmf.quantile(0.95) for pmf in pmf_seq]
means
plt.plot(n2s, means)
plt.fill_between(n2s, lows, highs, color='C0', alpha=0.1)
decorate(xlabel='Number of black jurors',
ylabel='Probability of a guilty verdict',
title='Probability of a guilty verdict vs jury composition',
ylim=[0, 1])
Explanation: With a jury that represents the population of Montgomery County, the probability Flowers would be convicted is 14-15%.
However, notice that the credible intervals for these estimates are quite wide. Based on the data, the actual probabilities could be in the range from near 0 to 50%.
The following figure shows the probability of conviction as a function of the number of black jurors.
The probability of conviction is highest with an all-white jury, and drops quickly if there are a few black jurors. After that, the addition of more black jurors has a relatively small effect.
These results suggest that all-white juries have a substantially higher probability of convicting a defendant, compared to a jury with even a few non-white jurors.
End of explanation
from scipy.stats import binom
k1 = 12 + 11 + 11 + 11
like1 = binom(k1, ps).pmf(k1)
prior_p1 = Pmf(like1, ps)
k2 = 0 + 1 + 1 + 1
like2 = binom(k2, ps).pmf(k2)
prior_p2 = Pmf(like2, ps)
prior_p1.plot()
prior_p2.plot()
Explanation: Double Check
Let's compute the results a different way to double check.
For the four guilty verdicts, we don't need to make or update the joint distribution; we can update the distributions for the two pools separately.
End of explanation
prior = Pmf(make_joint(prior_p2, prior_p1).stack())
posterior = update(prior, data4)
posterior = update(posterior, data5)
Explanation: We can use the posteriors from those updates as priors and update them based on the two trials that resulted in a hung jury.
End of explanation
plot_marginals(posterior)
Explanation: The posterior marginals look the same.
End of explanation
marginal_p1 = pmf_marginal(posterior, 0)
marginal_p2 = pmf_marginal(posterior, 1)
marginal_p1.mean(), marginal_p2.mean(),
Explanation: And yield the same posterior means.
End of explanation
binom.pmf(4, 6, 0.15)
Explanation: Here's the probability that a fair jury would convict four times out of six.
End of explanation |
1,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AveragePooling3D
[pooling.AveragePooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'
Step1: [pooling.AveragePooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'
Step2: [pooling.AveragePooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'
Step3: [pooling.AveragePooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'
Step4: [pooling.AveragePooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'
Step5: [pooling.AveragePooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'
Step6: [pooling.AveragePooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'
Step7: [pooling.AveragePooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'
Step8: [pooling.AveragePooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'
Step9: [pooling.AveragePooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'
Step10: [pooling.AveragePooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'
Step11: [pooling.AveragePooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'
Step12: [pooling.AveragePooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'
Step13: export for Keras.js tests | Python Code:
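The cells below rely on imports and helpers (np, json, Input, AveragePooling3D, Model, DATA, format_decimal) that are defined in the notebook's setup cells, which are not part of this extract. A minimal setup sketch — the exact rounding behaviour of format_decimal is an assumption — might look like:
import json
import numpy as np
from collections import OrderedDict
from keras.models import Model
from keras.layers import Input, AveragePooling3D

# Collected test fixtures, keyed by test name.
DATA = OrderedDict()

def format_decimal(arr, places=6):
    # Assumed helper: round every value to a fixed number of decimal places.
    return [round(x * 10**places) / 10**places for x in arr]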
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: AveragePooling3D
[pooling.AveragePooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 5, 2, 3)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(283)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(284)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(285)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(286)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 5, 4, 2)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(287)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(288)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (4, 4, 4, 2)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(289)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (2, 3, 3, 4)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'
End of explanation
data_in_shape = (2, 3, 3, 4)
L = AveragePooling3D(pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'
End of explanation
data_in_shape = (3, 4, 4, 3)
L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(292)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling3D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.AveragePooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'
End of explanation
import os
filename = '../../../test/data/layers/pooling/AveragePooling3D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
1,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Predicting the sex of Boston marathon runners
The data refer to the 2016 Boston Marathon and were retrieved from Kaggle.
Using the Pandas library
Reading and manipulating data is very simple with the Pandas library.
The main objects of this library are
Step2: Naive data analysis
Step3: Graphical representations with Seaborn
Step4: Prediction using the k most similar samples (k-nearest neighbours)
For a quick description of the method, see the official documentation | Python Code:
import pandas as pd
# The race time needs to be converted to seconds so that it can be compared in the regressions
bm = pd.read_csv('./data/marathon_results_2016.csv')
bm[:3]
type(bm)
bm[['Age', 'Official Time']][:3]
bm.info()
# Vorremmo predire il sesso (label) in base all'età e al tempo ufficiale (covariate)
bm = bm[['Age', 'M/F', 'Official Time']]
bm.info()
ts = bm['Age']
print(type(ts))
print(ts[:4])
def time_convert(x):
    '''
    Convert a string in hh:mm:ss format into a number of seconds.
    If the value cannot be parsed, return the number 0.
    '''
    try:
        # When this method is invoked, x is a string such as "2:12:45"
        h, m, s = x.split(':')
        return float(h) * 3600 + float(m) * 60 + float(s)
    except:
        return float('0.0')
# The race time needs to be converted to seconds so that it can be compared in the regressions
bm = pd.read_csv('./data/marathon_results_2016.csv', converters={'Official Time': time_convert})
bm[-10:]
from sklearn.model_selection import train_test_split
# Generate the TRAINING and TEST sets
x_train, x_test, y_train, y_test = train_test_split(bm[['Age', 'Official Time']], list(map(lambda x: int(x == 'M'), bm['M/F'])), random_state=0)
x_train[:3]
y_train[:3]
Explanation: Predicting the sex of Boston marathon runners
The data refer to the 2016 Boston Marathon and were retrieved from Kaggle.
Using the Pandas library
Reading and manipulating data is very simple using the Pandas library.
The main objects of this library are:
Series
DataFrames
Instead of first giving a formal definition of what they are, let's look right away at some simple usage examples.
Before that, let's see how to read a simple .CSV file in Python, which stores the data directly in a DataFrame.
Reading data from a .CSV file
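As an even smaller first illustration of the two objects mentioned above, built by hand (a sketch with made-up values, not taken from the marathon data):
import pandas as pd
s = pd.Series([7620.0, 9105.5, 8410.0], name='Official Time')   # a Series: a labelled one-dimensional array
df = pd.DataFrame({'Age': [25, 41, 33], 'Official Time': s})    # a DataFrame: a table whose columns are Series
print(type(s), type(df), df.shape)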
End of explanation
# Percentage of men vs. women who finished the race
n_male = sum(y_train)
n_female = len(y_train) - n_male
perc_male = round(100*n_male/(n_male+n_female), 2)
perc_female = 100 - perc_male
print("Men: ", n_male, ", ", perc_male, "% - Women: ", n_female, ", ", perc_female, "%", sep='')
def ConstantModel(Xs, Y):
    # Return a function that, for every requested prediction, always predicts "male" (encoded as 1)
    def predict(X):
        return [1] * len(X)
    return predict
# Test the constant model
LearnedModel = ConstantModel(x_train, y_train)
y_pred = LearnedModel(x_test)
# Look first at the fundamental error measures...
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
print(mean_absolute_error(y_test, y_pred), mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred))
# ... and then at the other statistics seen in class
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
print('ACCURACY:',accuracy_score(y_test, y_pred))
print('REPORT:',classification_report(y_test, y_pred))
print('Confusion Matrix:', confusion_matrix(y_test, y_pred))
Explanation: Naive data analysis
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
sns.jointplot(data=bm, x='Age', y='Official Time', kind='reg', color='g')
plt.show()
bm_female = bm[bm['M/F'] == 'F']
sns.jointplot(data=bm_female, x='Age', y='Official Time', kind='reg', color='r')
plt.show()
bm_male = bm[bm['M/F'] == 'M']
sns.jointplot(data=bm_male, x='Age', y='Official Time', kind='reg', color='b')
plt.show()
Explanation: Graphical representations with Seaborn
End of explanation
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
# Input to this function must be "DataFrames"
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
print(mean_absolute_error(y_test, y_pred), mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred))
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
print('ACCURACY:',accuracy_score(y_test, y_pred))
print('REPORT:',classification_report(y_test, y_pred))
print('Confusion Matrix:', confusion_matrix(y_test, y_pred))
Explanation: Prediction via the k most similar samples
For a quick description of the method, see the official documentation:
Introduction: Nearest Neighbors
Method to use: KNeighborsClassifier
IMPORTANT: To learn how to program, you have to learn to read and study the DOCUMENTATION of the libraries you use!
EXERCISE 1: Try the classifier, changing the number of neighbors used (the n_neighbors parameter). Try two different algorithms: ball_tree and kd_tree (read the documentation to find out how). Comment on the difference between the results obtained with the two methods.
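A minimal sketch of one way to approach Exercise 1, reusing the train/test split and the accuracy_score import from the cells above (the value of n_neighbors is only a placeholder to experiment with):
from sklearn import neighbors
for algo in ('ball_tree', 'kd_tree'):
    clf = neighbors.KNeighborsClassifier(n_neighbors=15, algorithm=algo)
    clf.fit(x_train, y_train)
    print(algo, 'accuracy:', accuracy_score(y_test, clf.predict(x_test)))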
End of explanation |
1,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluate classification accuracy
This notebook demonstrates how to evaluate classification accuracy of "cross-validated" simulated communities. Due to the unique nature of this analysis, the metrics that we use to evaluate classification accuracy are different from those used for mock.
The key measure here is rate of match vs. overclassification, hence P/R/F are not useful metrics. Instead, we define and measure the following as percentages
Step1: Evaluate classification results
First, enter in filepaths and directory paths where your data are stored, and the destination
Step2: This cell performs the classification evaluation and should not be modified.
Step3: Plot classification accuracy
Finally, we plot our results. Line plots show the mean +/- 95% confidence interval for each classification result at each taxonomic level (1 = phylum, 6 = species) in each dataset tested. Do not modify the cell below, except to adjust the color_pallette used for plotting. This palette can be a dictionary of colors for each group, as shown below, or a seaborn color palette.
match_ratio = proportion of correct matches.
underclassification_ratio = proportion of assignments to correct lineage but to a lower level than expected.
misclassification_ratio = proportion of assignments to an incorrect lineage.
Step4: Per-level classification accuracy statistic
Kruskal-Wallis FDR-corrected p-values comparing classification methods at each level of taxonomic assignment
Step5: Heatmaps of method accuracy by parameter
Heatmaps show the performance of individual method/parameter combinations at each taxonomic level, in each reference database (i.e., for bacterial and fungal simulated datasets individually).
Step6: Rank-based statistics comparing the performance of the optimal parameter setting run for each method on each data set.
Rank parameters for each method to determine the best parameter configuration within each method. Count best values in each column indicate how many samples a given method achieved within one mean absolute deviation of the best result (which is why they may sum to more than the total number of samples).
Step7: Rank performance of optimized methods
Now we rank the top-performing method/parameter combination for each method at genus and species levels. Methods are ranked by top F-measure, and the average value for each metric is shown (rather than count best as above). F-measure distributions are plotted for each method, and compared using paired t-tests with FDR-corrected P-values. This cell does not need to be altered, unless if you wish to change the metric used for sorting best methods and for plotting. | Python Code:
from tax_credit.framework_functions import (novel_taxa_classification_evaluation,
extract_per_level_accuracy)
from tax_credit.eval_framework import parameter_comparisons
from tax_credit.plotting_functions import (pointplot_from_data_frame,
heatmap_from_data_frame,
per_level_kruskal_wallis,
rank_optimized_method_performance_by_dataset)
import pandas as pd
from os.path import expandvars, join, exists
from glob import glob
from IPython.display import display, Markdown
Explanation: Evaluate classification accuracy
This notebook demonstrates how to evaluate classification accuracy of "cross-validated" simulated communities. Due to the unique nature of this analysis, the metrics that we use to evaluate classification accuracy are different from those used for mock.
The key measure here is rate of match vs. overclassification, hence P/R/F are not useful metrics. Instead, we define and measure the following as percentages:
* Match vs. overclassification rate
* Match: exact match at level L
* underclassification: lineage assignment is correct, but shorter than expected (e.g., not to species level)
* misclassification: incorrect assignment
Where L = taxonomic level being tested
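As a toy illustration of the three outcomes (a hedged sketch with invented lineages, not code from the tax-credit framework):
expected = 'k__Bacteria;p__Firmicutes;c__Bacilli'
observed = ['k__Bacteria;p__Firmicutes;c__Bacilli',    # match
            'k__Bacteria;p__Firmicutes',               # underclassification: correct lineage, shorter than expected
            'k__Bacteria;p__Proteobacteria;c__Gamma']  # misclassification: wrong lineage
match = sum(o == expected for o in observed) / len(observed)
under = sum(o != expected and expected.startswith(o) for o in observed) / len(observed)
mis = 1.0 - match - under
print(match, under, mis)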
Functions
End of explanation
project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment")
analysis_name = "cross-validated"
precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
expected_results_dir = join(project_dir, "data", analysis_name)
summary_fp = join(precomputed_results_dir, 'evaluate_classification_summary.csv')
results_dirs = glob(join(precomputed_results_dir, '*', '*', '*', '*'))
Explanation: Evaluate classification results
First, enter in filepaths and directory paths where your data are stored, and the destination
End of explanation
if not exists(summary_fp):
accuracy_results = novel_taxa_classification_evaluation(results_dirs, expected_results_dir,
summary_fp, test_type='cross-validated')
else:
accuracy_results = pd.DataFrame.from_csv(summary_fp)
Explanation: This cell performs the classification evaluation and should not be modified.
End of explanation
color_pallette={
'rdp': 'seagreen', 'sortmerna': 'gray', 'vsearch': 'brown',
'uclust': 'blue', 'blast': 'black', 'blast+': 'purple', 'q2-nb': 'pink',
}
level_results = extract_per_level_accuracy(accuracy_results)
y_vars = ['Precision', 'Recall', 'F-measure']
pointplot_from_data_frame(level_results, "level", y_vars,
group_by="Dataset", color_by="Method",
color_pallette=color_pallette)
Explanation: Plot classification accuracy
Finally, we plot our results. Line plots show the mean +/- 95% confidence interval for each classification result at each taxonomic level (1 = phylum, 6 = species) in each dataset tested. Do not modify the cell below, except to adjust the color_pallette used for plotting. This palette can be a dictionary of colors for each group, as shown below, or a seaborn color palette.
match_ratio = proportion of correct matches.
underclassification_ratio = proportion of assignments to correct lineage but to a lower level than expected.
misclassification_ratio = proportion of assignments to an incorrect lineage.
End of explanation
result = per_level_kruskal_wallis(level_results, y_vars, group_by='Method',
dataset_col='Dataset', alpha=0.05,
pval_correction='fdr_bh')
result
Explanation: Per-level classification accuracy statistic
Kruskal-Wallis FDR-corrected p-values comparing classification methods at each level of taxonomic assignment
End of explanation
heatmap_from_data_frame(level_results, metric="Precision", rows=["Method", "Parameters"], cols=["Dataset", "level"])
heatmap_from_data_frame(level_results, metric="Recall", rows=["Method", "Parameters"], cols=["Dataset", "level"])
heatmap_from_data_frame(level_results, metric="F-measure", rows=["Method", "Parameters"], cols=["Dataset", "level"])
Explanation: Heatmaps of method accuracy by parameter
Heatmaps show the performance of individual method/parameter combinations at each taxonomic level, in each reference database (i.e., for bacterial and fungal simulated datasets individually).
End of explanation
for method in level_results['Method'].unique():
top_params = parameter_comparisons(level_results, method, metrics=y_vars,
sample_col='Dataset', method_col='Method',
dataset_col='Dataset')
display(Markdown('## {0}'.format(method)))
display(top_params[:10])
Explanation: Rank-based statistics comparing the performance of the optimal parameter setting run for each method on each data set.
Rank parameters for each method to determine the best parameter configuration within each method. Count best values in each column indicate how many samples a given method achieved within one mean absolute deviation of the best result (which is why they may sum to more than the total number of samples).
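For intuition only, a small sketch of the "within one mean absolute deviation of the best" criterion applied to invented scores for a single sample (not code from the tax-credit package):
import numpy as np
scores = {'blast': 0.81, 'rdp': 0.80, 'uclust': 0.72}              # one sample, one score per method
vals = np.array(list(scores.values()))
mad = np.mean(np.abs(vals - vals.mean()))
print({m: s >= vals.max() - mad for m, s in scores.items()})       # True means the method counts as "best" for this sample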
End of explanation
rank_optimized_method_performance_by_dataset(level_results,
metric="F-measure",
level="level",
level_range=range(6,7),
display_fields=["Method",
"Parameters",
"Precision",
"Recall",
"F-measure"],
paired=True,
parametric=True,
color=None,
color_pallette=color_pallette)
Explanation: Rank performance of optimized methods
Now we rank the top-performing method/parameter combination for each method at genus and species levels. Methods are ranked by top F-measure, and the average value for each metric is shown (rather than count best as above). F-measure distributions are plotted for each method, and compared using paired t-tests with FDR-corrected P-values. This cell does not need to be altered, unless if you wish to change the metric used for sorting best methods and for plotting.
End of explanation |
1,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python part XII (And a discussion of stochastic differential equations)
Activity 1
Step2: Programs like the Firefox browser are full of assertions
Step3: The preconditions on lines 6, 8, and 9 catch invalid inputs
Step4: but if we normalize one that’s wider than it is tall, the assertion is triggered | Python Code:
numbers = [1.5, 2.3, 0.7, -0.001, 4.4]
total = 0.0
for num in numbers:
assert num > 0.0, 'Data should only contain positive values'
total += num
print('total is:', total)
Explanation: Introduction to Python part XII (And a discussion of stochastic differential equations)
Activity 1: Discussion stochastic differential equations
What is Lipschitz continuity? How can this be interpreted relative to differentiability? How is this related to initial value problems?
What do we refer to when we say a Dirac "measure" or Dirac "distribution"? How is this related to an initial value problem?
What is the formal equation of an SDE? What is the "drift" and what is the "diffusion"? (A reference form is sketched after this list.)
What is "additive noise"? What is special about such a system?
What is a path solution to an SDE? How is this related to the Fokker-Planck equations?
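For reference during the discussion, the general Itô form usually written for an SDE is sketched below (standard textbook notation, not taken from the course notes):
$$ dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dW_t $$
Here $\mu$ is the drift, $\sigma$ is the diffusion, and $W_t$ is a Wiener process; the noise is called additive when $\sigma$ does not depend on $X_t$.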
Activity 2: Defensive Programming
Our previous lessons have introduced the basic tools of programming: variables and lists, file I/O, loops, conditionals, and functions. What they haven’t done is show us how to tell whether a program is getting the right answer, and how to tell if it’s still getting the right answer as we make changes to it.
To achieve that, we need to:
Write programs that check their own operation.
Write and run tests for widely-used functions.
Make sure we know what “correct” actually means.
The good news is, doing these things will speed up our programming, not slow it down. As in real carpentry — the kind done with lumber — the time saved by measuring carefully before cutting a piece of wood is much greater than the time that measuring takes.
The first step toward getting the right answers from our programs is to assume that mistakes will happen and to guard against them. This is called defensive programming, and the most common way to do it is to add assertions to our code so that it checks itself as it runs. An assertion is simply a statement that something must be true at a certain point in a program. When Python sees one, it evaluates the assertion’s condition. If it’s true, Python does nothing, but if it’s false, Python halts the program immediately and prints the error message if one is provided. For example, this piece of code halts as soon as the loop encounters a value that isn’t positive:
End of explanation
def normalize_rectangle(rect):
    """Normalizes a rectangle so that it is at the origin and 1.0 units long on its longest axis.
    Input should be of the format (x0, y0, x1, y1).
    (x0, y0) and (x1, y1) define the lower left and upper right corners
    of the rectangle, respectively."""
assert len(rect) == 4, 'Rectangles must contain 4 coordinates'
x0, y0, x1, y1 = rect
assert x0 < x1, 'Invalid X coordinates'
assert y0 < y1, 'Invalid Y coordinates'
dx = x1 - x0
dy = y1 - y0
if dx > dy:
scaled = float(dx) / dy
upper_x, upper_y = 1.0, scaled
else:
scaled = float(dx) / dy
upper_x, upper_y = scaled, 1.0
assert 0 < upper_x <= 1.0, 'Calculated upper X coordinate invalid'
assert 0 < upper_y <= 1.0, 'Calculated upper Y coordinate invalid'
return (0, 0, upper_x, upper_y)
Explanation: Programs like the Firefox browser are full of assertions: 10-20% of the code they contain are there to check that the other 80–90% are working correctly. Broadly speaking, assertions fall into three categories:
A precondition is something that must be true at the start of a function in order for it to work correctly.
A postcondition is something that the function guarantees is true when it finishes.
An invariant is something that is always true at a particular point inside a piece of code.
For example, suppose we are representing rectangles using a tuple of four coordinates (x0, y0, x1, y1), representing the lower left and upper right corners of the rectangle. In order to do some calculations, we need to normalize the rectangle so that the lower left corner is at the origin and the longest side is 1.0 units long. This function does that, but checks that its input is correctly formatted and that its result makes sense:
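(The normalize_rectangle function it refers to appears earlier in this entry.) A smaller stand-alone sketch of a precondition and a postcondition; an invariant would be an analogous assert placed at a fixed point inside the loop:
def average(values):
    assert len(values) > 0, 'precondition: need at least one value'
    total = 0.0
    for v in values:
        total += v
    result = total / len(values)
    assert min(values) <= result <= max(values), 'postcondition: average must lie within the data range'
    return result
print(average([1.5, 2.3, 0.7]))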
End of explanation
print(normalize_rectangle( (0.0, 1.0, 2.0) )) # missing the fourth coordinate
print(normalize_rectangle( (4.0, 2.0, 1.0, 5.0) )) # X axis inverted
print(normalize_rectangle( (0.0, 0.0, 1.0, 5.0) ))
Explanation: The preconditions on lines 6, 8, and 9 catch invalid inputs:
End of explanation
print(normalize_rectangle( (0.0, 0.0, 5.0, 1.0) ))
Explanation: but if we normalize one that’s wider than it is tall, the assertion is triggered:
End of explanation |
1,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo Gaussian Generation
Illustrate the generation of a d-dimensional Gaussian image
Description
The sequence below shows a technique to a d-dimensional Gaussian image,
understanding the difficulties in computing an equation with vector and
matrix notation.
One dimensional case
The Gaussian function is a symmetric bell shaped function that is characterized by
two parameters
Step1: Computing the Gaussian function at many points, using the same code
Step2: d-dimensional Case
If a sample point is a vector of dimension d | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
# First case: unidimensional
# x: single value (single sample) or a row of values (many samples)
# mu and sigma are scalar
def fun1(x, mu, sigma):
return (1./(np.sqrt(2 * np.pi) * sigma)) * np.exp(-1./2 * ((x-mu)/ sigma)**2)
print('Computing the Gaussian function at a single point')
ex1 = "fun1( 10, 10, 5)"
print(ex1,"=>", eval(ex1))
ex2 = "fun1( 15, 10, 5)"
print(ex2,"=>", eval(ex2))
Explanation: Demo Gaussian Generation
Illustrate the generation of a d-dimensional Gaussian image
Description
The sequence below shows a technique to generate a d-dimensional Gaussian image,
illustrating the difficulties in computing an equation with vector and
matrix notation.
One dimensional case
The Gaussian function is a symmetric bell shaped function that is characterized by
two parameters: mean and variance. The one-dimensional Gaussian function at point
$x$ is given by the following equation, with mean $\mu$ and variance
$\sigma^2$. The function is maximum at point $x=\mu$ and it falls by the
factor $e^{-\frac{1}{2}}$ (approx. 0.6) at the points $x=\mu \pm \sigma$, one standard deviation away from the mean.
Equation
$$ f(x) = \frac{1}{\sqrt{2 \pi} \sigma} exp\left[ -\frac{1}{2} \frac{\left(x - \mu\right)^2}{\sigma^2} \right] $$
As this function is scalar, it is possible to compute this function on N samples represented
as an N x 1 vector ${\mathbf x} = [x_0, x_1, x_2, \ldots x_{N-1}]^\mathrm{T}$:
$$ f({\mathbf x}) = \frac{1}{\sqrt{2 \pi} \sigma} exp\left[ -\frac{1}{2} \frac{\left({\mathbf x} - \mu\right)^2}{\sigma^2} \right]$$
End of explanation
ex3 = "fun1( np.array([[10,15,20]]).T, 10, 5)"
print(ex3,"=>\n", eval(ex3))
x = np.arange(-5,26).reshape(-1,1)
y = fun1(x, 10, 5)
plt.plot(x,y)
Explanation: Computing the Gaussian function at many points, using the same code
End of explanation
# Second case: d-dimensional, single sample
# x: single column vector (single sample with d characteristics)
# mu: column vector, 1 x d
# sigma: covariance matrix, square and symmetric, d x d
def funn(X, MU, COV):
d = len(X)
Xc = X - MU
aux = np.linalg.inv(COV).dot(Xc)
k = 1. * (Xc.T).dot(aux)
return (1./((2 * np.pi)**(d/2.) * np.sqrt(np.linalg.det(COV)))) * np.exp(-1./2 * k)
print('\ncomputing the Gaussian function at a single 3-D sample')
X1 = np.array([[10],
[5],
[3]])
MU = X1
COV = np.array([[10*10, 0, 0],
[0, 5*5, 0],
[0, 0, 3*3]])
print('X1=',X1)
print('MU=',MU)
print('COV=',COV)
ex4 = "funn( X1, MU, COV)"
print(ex4,"=>", eval(ex4))
print('\nComputing the Gaussian function at two 3-D samples')
print('\nNote that it does not work')
X2 = 1. * X1/2
X = np.hstack([X1,X2])
print('X=',X)
ex5 = "funn( X, MU, COV)"
print(ex5,"=>", eval(ex5))
Explanation: d-dimensional Case
If a sample point is a vector of dimension d: ${\mathbf x} = [x_0, x_1, \ldots x_{d-1}]^T$,
the d-dimensional Gaussian function is characterized by the mean
vector: ${\mathbf \mu} = [\mu_0, \mu_1, \ldots \mu_{d-1}]^T$ and the symmetric square
covariance matrix:
$$ \Sigma_d = \left(
\begin{array}{cccc}
\sigma_0^2 & \sigma_0\sigma_1 & \ldots & \sigma_0\sigma_{d-1} \
\sigma_1\sigma_0 & \sigma_1^2 & \ldots & \sigma_1\sigma_{d-1} \
\vdots & \vdots & \vdots & \vdots \
\sigma_{d-1}\sigma_0 & \sigma_{d-1}\sigma_1 & \ldots & \sigma_{d-1}^2
\end{array}
\right) $$
Equation
$$ f({\mathbf x}) = \frac{1}{(2 \pi)^{d/2}|\Sigma|^{1/2}} exp\left[ -\frac{1}{2}\left({\mathbf x} - {\mathbf \mu} \right)^\mathrm{T}\Sigma^{-1}\left({\mathbf x} - {\mathbf \mu} \right)\right] $$
End of explanation |
1,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inclusive ML - Understanding Bias
Learning Objectives
Invoke the What-if Tool against a deployed Model
Explore attributes of the dataset
Examine aspects of bias in model results
Evaluate how the What-if Tool provides suggestions to remediate bias
Introduction
This notebook shows use of the What-If Tool inside of a Jupyter notebook. The What-If Tool, among many other things, allows you to explore the impacts of Fairness in model design and deployment.
The notebook invokes a previously deployed XGBoost classifier model on the UCI census dataset which predicts whether a person earns more than $50K based on their census information.
You will then visualize the results of the trained classifier on test data using the What-If Tool.
First, you will import various libraries and settings that are required to complete the lab.
Step1: Set up the notebook environment
First you must perform a few environment and project configuration steps.
These steps may take 8 to 10 minutes, please wait until you see the following response before proceeding
Step2: Finally, download the data and arrays needed to use the What-if Tool.
Step3: Now take a quick look at the data. The ML model type used for this analysis is XGBoost. XGBoost is a machine learning framework that uses decision trees and gradient boosting to build predictive models. It works by ensembling multiple decision trees together based on the score associated with different leaf nodes in a tree.
XGBoost requires all values to be numeric, so the original dataset was slightly modified. The biggest change made was to assign a numeric value to Sex. The original dataset only had the values "Female" and "Male" for Sex; the decision was made to assign the value "1" to Female and "2" to Male. As part of the data preparation effort the Pandas function "get_dummies" was used to convert the remaining domain values into numerical equivalents. For instance, the "Education" column was turned into several sub-columns named after the values in the column: "Education_HS-grad" has a value of "1" when that was the original categorical value and a value of "0" for the other categories.
Step4: To connect the What-if Tool to an AI Platform model, you need to pass it a subset of your test examples. The command below will create a Numpy array of 2000 from our test examples.
Step5: Instantiating the What-if Tool is as simple as creating a WitConfigBuilder object and passing it the AI Platform model desired to be analyzed.
The optional "adjust_prediction" parameter is used because the What-if Tool expects a list of scores for each class in our model (in this case 2). Since the model only returns a single value from 0 to 1, it must be transformed to the correct format in this function. Lastly, the name 'income_prediction' is used as the ground truth label.
It may take 1 to 2 minutes for the What-if Tool to load and render the visualization palette, please be patient. | Python Code:
!pip freeze | grep httplib2==0.18.1 || pip install httplib2==0.18.1
import os
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import witwidget
from witwidget.notebook.visualization import (
WitWidget,
WitConfigBuilder,
)
pd.options.display.max_columns = 50
PROJECT = !(gcloud config list --format="value(core.project)")
PROJECT = PROJECT[0]
BUCKET = "gs://{project}".format(project=PROJECT)
MODEL = 'xgboost_model'
VERSION = 'v1'
MODEL_DIR = os.path.join(BUCKET, MODEL)
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['MODEL'] = MODEL
os.environ['VERSION'] = VERSION
os.environ['MODEL_DIR'] = MODEL_DIR
Explanation: Inclusive ML - Understanding Bias
Learning Objectives
Invoke the What-if Tool against a deployed Model
Explore attributes of the dataset
Examine aspects of bias in model results
Evaluate how the What-if Tool provides suggestions to remediate bias
Introduction
This notebook shows use of the What-If Tool inside of a Jupyter notebook. The What-If Tool, among many other things, allows you to explore the impacts of Fairness in model design and deployment.
The notebook invokes a previously deployed XGBoost classifier model on the UCI census dataset which predicts whether a person earns more than $50K based on their census information.
You will then visualize the results of the trained classifier on test data using the What-If Tool.
First, you will import various libraries and settings that are required to complete the lab.
End of explanation
%%bash
gcloud config set project $PROJECT
gsutil mb $BUCKET
gsutil cp gs://cloud-training-demos/mlfairness/model.bst $MODEL_DIR/model.bst
gcloud ai-platform models list | grep $MODEL || gcloud ai-platform models create $MODEL
gcloud ai-platform versions list --model $MODEL | grep $VERSION ||
gcloud ai-platform versions create $VERSION \
--model=$MODEL \
--framework='XGBOOST' \
--runtime-version=1.14 \
--origin=$MODEL_DIR \
--python-version=3.5 \
--project=$PROJECT
Explanation: Set up the notebook environment
First you must perform a few environment and project configuration steps.
These steps may take 8 to 10 minutes, please wait until you see the following response before proceeding:
"Creating version (this might take a few minutes)......done."
End of explanation
%%bash
gsutil cp gs://cloud-training-demos/mlfairness/income.pkl .
gsutil cp gs://cloud-training-demos/mlfairness/x_test.npy .
gsutil cp gs://cloud-training-demos/mlfairness/y_test.npy .
features = pd.read_pickle('income.pkl')
x_test = np.load('x_test.npy')
y_test = np.load('y_test.npy')
Explanation: Finally, download the data and arrays needed to use the What-if Tool.
End of explanation
features.head()
Explanation: Now take a quick look at the data. The ML model type used for this analysis is XGBoost. XGBoost is a machine learning framework that uses decision trees and gradient boosting to build predictive models. It works by ensembling multiple decision trees together based on the score associated with different leaf nodes in a tree.
XGBoost requires all values to be numeric, so the original dataset was slightly modified. The biggest change made was to assign a numeric value to Sex. The original dataset only had the values "Female" and "Male" for Sex; the decision was made to assign the value "1" to Female and "2" to Male. As part of the data preparation effort the Pandas function "get_dummies" was used to convert the remaining domain values into numerical equivalents. For instance, the "Education" column was turned into several sub-columns named after the values in the column: "Education_HS-grad" has a value of "1" when that was the original categorical value and a value of "0" for the other categories.
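A small sketch of the kind of one-hot expansion described here, on toy data rather than the census file itself (it reuses the pandas import from the cells above):
toy = pd.DataFrame({'Education': ['HS-grad', 'Bachelors', 'HS-grad']})
print(pd.get_dummies(toy, columns=['Education']))   # produces Education_Bachelors and Education_HS-grad indicator columns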
End of explanation
# Combine the features and labels into one array for the What-if Tool
num_wit_examples = 2000
test_examples = np.hstack((
x_test[:num_wit_examples],
y_test[:num_wit_examples].reshape(-1, 1)
))
Explanation: To connect the What-if Tool to an AI Platform model, you need to pass it a subset of your test examples. The command below will create a Numpy array of 2000 from our test examples.
End of explanation
# TODO 1
FEATURE_NAMES = features.columns.tolist() + ['income_prediction']
def adjust(pred):
return [1 - pred, pred]
config_builder = (
WitConfigBuilder(test_examples.tolist(), FEATURE_NAMES)
.set_ai_platform_model(PROJECT, MODEL, VERSION, adjust_prediction=adjust)
.set_target_feature('income_prediction')
.set_label_vocab(['low', 'high'])
)
WitWidget(config_builder, height=800)
Explanation: Instantiating the What-if Tool is as simple as creating a WitConfigBuilder object and passing it the AI Platform model desired to be analyzed.
The optional "adjust_prediction" parameter is used because the What-if Tool expects a list of scores for each class in our model (in this case 2). Since the model only returns a single value from 0 to 1, it must be transformed to the correct format in this function. Lastly, the name 'income_prediction' is used as the ground truth label.
It may take 1 to 2 minutes for the What-if Tool to load and render the visualization palette, please be patient.
End of explanation |
1,325 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using scipy.optimize.minimize to solve a complex reservoir optimization model (SLSQP and COBYLA, as the problem is constrained by both bounds and constraint equations). There is one decision variable per day (storage), and releases from the reservoir are calculated as a function of change in storage, within the objective function. Penalties based on releases and storage are then applied with the goal of minimizing them (the objective function is a summation of all penalties). I've added some constraints within this model to limit the change in storage to the physical system limits, which depend on the difference between decision variables x(t+1) and x(t) and also on the inflow at that time step, I(t). These constraints are added to the list of constraint dictionaries using a for loop. Constraints added outside of this for loop work as they should. However, the constraints involving time that are initiated within the for loop do not.
import numpy as np
from scipy.optimize import minimize
def function(x):
return -1*(18*x[0]+16*x[1]+12*x[2]+11*x[3])
I=np.array((20,50,50,80))
x0=I
cons=[]
steadystate={'type':'eq', 'fun': lambda x: x.sum()-I.sum() }
cons.append(steadystate)
# Use a factory function so that each constraint captures its own value of t;
# a plain "lambda x: x[t]" created inside the loop would late-bind t and every
# constraint would end up evaluating the last loop value instead.
def f(a):
    def g(x):
        return x[a]
    return g
for t in range(4):
    cons.append({'type': 'ineq', 'fun': f(t)})
1,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https
Step1: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
Step2: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code)
Step3: Below I'm running images through the VGG network in batches.
Exercise
Step4: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
Step5: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
Step10: Training
Here, we'll train the network.
Exercise
Step11: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
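The whole recipe boils down to: compute a fixed-length code for every image with the frozen VGG layers, then fit an ordinary classifier on those codes. A minimal runnable sketch with random stand-in data (the notebook itself builds a small TensorFlow classifier later; this sketch only uses scikit-learn to show the shape of the idea):
import numpy as np
from sklearn.linear_model import LogisticRegression
codes = np.random.rand(20, 4096)         # stand-in for the 4096-D relu6 activations ("codes")
labels = np.random.randint(0, 5, 20)     # stand-in for the 5 flower classes
clf = LogisticRegression(max_iter=200).fit(codes, labels)
print(clf.predict(codes[:3]))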
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# Get the values from the relu6 layer of the VGG network
feed_dict = {input_:images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
# Your one-hot encoded labels array here
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
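For reference, a tiny example of what LabelBinarizer produces, using toy labels rather than the flower labels:
from sklearn.preprocessing import LabelBinarizer
lb_demo = LabelBinarizer()
onehot = lb_demo.fit_transform(['roses', 'daisy', 'roses', 'tulips'])
print(lb_demo.classes_)                   # ['daisy' 'roses' 'tulips']
print(onehot)                             # one row per label, one column per class
print(lb_demo.inverse_transform(onehot))  # recovers the original string labels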
End of explanation
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
from sklearn.model_selection import StratifiedShuffleSplit
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
test_img_path = 'flower_photos/dandelion/146023167_f905574d97_m.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation |
1,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple RNN
In this notebook, we're going to train a simple RNN to do time-series prediction. Given some set of input data, it should be able to generate a prediction for the next time step!
<img src='assets/time_prediction.png' width=40% />
First, we'll create our data
Then, define an RNN in PyTorch
Finally, we'll train our network and see how it performs
Import resources and create data
Step1: Define the RNN
Next, we define an RNN in PyTorch. We'll use nn.RNN to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters
Step2: Check the input and output dimensions
As a check that your model is working as expected, test out how it responds to input data.
Step3: Training the RNN
Next, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs.
Step4: Loss and Optimization
This is a regression problem
Step5: Defining the training function
This function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often.
Hidden State
Pay close attention to the hidden state, here | Python Code:
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(8,5))
# how many time steps/data pts are in one batch of data
seq_length = 20
# generate evenly spaced data pts
time_steps = np.linspace(0, np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension
x = data[:-1] # all but the last piece of data
y = data[1:] # all but the first
# display the data
plt.plot(time_steps[1:], x, 'r.', label='input, x') # x
plt.plot(time_steps[1:], y, 'b.', label='target, y') # y
plt.legend(loc='best')
plt.show()
Explanation: Simple RNN
In this notebook, we're going to train a simple RNN to do time-series prediction. Given some set of input data, it should be able to generate a prediction for the next time step!
<img src='assets/time_prediction.png' width=40% />
First, we'll create our data
Then, define an RNN in PyTorch
Finally, we'll train our network and see how it performs
Import resources and create data
End of explanation
class RNN(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(RNN, self).__init__()
self.hidden_dim=hidden_dim
# define an RNN with specified parameters
# batch_first means that the first dim of the input and output will be the batch_size
self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
# last, fully-connected layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x, hidden):
# x (batch_size, seq_length, input_size)
# hidden (n_layers, batch_size, hidden_dim)
# r_out (batch_size, time_step, hidden_size)
batch_size = x.size(0)
# get RNN outputs
r_out, hidden = self.rnn(x, hidden)
# shape output to be (batch_size*seq_length, hidden_dim)
r_out = r_out.view(-1, self.hidden_dim)
# get final output
output = self.fc(r_out)
return output, hidden
Explanation: Define the RNN
Next, we define an RNN in PyTorch. We'll use nn.RNN to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters:
* input_size - the size of the input
* hidden_dim - the number of features in the RNN output and in the hidden state
* n_layers - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN
* batch_first - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim)
Take a look at the RNN documentation to read more about recurrent layers.
End of explanation
# test that dimensions are as expected
test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, np.pi, seq_length)
data = np.sin(time_steps)
data.resize((seq_length, 1))
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Output size: ', test_out.size())
print('Hidden state size: ', test_h.size())
Explanation: Check the input and output dimensions
As a check that your model is working as expected, test out how it responds to input data.
End of explanation
# decide on hyperparameters
input_size=1
output_size=1
hidden_dim=32
n_layers=1
# instantiate an RNN
rnn = RNN(input_size, output_size, hidden_dim, n_layers)
print(rnn)
Explanation: Training the RNN
Next, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs.
End of explanation
# MSE loss and Adam optimizer with a learning rate of 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)
Explanation: Loss and Optimization
This is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point?
The data points are coordinate values, so to compare a predicted and ground_truth point, we'll use a regression loss: the mean squared error.
It's typical to use an Adam optimizer for recurrent models.
End of explanation
# train the RNN
def train(rnn, n_steps, print_every):
# initialize the hidden state
hidden = None
for batch_i, step in enumerate(range(n_steps)):
# defining the training data
time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # input_size=1
x = data[:-1]
y = data[1:]
# convert data into Tensors
x_tensor = torch.Tensor(x).unsqueeze(0) # unsqueeze gives a 1, batch_size dimension
y_tensor = torch.Tensor(y)
# outputs from the rnn
prediction, hidden = rnn(x_tensor, hidden)
## Representing Memory ##
# make a new variable for hidden and detach the hidden state from its history
# this way, we don't backpropagate through the entire history
hidden = hidden.data
# calculate the loss
loss = criterion(prediction, y_tensor)
# zero gradients
optimizer.zero_grad()
# perform backprop and update weights
loss.backward()
optimizer.step()
# display loss and predictions
if batch_i%print_every == 0:
print('Loss: ', loss.item())
plt.plot(time_steps[1:], x, 'r.') # input
plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions
plt.show()
return rnn
# train the rnn and monitor results
n_steps = 75
print_every = 15
trained_rnn = train(rnn, n_steps, print_every)
Explanation: Defining the training function
This function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often.
Hidden State
Pay close attention to the hidden state, here:
* Before looping over a batch of training data, the hidden state is initialized
* After a new hidden state is generated by the rnn, we get the latest hidden state, and use that as input to the rnn for the following steps
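As a small aside (a sketch, not from the original notebook): hidden = hidden.data in the training function above keeps the values while dropping the autograd history; the equivalent, more explicit idiom is .detach():
import torch
h = torch.ones(1, 1, 4, requires_grad=True) * 2.0
h_detached = h.detach()            # same values, no gradient history
print(h_detached.requires_grad)    # False, so backprop cannot reach back through earlier batches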
End of explanation |
1,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two Phase Predictions Design Pattern
The Two Phased Prediction design pattern provides a way to address the problem of keeping models for specific use cases sophisticated and performant when they have to be deployed onto distributed devices.
We'll use this Kaggle environmental sound dataset to build a two phase model
Step1: Data pre-processing
First we'll take a look at a CSV file with the audio filenames and their respective labels. Then we'll add a column for the label we'll be using in the first model to determine whether the sound is an instrument or not.
All of the data we'll be using to train the models has been made publicly available in a GCS bucket.
Inspect the labels
Step2: Add a column for our first model
Step3: To ensure quality, we'll only use manually verified samples from the dataset.
Step4: Preview the spectrogram for a sample trumpet sound from our training dataset
Step5: Download all the spectrogram data
Step6: Phase 1
Step7: Load the images as a tf dataset
Step8: Build a model for binary classification
Step9: Note
Step10: Convert model to TFLite and quantize
Save the TF Lite model file and get predictions on it using Python.
Step11: Get a prediction on one spectrogram from our validation dataset, print the model's prediction (sigmoid probability from 0 to 1) along with ground truth label. For more details, check out the TF Lite inference docs.
Step12: Phase 2
Step13: Create directories for each instrument label. We'll use this later when we load our images with Keras's ImageDataGenerator class.
Step14: Note
Step15: Get a test prediction on the instrument model | Python Code:
import numpy as np
import pandas as pd
import tensorflow_hub as hub
import tensorflow as tf
import os
import pathlib
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile
Explanation: Two Phase Predictions Design Pattern
The Two Phased Prediction design pattern provides a way to address the problem of keeping models for specific use cases sophisticated and performant when they have to be deployed onto distributed devices.
We'll use this Kaggle environmental sound dataset to build a two phase model:
Phase 1 (on-device): is it an instrument?
Phase 2 (cloud): which instrument is it?
To build this solution, both of our models will be trained on audio spectrograms -- image representations of audio.
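A rough sketch of how the two phases fit together at prediction time (the helper names here are placeholders, not real functions from this notebook):
def classify_sound(spectrogram):
    # Phase 1 runs on the device with the small quantized model.
    is_instrument_prob = run_on_device_tflite_model(spectrogram)
    if is_instrument_prob > 0.5:
        # Phase 2 only fires for likely instruments, using the larger cloud model.
        return call_cloud_instrument_model(spectrogram)
    return 'not an instrument'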
End of explanation
# Copy the label lookup file from GCS
!gsutil cp gs://ml-design-patterns/audio-train.csv .
label_data = pd.read_csv('audio-train.csv')
label_data.head()
Explanation: Data pre-processing
First we'll take a look at a CSV file with the audio filenames and their respective labels. Then we'll add a column for the label we'll be using in the first model to determine whether the sound is an instrument or not.
All of the data we'll be using to train the models has been made publicly available in a GCS bucket.
Inspect the labels
End of explanation
instrument_labels = ['Cello', 'Clarinet', 'Double_bass', 'Saxophone', 'Violin_or_fiddle', 'Snare_drum', 'Hi-hat', 'Flute', 'Bass_drum', 'Trumpet', 'Acoustic_guitar', 'Oboe', 'Gong', 'Tambourine', 'Cowbell', 'Harmonica', 'Electric_piano', 'Glockenspiel']
def check_instrument(row):
if row['label'] in instrument_labels:
return 1
else:
return 0
label_data['is_instrument'] = label_data.apply(check_instrument, axis=1)
label_data.head()
label_data['is_instrument'].value_counts()
Explanation: Add a column for our first model: whether the sound is an instrument or not
End of explanation
verified = label_data.loc[label_data['manually_verified'] == 1]
verified['is_instrument'].value_counts()
verified.head()
Explanation: To ensure quality, we'll only use manually verified samples from the dataset.
End of explanation
!gsutil cp gs://ml-design-patterns/audio_train/001ca53d.wav .
sample_rate, samples = wavfile.read('001ca53d.wav')
freq, times, spectro = signal.spectrogram(samples, sample_rate)
plt.figure()
fig = plt.gcf()
plt.axis('off')
plt.pcolormesh(times, freq, np.log(spectro))
plt.show()
Explanation: Preview the spectrogram for a sample trumpet sound from our training dataset
End of explanation
# This might take a few minutes
!gsutil -m cp -r gs://ml-design-patterns/audio_train_spectro .
Explanation: Download all the spectrogram data
End of explanation
!mkdir audio_spectros
!mkdir audio_spectros/not_instrument
!mkdir audio_spectros/instrument
keys = verified['fname'].values
vals = verified['is_instrument'].values
label_lookup = dict(zip(keys, vals))
for i in os.listdir(os.getcwd() + '/audio_train_spectro'):
id = i.split('.')[0] + '.wav'
is_instra = label_lookup[id]
im_path = os.getcwd() + '/audio_train_spectro/' + i
if is_instra == 0:
!mv $im_path audio_spectros/not_instrument/
else:
!mv $im_path audio_spectros/instrument/
Explanation: Phase 1: build the offline-optimized binary classification model
First we'll move the images into labeled directories since the Keras ImageDataGenerator class uses this format (directories with names corresponding to labels) to read our data. Then we'll create our training and validation batches and train a model using the MobileNet V2 architecture with frozen weights from ImageNet.
End of explanation
data_dir = pathlib.Path(os.getcwd() + '/audio_spectros')  # the labeled directories created above
class_names = ['not_instrument', 'instrument']
BATCH_SIZE = 64
IMG_HEIGHT = 128
IMG_WIDTH = 128
STEPS_PER_EPOCH = np.ceil(3700/BATCH_SIZE)
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, validation_split=0.1)
train_data_gen = image_generator.flow_from_directory(directory=data_dir,
batch_size=BATCH_SIZE,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
classes = class_names,
class_mode='binary')
val_data_gen = image_generator.flow_from_directory(directory=data_dir,
batch_size=BATCH_SIZE,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
classes = class_names,
class_mode='binary',
subset='validation')
image_count = len(list(data_dir.glob('*/*.png')))
print(image_count)
image_batch, label_batch = next(train_data_gen)
val_image, val_label = next(val_data_gen)
Explanation: Load the images as a tf dataset
End of explanation
mobilenet = tf.keras.applications.MobileNetV2(
input_shape=((128,128,3)),
include_top=False,
weights='imagenet'
)
mobilenet.trainable = False
feature_batch = mobilenet(image_batch)
global_avg_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_avg = global_avg_layer(feature_batch)
print(feature_batch_avg.shape)
prediction_layer = tf.keras.layers.Dense(1, activation='sigmoid')
prediction_batch = prediction_layer(feature_batch_avg)
print(prediction_batch.shape)
model = tf.keras.Sequential([
mobilenet,
global_avg_layer,
prediction_layer
])
model.summary()
model.compile(optimizer='SGD',
loss='binary_crossentropy',
metrics=['accuracy'])
Explanation: Build a model for binary classification: is it an instrument or not? We'll use transfer learning for this, by loading the MobileNet V2 model architecture trained on the ImageNet dataset and then adding a few additional layers on top specific to our prediction task.
End of explanation
model.fit_generator(train_data_gen,
validation_data=val_data_gen,
steps_per_epoch=STEPS_PER_EPOCH, epochs=10)
Explanation: Note: we could make changes to our model architecture, perform progressive fine-tuning to find the optimal number of layers to fine-tune, or employ hyperparameter tuning to improve model accuracy. Here the focus is on tooling and process rather than accuracy.
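For instance, a minimal fine-tuning sketch might unfreeze only the top of the backbone and retrain with a small learning rate (an illustration, not part of the original notebook; the number of layers to unfreeze is an arbitrary choice here):
# Unfreeze the last few MobileNet layers and keep the rest frozen.
mobilenet.trainable = True
for layer in mobilenet.layers[:-20]:
    layer.trainable = False
# Recompile with a low learning rate so the pretrained weights aren't destroyed.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit_generator(train_data_gen,
                    validation_data=val_data_gen,
                    steps_per_epoch=STEPS_PER_EPOCH, epochs=5)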
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open('converted_model.tflite', 'wb').write(tflite_model)
Explanation: Convert model to TFLite and quantize
Save the TF Lite model file and get predictions on it using Python.
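For full integer quantization, a representative dataset is typically needed so the converter can calibrate value ranges; a hedged sketch, assuming val_data_gen from above is still available:
def representative_data_gen():
    # Yield a handful of real spectrograms for calibration.
    for _ in range(100):
        images, _ = next(val_data_gen)
        yield [images[:1].astype(np.float32)]

int8_converter = tf.lite.TFLiteConverter.from_keras_model(model)
int8_converter.optimizations = [tf.lite.Optimize.DEFAULT]
int8_converter.representative_dataset = representative_data_gen
int8_tflite_model = int8_converter.convert()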
End of explanation
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on one spectrogram from the validation batch.
input_shape = input_details[0]['shape']
input_data = np.array([val_image[0]], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
print(val_label[0])
input_details
Explanation: Get a prediction on one spectrogram from our validation dataset, print the model's prediction (sigmoid probability from 0 to 1) along with ground truth label. For more details, check out the TF Lite inference docs.
End of explanation
instrument_data = verified.loc[verified['is_instrument'] == 1]
instrument_data.head()
instrument_data['label'].value_counts()
inst_keys = instrument_data['fname'].values
inst_vals = instrument_data['label'].values
instrument_label_lookup = dict(zip(inst_keys, inst_vals))
!mkdir instruments
for i in instrument_labels:
path = os.getcwd() + '/instruments/' + i
!mkdir $path
Explanation: Phase 2: identifying instrument sounds
Train a multi-class classification model to predict the instrument associated with a given instrument sound.
End of explanation
for i in os.listdir(os.getcwd() + '/audio_train_spectro'):
id = i.split('.')[0] + '.wav'
try:
instrument_name = instrument_label_lookup[id]
im_path = os.getcwd() + '/audio_train_spectro/' + i
new_path = os.getcwd() + '/instruments/' + instrument_name + '/' + i
!mv $im_path $new_path
except:
pass
instrument_image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, validation_split=0.1)
BATCH_SIZE = 256
IMG_HEIGHT = 128
IMG_WIDTH = 128
STEPS_PER_EPOCH = np.ceil(2002/BATCH_SIZE)
train_data_instrument = instrument_image_generator.flow_from_directory(directory=os.getcwd() + '/instruments',
batch_size=BATCH_SIZE,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
classes = instrument_labels)
val_data_instrument = instrument_image_generator.flow_from_directory(directory=os.getcwd() + '/instruments',
batch_size=BATCH_SIZE,
shuffle=False,
target_size=(IMG_HEIGHT, IMG_WIDTH),
classes = instrument_labels,
subset='validation')
image_instrument_train, label_instrument_train = next(train_data_instrument)
image_instrument_val, label_instrument_val = next(val_data_instrument)
vgg_model = tf.keras.applications.VGG19(
include_top=False,
weights='imagenet',
input_shape=((128,128,3))
)
vgg_model.trainable = False
feature_batch = vgg_model(image_batch)
global_avg_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_avg = global_avg_layer(feature_batch)
print(feature_batch_avg.shape)
prediction_layer = tf.keras.layers.Dense(len(instrument_labels), activation='softmax')
prediction_batch = prediction_layer(feature_batch_avg)
print(prediction_batch.shape)
instrument_model = tf.keras.Sequential([
vgg_model,
global_avg_layer,
prediction_layer
])
instrument_model.summary()
instrument_model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(),  # the output layer already applies softmax, so these are not logits
metrics=['accuracy'])
Explanation: Create directories for each instrument label. We'll use this later when we load our images with Keras's ImageDataGenerator class.
End of explanation
instrument_model.fit_generator(
train_data_instrument,
validation_data=val_data_instrument,
steps_per_epoch=STEPS_PER_EPOCH, epochs=10)
Explanation: Note: we could make changes to our model architecture, perform progressive fine-tuning to find the optimal number of layers to fine-tune, or employ hyperparameter tuning to improve model accuracy. Here the focus is on tooling and process rather than accuracy.
End of explanation
test_pred = instrument_model.predict(np.array([image_instrument_val[0]]))
predicted_index = np.argmax(test_pred)
confidence = test_pred[0][predicted_index]
test_pred[0]
print('Predicted instrument: ', instrument_labels[predicted_index], round(confidence * 100), '% confidence')
print('Actual instrument: ', instrument_labels[np.argmax(label_instrument_val[0])])
Explanation: Get a test prediction on the instrument model
End of explanation |
1,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Meterstick package provides a concise and flexible syntax to describe and execute
routine data analysis tasks. The easiest way to learn to use Meterstick is by example.
For External users
You can open this notebook in Google Colab.
Installation
You can install from pip for the stable version
Step1: or from GitHub for the latest version.
Step2: Demo Starts
Step3: Simple Metrics
There are many built-in simple Metrics in Meterstick. They directly operate on a DataFrame.
Sum
Step4: Count
Step5: Dot (inner product)
Step6: It can also be normalized.
Step7: Max
Step8: Min
Step9: Mean
Step10: Weighted Mean
Step11: Quantile
Step12: Interpolation
You can specify how you want to interpolate the quantile. It could be any of (‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’).
Step13: Weighted Quantile
Step14: Variance
Step15: Biased Variance
The default Variance is unbiased, namely, the divisor used in calculations is N - 1. You could set unbiased=False to use N as the divisor.
Step16: Weighted Variance
Step17: Standard Deviation
Step18: Biased Standard Deviation
Similar to biased Variance, it's possible to compute biased standard deviation.
Step19: Weighted Standard Deviation
Step20: Coefficient of Variation
Step21: Correlation
Step22: Weighted Correlation
Step23: Covariance
Step24: Weighted Covariance
Step25: Slicing
You can group your DataFrame and compute the Metrics on slices.
Step26: Multiple Metrics
You can put multiple Metrics into a MetricList and compute them together. It's not only makes your codes terser, it might make the computation much faster. See Caching section for more infomation.
Step27: Arithmetic of Metrics
You can do many arithmetic operations on Metrics. It can also be between a Metric and a scalar. You can call set_name() to give your composite Metric a new name. Internally, we operate on the results returned by Metrics with return_dataframe=False to avoid incompatible DataFrame columns names. However, if both Metrics return DataFrames even when return_dataframe is set to False, you might get lots of NAs. The solution is use rename_columns() to unify the column names. See section "Compare the standard errors between Jackknife and Bootstrap" for an example.
Add
Step28: Divide
Step29: Ratio
Since division between two Sums is common, we make a Ratio() Metric as a syntax sugar. Its third arg is the name for the Metric and is optional.
Step30: We also support many other common arithmetic operations.
Step31: Output Format
There are two options for you to control the format of the return.
1. return_dataframe
Step32: Operations
An Operation is a special type of Metric that is built on top of another Metric (called a "child"). A Metric is anything that has the compute_on() method, so the child doesn't need to be a simple Metric like Sum. It could be a MetricList, a composite Metric, or even another Operation.
Distribution
Compute the child Metric on a DataFrame grouped by a column, then normalize the numbers to 1 within group.
Step33: It's equal to
Step34: Distribution has an alias Normalize.
Step35: Cumulative Distribution
Similar to Distribution except that it returns the cumulative sum after normalization, but unlike Distribution the order of the cumulating column matters. As the result, we always sort the column and there is an 'order' arg for you to customize the ordering.
Step36: PercentChange
Computes the percent change to a certain group on the DataFrame returned by the child Metric. The returned value is the # of percent points.
Step37: You can include the base group in your result.
Step38: You can also specify multiple columns as the condition columns, then your base value should be a tuple.
Step39: Absolute Change
Very similar to PercentChange, but the absolute difference is returned.
Step40: You can also include the base group in your result.
Step41: Cochran-Mantel-Haenszel statistics
Please refer to the Wikepedia page for its definition. Besides the condition column and baseline key that PercentChange and AbsoluteChange take, CMH also needs a column to stratify. The child Metric must be a ratio of two single-column Metrics or CMH doesn't make sense. So instead of passing
AbsoluteChange(MetricList([a, b])) / AbsoluteChange(MetricList([c, d])),
please use
MetricList([AbsoluteChange(a) / AbsoluteChange(c),
AbsoluteChange(b) / AbsoluteChange(d)]).
Step42: CUPED
It computes the absolute change that has been adjusted using the CUPED approach. It provides an unbiased estimate to the absolute change with lower variance.
Let's see how it works on a fake data with preperiod metrics that are correlated with postperiod metrics and the effect of the experiment is small and noisy.
Step43: CUPED essentially fits a linear model of Postperiod metric ~ 1 + preperiod metric and uses it to control for the variance in the preperiod.
Step44: We can see that CUPED's result is similar to the absolute change but has smaller variance.
Step45: PrePostChange
It computes the percent change that has been adjusted using the PrePost approach. It's similar to CUPED but control for treatment groups additionally. Essentially, it fits
Postperiod metric ~ 1 + is_treated * preperiod metric, or more verbosely,
Postperiod metric = β1 + β2 * is_treated + β3 * preperiod metric + β4 * is_treated * preperiod metric.
Note that the estimate of β2 will be the estimate of treatment effect and the control arm metric can be estimated using β1 if we centered preperiod metric. As the result, β2 / β1 will be the estimate of the percent change that PrePostChange returns.
Step46: Standard Errors
Jackknife
Unlike all Metrics we have seen so far, Jackknife returns a multiple-column DataFrame because by default we return point estimate and standard error.
Step47: You can also specify a confidence level, the we'll return the confidence interval. The returned DataFrame also comes with a display() method for visualization which will highlight significant changes. To customize the display(), please take a look at confidence_interval_display_demo.ipynb.
Step48: Bootstrap
The output is similar to Jackknife. The different args are
- unit
Step49: Models
Meterstick also has built-in support for model fitting. The module is not imported by default, so you need to manually import it.
Step50: Linear Regression
Step51: What Model(y, x, groupby).compute_on(data) does is
1. Computes MetricList((y, x)).compute_on(data, groupby).
2. Fits the underlying sklearn model on the result from #1.
Step52: Ridge Regression
Step53: Lasso Regression
Step54: Logistic Regression
Step55: If y is not binary, by default a multinomal model is fitted. The behavior can be controlled via the 'multinomial' arg.
Step56: Classes are the unique values of y.
Step57: Wrapping sklearn models into Meterstick provides the ability to combine Models with other built-in Metrics and Operations. For example, you can Jackknife the Model to get the uncertainty of coefficients.
Step58: Pipeline
You have already seen this. Instead of
Jackknife(PercentChange(MetricList(...))).compute_on(df)
you can write
MetricList(...) | PercentChange() | Jackknife() | compute_on(df)
which is more intuitive. We overwrite the "|" operator on Metric and the __call__() of Operation so a Metric can be pipelined to an Operation. As Operation is a special kind of Metric, so it can bu further pipelined to another Operation. At last, compute_on() takes a Metric from the pipeline and is equavalent to calling metric.compute_on().
Filter
There is a "where" arg for Metric. It'll be passed to df.query() at the beginning of compute_on(df). By default the filter is not reflected in the name of Metric so same Metrics with different filters would have same column names in the returned DataFrames. It makes combining them easy.
Step59: It's equivalent to
Step60: SQL
You can easily get SQL query for all built-in Metrics and Operations, except for weighted Quantile/CV/Correlation/Cov, by calling
to_sql(sql_table, split_by).
You can also directly execute the query by calling
compute_on_sql(sql_table, split_by, execute, melted),
where execute is a function that can execute SQL queries. The return is very similar to compute_on().
The dialect it uses is the standard SQL in Google Cloud's BigQuery.
Step61: Custom Metric
We provide many Metrics out of box but we understand there are cases you need more, so we make it easy for you to write you own Metrics.
First you need to understand the dataflow of a DataFrame when it's passed to compute_on(). The dataflow looks like this.
<-------------------------------------------compute_on(handles caching)---------------------------------------------->
<-------------------------------------compute_through-----------------------------------> |
| <------compute_slices------> | |
| |-> slice1 -> compute | | | |
df -> df.query(where) -> precompute -> split_data -|-> slice2 -> compute | -> pd.concat -> postcompute -> manipulate -> final_compute
|-> ... |
In summary, compute() operates on a slice of data and hence only takes one arg, df. While precompute(), postcompute(), compute_slices(), compute_through() and final_compute() operate on the whole DataFrame so they take the df that has been processed by the dataflow till them and the split_by passed to compute_on(). final_compute() also has access to the original df passed to compute_on() for you to make additional manipulation. manipulate() does common data manipulation like melting and cleaning. Besides wrapping all the computations above, compute_on() also caches the result from compute_through(). Please refer to the section of Caching for more details.
Depending on your case, you can overwrite all the methods, but we suggest you NOT to overwrite compute_on() because it might mess up the caching mechanism, nor manipulate(), because it might not work well with other Metrics' data manipulation. Here are some rules to help you decide.
1. If your Metric has no vectorization over slices, overwrite compute() which only takes one arg, df. To overwrite, you can either create a new class inheriting from Metric or just pass a lambda function into Metric.
2. If you have vectorization logic over slices, overwrite compute_slices().
3. As compute() operates on a slice of data, it doesn't have access to the columns to split_by and the index value of the slice. If you need them, overwrite compute_with_split_by(self, df, split_by, slice_name), which is just a wrapper of compute(), but has access to split_by and the value of current slice, slice_name.
4. The data passed into manipulate() should be a number, a pd.Series, or a wide/unmelted pd.DataFrame.
5. split_data() returns (sub_dataframe, corresponding slice value). You might want to overwrite it for non-vectorized Operations. See section Linear Regression for examples.
Also there are some requirements.
1. Your Metric shouldn't change the input DataFrame inplace or it might not work with other Metrics.
2. Your Metric shouldn't rely on the index of the input DataFrame if you want it to work with Jackknife. The reason is Jackknife might reset the index.
No Vectorization
Step62: CustomSum doesn't have vectorization. It loops through the DataFrame and sums on every slice. As a result, it's slower than vectorized summation.
Step63: With Vectorization
We can do better. Let's implement a Sum with vectorization.
Step64: Precompute, postcompute and final_compute
They are useful when you need to preprocess and postprocess the data.
Step65: Overwrite using Lambda Functions
For one-off Metrics, you can also overwrite precompute, compute, postcompute, compute_slices and final_compute by passing them to Metric() as lambda functions.
Step67: Custom Operation
Writing a custom Operation is a bit more complex. Take a look at the Caching section below as well. Typically an Operation first computes its children Metrics with expanded split_by. Here are some rules to keep in mind.
1. Always use compute_on and compute_child to compute the children Metrics. They handle caching so your Operation can interact with other Metrics correctly.
2. If the Operation extends the split_by when computing children Metrics, you need to register the extra columns added in the __init__().
3. The extra columns should come after the original split_by.
4. If you really cannot obey #2 or #3, you need to overwrite Operation.flush_children(), or it won't work with Jackknife and Bootstrap.
5. Try to vectorize the Operation as much as possible. At least you can compute the children Metrics in a vectorized way by calling compute_child(). It makes the caching of the children Metrics more available.
6. Jackknife takes shortcuts when computing leave-one-out (LOO) estimates for Sum, Mean and Count, so if you want your Operation to work with Jackknife fast, delegate computations to Sum, Mean and Count as much as possible. See section Linear Regression for a comparison.
7. For the same reason, your computation logic should avoid using the input df other than in compute_on() and compute_child(). When cutting corners, Jackknife emits None as the input df for LOO estimation. The compute_on() and compute_child() functions know to read from cache, but other functions may not know what to do. If your Operation uses df outside the compute_on() and compute_child() functions, you have to either
* ensure that your computation doesn't break when df is None.
* set attribute 'precomputable_in_jk' to False (which will force the jackknife to be computed the manual way, which is slower).
Let's see Distribution for an example.
Step68: SQL Generation
If you want the custom Metric to generate SQL query, you need to implement to_sql() or get_sql_and_with_clause(). The latter is more common and recommended. Please refer to built-in Metrics to see how it should be implemented. Here we show two examples, one for Metric and the other for Operation.
Step70: For an Operation, you usually call the child metrics' get_sql_and_with_clause() to get the subquery you need.
Step71: Caching
tl;dr
Step72: Now let's see what happens if we reuse sum_clicks.
Step73: Then sum_clicks only gets computed once. For Metrics that are not quite compatible, you can still put them in a MetricList and set return_dataframe to False to maximize the caching.
Step74: If you really cannot compute Metrics together, you can use a cache_key.
Step75: The results are cached in ctr, a composite Metric, as well as in its children, the Sum Metrics.
Step76: You can flush the cache by calling flush_cache(key, split_by=None, recursive=True, prune=True), where "recursive" means whether you want to flush the cache of the children Metrics as well, and "prune" means whether, if the key is not found in the current Metric, you still want to flush the children Metrics or stop early. It's useful when a high-level Metric appears in several places, so during the flushing we would hit it multiple times; we can save time by stopping early.
Step77: Though ctr's cache has been flushed, we can still compute ctr from cache because all its children are cached.
Step78: We won't be able to re-compute ctr if we recursively flush its cache.
Step79: However, the behavior becomes subtle when Operation is involved.
Step80: Note that it's sum_clicks.compute_on(df, 'country') rather than sum_clicks.compute_on(df) that got saved in the cache. The reason is we need the former, not the latter, to compute the PercentChange. Using sum_clicks.compute_on(df, cache_key=42) will always give you the right result, so it's not a big issue, but it might confuse you sometimes.
Step81: Advanced Examples
Click Split
Step82: Difference in differences
Step83: Compare the standard errors between Jackknife and Bootstrap
Step86: Linear Regression
Here we fit a linear regression on mean values of groups. We show two versions; the former delegates computations to Mean, so its Jackknife is faster than the latter, which doesn't delegate.
Step87: LOWESS
Step88: Coefficient Shrinkage | Python Code:
!pip install meterstick
Explanation: The Meterstick package provides a concise and flexible syntax to describe and execute
routine data analysis tasks. The easiest way to learn to use Meterstick is by example.
For External users
You can open this notebook in Google Colab.
Installation
You can install from pip for the stable version
End of explanation
!git clone https://github.com/google/meterstick.git
import sys, os
sys.path.append(os.getcwd())
Explanation: or from GitHub for the latest version.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from meterstick import *
np.random.seed(42)
platform = ('Desktop', 'Mobile', 'Tablet')
exprs = ('ctrl', 'expr')
country = ('US', 'non-US')
size = 1000
impressions = np.random.randint(10, 20, size)
clicks = impressions * 0.1 * np.random.random(size)
df = pd.DataFrame({'impressions': impressions, 'clicks': clicks})
df['platform'] = np.random.choice(platform, size=size)
df['expr_id'] = np.random.choice(exprs, size=size)
df['country'] = np.random.choice(country, size=size)
df['cookie'] = np.random.choice(range(5), size=size)
df.loc[df.country == 'US', 'clicks'] *= 2
df.loc[(df.country == 'US') & (df.platform == 'Desktop'), 'impressions'] *= 4
df.head()
Explanation: Demo Starts
End of explanation
Sum('clicks').compute_on(df)
Explanation: Simple Metrics
There are many built-in simple Metrics in Meterstick. They directly operate on a DataFrame.
Sum
End of explanation
Count('country').compute_on(df)
Count('country', distinct=True).compute_on(df)
Explanation: Count
End of explanation
Dot('clicks', 'impressions').compute_on(df)
Explanation: Dot (inner product)
End of explanation
Dot('clicks', 'clicks', True).compute_on(df)
Explanation: It can also be normalized.
End of explanation
Max('clicks').compute_on(df)
Explanation: Max
End of explanation
Min('clicks').compute_on(df)
Explanation: Min
End of explanation
Mean('clicks').compute_on(df)
Explanation: Mean
End of explanation
Mean('clicks', 'impressions').compute_on(df)
Explanation: Weighted Mean
End of explanation
Quantile('clicks').compute_on(df) # Default is median.
Quantile('clicks', 0.2).compute_on(df)
Quantile('clicks', (0.2, 0.5)).compute_on(df) # Quantile can take multiple quantiles.
Explanation: Quantile
End of explanation
Quantile('clicks', 0.5, interpolation='higher').compute_on(df)
Explanation: Interpolation
You can specify how you want to interpolate the quantile. It could be any of (‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’).
End of explanation
Quantile('clicks', weight='impressions').compute_on(df)
Explanation: Weighted Quantile
End of explanation
Variance('clicks').compute_on(df)
Explanation: Variance
End of explanation
Variance('clicks', unbiased=False).compute_on(df)
Explanation: Biased Variance
The default Variance is unbiased, namely, the divisor used in calculations is N - 1. You could set unbiased=False to use N as the divisor.
End of explanation
Variance('clicks', weight='impressions').compute_on(df)
Explanation: Weighted Variance
End of explanation
StandardDeviation('clicks').compute_on(df)
Explanation: Standard Deviation
End of explanation
StandardDeviation('clicks', False).compute_on(df)
Explanation: Biased Standard Deviation
Similar to biased Variance, it's possible to compute biased standard deviation.
End of explanation
StandardDeviation('clicks', weight='impressions').compute_on(df)
Explanation: Weighted Standard Deviation
End of explanation
CV('clicks').compute_on(df)
Explanation: Coefficient of Variation
End of explanation
Correlation('clicks', 'impressions').compute_on(df)
Explanation: Correlation
End of explanation
Correlation('clicks', 'impressions', weight='impressions').compute_on(df)
Explanation: Weighted Correlation
End of explanation
Cov('clicks', 'impressions').compute_on(df)
Explanation: Covariance
End of explanation
Cov('clicks', 'impressions', weight='impressions').compute_on(df)
Explanation: Weighted Covariance
End of explanation
Sum('clicks').compute_on(df, 'country')
Mean('clicks').compute_on(df, ['platform', 'country'])
Explanation: Slicing
You can group your DataFrame and compute the Metrics on slices.
End of explanation
MetricList((Sum('clicks'), Count('clicks'))).compute_on(df)
Explanation: Multiple Metrics
You can put multiple Metrics into a MetricList and compute them together. It not only makes your code terser, it might also make the computation much faster. See the Caching section for more information.
End of explanation
(Sum('clicks') + 1).compute_on(df)
sum((Sum('clicks'), Sum('impressions'), 1)).compute_on(df)
sum((Sum('clicks'), Sum('impressions'), 1)).set_name('meaningless sum').compute_on(df)
Explanation: Arithmetic of Metrics
You can do many arithmetic operations on Metrics. An operation can also be between a Metric and a scalar. You can call set_name() to give your composite Metric a new name. Internally, we operate on the results returned by Metrics with return_dataframe=False to avoid incompatible DataFrame column names. However, if both Metrics return DataFrames even when return_dataframe is set to False, you might get lots of NAs. The solution is to use rename_columns() to unify the column names. See section "Compare the standard errors between Jackknife and Bootstrap" for an example.
Add
End of explanation
(Sum('clicks') / Sum('impressions')).compute_on(df)
Explanation: Divide
End of explanation
Ratio('clicks', 'impressions', 'ctr').compute_on(df)
Explanation: Ratio
Since division between two Sums is common, we provide a Ratio() Metric as syntactic sugar. Its third arg is the name for the Metric and is optional.
End of explanation
MetricList(
(Sum('clicks') - 1,
-Sum('clicks'),
2 * Sum('clicks'),
Sum('clicks')**2,
2**Mean('clicks'),
(Mean('impressions')**Mean('clicks')).set_name('meaningless power'))
).compute_on(df, melted=True)
Explanation: We also support many other common arithmetic operations.
End of explanation
Sum('clicks').compute_on(df, return_dataframe=False)
Count('clicks').compute_on(df, ['platform', 'country'], return_dataframe=False)
Mean('clicks').compute_on(df, melted=True)
MetricList((Sum('clicks'), Count('clicks'))).compute_on(df, 'country')
Quantile('clicks', [0.2, 0.7]).compute_on(df, 'country', melted=True)
# Don't worry. We will talk more about the pipeline operator "|" later.
(MetricList((Sum('clicks'), Count('clicks')))
| Jackknife('cookie')
| compute_on(df, 'country'))
(MetricList((Sum('clicks'), Count('clicks')))
| Bootstrap(n_replicates=100)
| compute_on(df, 'country', melted=True))
Explanation: Output Format
There are two options for you to control the format of the return.
1. return_dataframe: Default True, if False, we try to return a scalar or pd.Series. For complex Metrics it might have no effect and a DataFrame is always returned. For example, all Metrics in the Operations section below always return a DataFrame.
return_dataframe has a different effect on MetricList. If False, MetricList will return a list of DataFrames instead of trying to concat them. This is a convenient way to compute incompatible Metrics together to maximize caching (see the Caching section as well). There is an attribute "children_return_dataframe" in MetricList which will be passed to children Metrics as their return_dataframe so you can get a list of numbers or pd.Series.
2. melted: Default False. It decides if the returned DataFrame is in wide/unmelted or long/melted form. It doesn't have an effect if the return is not a DataFrame.
- Long/melted means the leftmost index is 'Metric' so
`MetricList((m1, m2)).compute_on(df, melted=True).loc[m1.name] ≡ m1.compute_on(df, melted=True)`
Wide/unmelted means the outermost column index is 'Metric' so
MetricList((m1, m2)).compute_on(df)[m1.name] ≡ m1.compute_on(df)
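A quick sketch of this equivalence, using Metrics already defined in this demo:
m1, m2 = Sum('clicks'), Sum('impressions')
both = MetricList((m1, m2))
both.compute_on(df, melted=True).loc[m1.name]  # same numbers as the next line
m1.compute_on(df, melted=True)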
End of explanation
Distribution('country', Sum('clicks')).compute_on(df)
Explanation: Operations
An Operation is a special type of Metric that is built on top of another Metric (called a "child"). A Metric is anything that has the compute_on() method, so the child doesn't need to be a simple Metric like Sum. It could be a MetricList, a composite Metric, or even another Operation.
Distribution
Compute the child Metric on a DataFrame grouped by a column, then normalize the numbers to 1 within group.
End of explanation
(Sum('clicks').compute_on(df, 'country') /
Sum('clicks').compute_on(df, return_dataframe=False))
Explanation: It's equal to
End of explanation
Normalize('country', Sum('clicks')).compute_on(df)
Explanation: Distribution has an alias Normalize.
End of explanation
CumulativeDistribution('country', MetricList(
(Sum('clicks'), Sum('impressions')))).compute_on(df)
CumulativeDistribution(
'country', Sum('clicks'), order=('non-US', 'US')).compute_on(df, 'platform')
CumulativeDistribution(
'country', MetricList((Sum('clicks'), Sum('impressions')))
).compute_on(df, melted=True)
Explanation: Cumulative Distribution
Similar to Distribution except that it returns the cumulative sum after normalization; unlike Distribution, the order of the cumulating column matters. As a result, we always sort the column, and there is an 'order' arg for you to customize the ordering.
End of explanation
PercentChange('country', 'US', Mean('clicks')).compute_on(df)
mean = Mean('clicks').compute_on(df, 'country')
(mean.loc['non-US'] / mean.loc['US'] - 1) * 100
Explanation: PercentChange
Computes the percent change to a certain group on the DataFrame returned by the child Metric. The returned value is the # of percent points.
End of explanation
PercentChange(
'country',
'US',
MetricList((Count('clicks'), Count('impressions'))),
include_base=True).compute_on(df, 'platform')
Explanation: You can include the base group in your result.
End of explanation
PercentChange(
['country', 'platform'],
('US', 'Desktop'),
MetricList((Count('clicks'), Count('impressions'))),
include_base=True).compute_on(df)
Explanation: You can also specify multiple columns as the condition columns, then your base value should be a tuple.
End of explanation
AbsoluteChange('country', 'US', Mean('clicks')).compute_on(df)
Explanation: Absolute Change
Very similar to PercentChange, but the absolute difference is returned.
End of explanation
AbsoluteChange(
'country', 'US', Count('clicks'), include_base=True).compute_on(
df, 'platform', melted=True)
Explanation: You can also include the base group in your result.
End of explanation
ctr = Ratio('clicks', 'impressions')
MH('country', 'US', 'platform', ctr).compute_on(df) # stratified by platform
Explanation: Cochran-Mantel-Haenszel statistics
Please refer to the Wikipedia page for its definition. Besides the condition column and baseline key that PercentChange and AbsoluteChange take, CMH also needs a column to stratify over. The child Metric must be a ratio of two single-column Metrics or CMH doesn't make sense. So instead of passing
AbsoluteChange(MetricList([a, b])) / AbsoluteChange(MetricList([c, d])),
please use
MetricList([AbsoluteChange(a) / AbsoluteChange(c),
AbsoluteChange(b) / AbsoluteChange(d)]).
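As a concrete sketch of the recommended form (a and c above are placeholders), one such ratio built from Metrics in this demo would be:
MetricList([AbsoluteChange('country', 'US', Sum('clicks')) /
            AbsoluteChange('country', 'US', Sum('impressions'))]).compute_on(df)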
End of explanation
np.random.seed(42)
exprs = ('ctrl', 'expr')
n = 10000
df_prepost = pd.DataFrame({'impressions': np.random.randint(10, 30, n)})
df_prepost['expr_id'] = np.random.choice(exprs, size=n)
df_prepost['cookie'] = np.random.choice(range(20), size=n)
# Preperiod correlates with postperiod.
df_prepost['pre_impressions'] = np.random.normal(df_prepost.impressions, 3)
# Add small and noisy improvments.
df_prepost.loc[df_prepost.expr_id == 'expr', 'impressions'] += np.random.randint(-2, 4, size=len(df_prepost.loc[df_prepost.expr_id == 'expr', 'impressions']))
abs = AbsoluteChange('expr_id', 'ctrl', Mean('impressions'))
cuped = CUPED('expr_id', 'ctrl', Mean('impressions'), Mean('pre_impressions'), 'cookie')
MetricList((abs, cuped)).compute_on(df_prepost)
Explanation: CUPED
It computes the absolute change that has been adjusted using the CUPED approach. It provides an unbiased estimate of the absolute change with lower variance.
Let's see how it works on fake data with preperiod metrics that are correlated with postperiod metrics, where the effect of the experiment is small and noisy.
End of explanation
from sklearn import linear_model
df_agg = MetricList((Mean('impressions'), Mean('pre_impressions'))).compute_on(df_prepost, ['expr_id', 'cookie'])
lm = linear_model.LinearRegression()
lm.fit(df_agg[['mean(pre_impressions)']], df_agg['mean(impressions)'])
theta = lm.coef_[0]
df_agg['adjusted'] = df_agg['mean(impressions)'] - theta * df_agg['mean(pre_impressions)']
adjusted = df_agg.groupby('expr_id').adjusted.mean()
adjusted['expr'] - adjusted['ctrl']
Explanation: CUPED essentially fits a linear model of Postperiod metric ~ 1 + preperiod metric and uses it to control for the variance in the preperiod.
End of explanation
from plotnine import ggplot, aes, geom_density, after_stat, facet_grid
data_to_plot = pd.concat([df_agg['mean(impressions)'], df_agg.adjusted], keys=['Raw', 'CUPED'], names=['Adjusted'])
data_to_plot = pd.DataFrame(data_to_plot, columns=['Value']).reset_index()
(
ggplot(data_to_plot)
+ aes(x="Value", y=after_stat('density'), color='expr_id')
+ geom_density()
+ facet_grid('Adjusted ~ .')
)
# Jackknife is explained in the 'Standard Errors' section.
Jackknife('cookie', MetricList((abs, cuped))).compute_on(df_prepost)
# It's possible to control for multiple metrics.
CUPED('expr_id', 'ctrl', Mean('impressions'),
[Mean('pre_impressions'), Mean('pre_impressions')**2],
'cookie').compute_on(df_prepost)
Explanation: We can see that CUPED's result is similar to the absolute change but has smaller variance.
End of explanation
pct = PercentChange('expr_id', 'ctrl', Mean('impressions'))
prepost = PrePostChange('expr_id', 'ctrl', Mean('impressions'), Mean('pre_impressions'), 'cookie')
MetricList((pct, prepost)).compute_on(df_prepost)
df_agg = MetricList((Mean('impressions'), Mean('pre_impressions'))).compute_on(
df_prepost, ['expr_id', 'cookie']).reset_index()
df_agg['mean(pre_impressions)'] -= df_agg['mean(pre_impressions)'].mean()
df_agg['is_treated'] = df_agg.expr_id == 'expr'
df_agg['interaction'] = df_agg.is_treated * df_agg['mean(pre_impressions)']
lm = linear_model.LinearRegression()
lm.fit(df_agg[['is_treated', 'mean(pre_impressions)', 'interaction']],
df_agg['mean(impressions)'])
beta1 = lm.intercept_
beta2 = lm.coef_[0]
beta2 / beta1 * 100
# Jackknife is explained in the 'Standard Errors' section.
Jackknife('cookie', MetricList((pct, prepost))).compute_on(df_prepost)
Explanation: PrePostChange
It computes the percent change that has been adjusted using the PrePost approach. It's similar to CUPED but additionally controls for treatment groups. Essentially, it fits
Postperiod metric ~ 1 + is_treated * preperiod metric, or more verbosely,
Postperiod metric = β1 + β2 * is_treated + β3 * preperiod metric + β4 * is_treated * preperiod metric.
Note that the estimate of β2 will be the estimate of the treatment effect, and the control arm metric can be estimated using β1 if we center the preperiod metric. As a result, β2 / β1 will be the estimate of the percent change that PrePostChange returns.
End of explanation
Jackknife('cookie', MetricList((Sum('clicks'), Sum('impressions')))).compute_on(df)
metrics = MetricList((Sum('clicks'), Sum('impressions')))
Jackknife('cookie', metrics).compute_on(df, 'country', True)
Explanation: Standard Errors
Jackknife
Unlike all the Metrics we have seen so far, Jackknife returns a multiple-column DataFrame because by default we return both the point estimate and its standard error.
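Melting works for Jackknife results too; a quick sketch on this demo's df:
Jackknife('cookie', Sum('clicks')).compute_on(df, melted=True)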
End of explanation
Jackknife('cookie', metrics, 0.9).compute_on(df)
res = (
MetricList((Ratio('clicks', 'impressions', 'ctr'), Sum('clicks')))
| PercentChange('country', 'US')
| Jackknife('cookie', confidence=0.9)
| compute_on(df, 'platform'))
res.display()
Explanation: You can also specify a confidence level, then we'll return the confidence interval. The returned DataFrame also comes with a display() method for visualization which will highlight significant changes. To customize the display(), please take a look at confidence_interval_display_demo.ipynb.
End of explanation
np.random.seed(42)
Bootstrap(None, Sum('clicks'), 100).compute_on(df)
np.random.seed(42)
Bootstrap('cookie', Sum('clicks'), 100).compute_on(df, 'country')
np.random.seed(42)
Bootstrap('cookie', Sum('clicks'), 100, 0.95).compute_on(df, 'country')
np.random.seed(42)
res = (
MetricList((Ratio('clicks', 'impressions', 'ctr'), Sum('impressions')))
| AbsoluteChange('country', 'US')
| Bootstrap(None, n_replicates=100, confidence=0.9)
| compute_on(df, 'platform'))
res.display()
Explanation: Bootstrap
The output is similar to Jackknife. The different args are
- unit: If None, we bootstrap on rows. Otherwise we do a block bootstrap. The unique values in the unit column will be used as the resampling buckets.
- n_replicates: The number of resamples. Defaults to 10000, which is recommended in Tim Hesterberg's What Teachers Should Know About the Bootstrap. Here we use a smaller number for a faster demonstration.
End of explanation
from meterstick.models import *
Explanation: Models
Meterstick also has built-in support for model fitting. The module is not imported by default, so you need to manually import it.
End of explanation
m = LinearRegression(Mean('clicks'), Mean('impressions'), 'platform')
m.compute_on(df)
Explanation: Linear Regression
End of explanation
from sklearn import linear_model
x = Mean('impressions').compute_on(df, 'platform')
y = Mean('clicks').compute_on(df, 'platform')
m = linear_model.LinearRegression().fit(x, y)
print(m.coef_, m.intercept_)
Explanation: What Model(y, x, groupby).compute_on(data) does is
1. Computes MetricList((y, x)).compute_on(data, groupby).
2. Fits the underlying sklearn model on the result from #1.
End of explanation
# x can also be a list of Metrics or a MetricList.
m = Ridge(
Mean('clicks'),
[Mean('impressions'), Variance('clicks')],
'platform',
alpha=2)
m.compute_on(df, melted=True)
Explanation: Ridge Regression
End of explanation
m = Lasso(
Mean('clicks'),
Mean('impressions'),
'platform',
fit_intercept=False,
alpha=5)
m.compute_on(df, 'country')
Explanation: Lasso Regression
End of explanation
m = LogisticRegression(Count('clicks'), Mean('impressions'), 'country')
m.compute_on(df, melted=True)
Explanation: Logistic Regression
End of explanation
m = LogisticRegression(Count('clicks'), Mean('impressions'), 'platform', name='LR')
m.compute_on(df, melted=True)
Explanation: If y is not binary, by default a multinomial model is fitted. The behavior can be controlled via the 'multinomial' arg.
End of explanation
Count('clicks').compute_on(df, 'platform')
Explanation: Classes are the unique values of y.
End of explanation
(LinearRegression(
Mean('clicks'),
[Mean('impressions'), Variance('impressions')],
'country',
name='lm')
| AbsoluteChange('platform', 'Desktop')
| Jackknife('cookie', confidence=0.9)
| compute_on(df)).display()
Explanation: Wrapping sklearn models into Meterstick provides the ability to combine Models with other built-in Metrics and Operations. For example, you can Jackknife the Model to get the uncertainty of coefficients.
End of explanation
clicks_us = Sum('clicks', where='country == "US"')
clicks_not_us = Sum('clicks', where='country != "US"')
(clicks_not_us - clicks_us).compute_on(df)
Explanation: Pipeline
You have already seen this. Instead of
Jackknife(PercentChange(MetricList(...))).compute_on(df)
you can write
MetricList(...) | PercentChange() | Jackknife() | compute_on(df)
which is more intuitive. We overwrite the "|" operator on Metric and the __call__() of Operation so a Metric can be pipelined to an Operation. As an Operation is a special kind of Metric, it can be further pipelined to another Operation. Finally, compute_on() takes a Metric from the pipeline and is equivalent to calling metric.compute_on().
Filter
There is a "where" arg for Metric. It'll be passed to df.query() at the beginning of compute_on(df). By default the filter is not reflected in the name of Metric so same Metrics with different filters would have same column names in the returned DataFrames. It makes combining them easy.
End of explanation
Sum('clicks') | AbsoluteChange('country', 'US') | compute_on(df)
Explanation: It's equivalent to
End of explanation
MetricList((Sum('X', where='Y > 0'), Sum('X'))).to_sql('T', 'grp')
m = MetricList((Sum('clicks'), Mean('impressions')))
m = AbsoluteChange('country', 'US', m)
m.compute_on(df, 'platform')
from sqlalchemy import create_engine
engine = create_engine('sqlite://', echo=False)
df.to_sql('T', con=engine)
# Meterstick uses a different SQL dialect from SQLAlchemy, so this doesn't
# always work.
m.compute_on_sql('T', 'platform', execute=lambda sql: pd.read_sql(sql, engine))
Explanation: SQL
You can easily get the SQL query for all built-in Metrics and Operations, except for weighted Quantile/CV/Correlation/Cov, by calling
to_sql(sql_table, split_by).
You can also directly execute the query by calling
compute_on_sql(sql_table, split_by, execute, melted),
where execute is a function that can execute SQL queries. The return is very similar to compute_on().
The dialect it uses is the standard SQL in Google Cloud's BigQuery.
End of explanation
class CustomSum(Metric):
def __init__(self, var):
name = 'custom sum(%s)' % var
super(CustomSum, self).__init__(name)
self.var = var
def compute(self, df):
return df[self.var].sum()
CustomSum('clicks').compute_on(df, 'country')
Sum('clicks').compute_on(df, 'country')
Explanation: Custom Metric
We provide many Metrics out of the box, but we understand there are cases where you need more, so we make it easy for you to write your own Metrics.
First you need to understand the dataflow of a DataFrame when it's passed to compute_on(). The dataflow looks like this.
<-------------------------------------------compute_on(handles caching)---------------------------------------------->
<-------------------------------------compute_through-----------------------------------> |
| <------compute_slices------> | |
| |-> slice1 -> compute | | | |
df -> df.query(where) -> precompute -> split_data -|-> slice2 -> compute | -> pd.concat -> postcompute -> manipulate -> final_compute
|-> ... |
In summary, compute() operates on a slice of data and hence only takes one arg, df. While precompute(), postcompute(), compute_slices(), compute_through() and final_compute() operate on the whole DataFrame so they take the df that has been processed by the dataflow till them and the split_by passed to compute_on(). final_compute() also has access to the original df passed to compute_on() for you to make additional manipulation. manipulate() does common data manipulation like melting and cleaning. Besides wrapping all the computations above, compute_on() also caches the result from compute_through(). Please refer to the section of Caching for more details.
Depending on your case, you can overwrite all the methods, but we suggest you NOT overwrite compute_on() because it might mess up the caching mechanism, nor manipulate(), because it might not work well with other Metrics' data manipulation. Here are some rules to help you decide.
1. If your Metric has no vectorization over slices, overwrite compute() which only takes one arg, df. To overwrite, you can either create a new class inheriting from Metric or just pass a lambda function into Metric.
2. If you have vectorization logic over slices, overwrite compute_slices().
3. As compute() operates on a slice of data, it doesn't have access to the columns in split_by or the index value of the slice. If you need them, overwrite compute_with_split_by(self, df, split_by, slice_name), which is just a wrapper of compute(), but has access to split_by and the value of the current slice, slice_name (see the small sketch after these lists).
4. The data passed into manipulate() should be a number, a pd.Series, or a wide/unmelted pd.DataFrame.
5. split_data() returns (sub_dataframe, corresponding slice value). You might want to overwrite it for non-vectorized Operations. See section Linear Regression for examples.
Also there are some requirements.
1. Your Metric shouldn't change the input DataFrame inplace or it might not work with other Metrics.
2. Your Metric shouldn't rely on the index of the input DataFrame if you want it to work with Jackknife. The reason is Jackknife might reset the index.
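As a small sketch of rule 3 above (a hypothetical Metric, not part of Meterstick; the exact hooks may differ between versions):
class RowsInSlice(Metric):
    def __init__(self):
        super(RowsInSlice, self).__init__('rows in slice')
    def compute_with_split_by(self, df, split_by=None, slice_name=None):
        # slice_name holds the value of the current slice, e.g. 'US' or 'non-US'.
        return len(df)
RowsInSlice().compute_on(df, 'country')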
No Vectorization
End of explanation
%%timeit
CustomSum('clicks').compute_on(df, 'country')
%%timeit
Sum('clicks').compute_on(df, 'country')
%%timeit
df.groupby('country')['clicks'].sum()
Explanation: CustomSum doesn't have vectorization. It loops through the DataFrame and sums on every slice. As a result, it's slower than vectorized summation.
End of explanation
class VectorizedSum(Metric):
def __init__(self, var):
name = 'vectorized sum(%s)' % var
super(VectorizedSum, self).__init__(name = name)
self.var = var
def compute_slices(self, df, split_by):
if split_by:
return df.groupby(split_by)[self.var].sum()
return df[self.var].sum()
VectorizedSum('clicks').compute_on(df, 'country')
%%timeit
VectorizedSum('clicks').compute_on(df, 'country')
Explanation: With Vectorization
We can do better. Let's implement a Sum with vectorization.
End of explanation
class USOnlySum(Sum):
def precompute(self, df, split_by):
return df[df.country == 'US']
def postcompute(self, data, split_by):
print('Inside postcompute():')
print('Input data: ', data)
print('Input split_by: ', split_by)
print('\n')
return data
def final_compute(self, res, melted, return_dataframe, split_by, df):
# res is the result processed by the dataflow till now. df is the original
# DataFrme passed to compute_on().
print('Inside final_compute():')
for country in df.country.unique():
if country not in res.index:
print('Country "%s" is missing!' % country)
return res
USOnlySum('clicks').compute_on(df, 'country')
Explanation: Precompute, postcompute and final_compute
They are useful when you need to preprocess and postprocess the data.
End of explanation
normalize = metrics.Sum('clicks', postcompute=lambda res, split_by: res / res.sum())
normalize.compute_on(df, 'country')
# The above is equivalent to Normalize by 'country'.
Normalize('country', Sum('clicks')).compute_on(df)
Explanation: Overwrite using Lambda Functions
For one-off Metrics, you can also overwrite precompute, compute, postcompute, compute_slices and final_compute by passing them to Metric() as lambda functions.
End of explanation
class Distribution(Operation):
  """Computes the normalized values of a Metric over column(s).
Attributes:
extra_index: A list of column(s) to normalize over.
children: A tuple of a Metric whose result we normalize on. And all other
      attributes inherited from Operation.
  """
def __init__(self,
over: Union[Text, List[Text]],
child: Optional[Metric] = None,
**kwargs):
self.over = over
# The 3rd argument is the extra column that will be added to split_by. It'll
# be converted to a list then assigned to self.extra_index.
super(Distribution, self).__init__(child, 'Distribution of {}', over,
**kwargs)
def compute_slices(self, df, split_by=None):
# extra_index is after the split_by.
lvls = split_by + self.extra_index if split_by else self.extra_index
res = self.compute_child(df, lvls)
total = res.groupby(level=split_by).sum() if split_by else res.sum()
return res / total
Explanation: Custom Operation
Writing a custom Operation is a bit more complex. Take a look at the Caching section below as well. Typically an Operation first computes its children Metrics with expanded split_by. Here are some rules to keep in mind.
1. Always use compute_on and compute_child to compute the children Metrics. They handle caching so your Operation can interact with other Metrics correctly.
2. If the Operation extends the split_by when computing children Metrics, you need to register the extra columns added in the __init__().
3. The extra columns should come after the original split_by.
4. If you really cannot obey #2 or #3, you need to overwrite Operation.flush_children(), or it won't work with Jackknife and Bootstrap.
5. Try to vectorize the Operation as much as possible. At least you can compute the children Metrics in a vectorized way by calling compute_child(). It makes the caching of the children Metrics more available.
6. Jackknife takes shortcuts when computing leave-one-out (LOO) estimates for Sum, Mean and Count, so if you want your Operation to work with Jackknife fast, delegate computations to Sum, Mean and Count as much as possible. See section Linear Regression for a comparison.
7. For the same reason, your computation logic should avoid using the input df other than in compute_on() and compute_child(). When cutting corners, Jackknife emits None as the input df for LOO estimation. The compute_on() and compute_child() functions know to read from cache, but other functions may not know what to do. If your Operation uses df outside the compute_on() and compute_child() functions, you have to either
* ensure that your computation doesn't break when df is None.
* set attribute 'precomputable_in_jk' to False (which will force the jackknife to be computed the manual way, which is slower).
Let's see Distribution for an example.
End of explanation
class SumWithSQL(SimpleMetric):
def __init__(self,
var: Text,
name: Optional[Text] = None,
where: Optional[Text] = None,
**kwargs):
super(SumWithSQL, self).__init__(var, name, 'sum({})', where, **kwargs)
self._sum = Sum(var, name, where, **kwargs)
def compute_slices(self, df, split_by):
return self._sum.compute_slices(df, split_by)
# All the SQL-related classes, like Datasource, Filters, Columns, and so on,
# are defined in sql.py.
def get_sql_and_with_clause(self, table: Datasource, split_by: Columns,
global_filter: Filters, indexes: Columns,
local_filter: Filters, with_data: Datasources):
del indexes # unused
# Always starts with this line unless you know what you are doing.
local_filter = Filters([self.where, local_filter]).remove(global_filter)
columns = Column(self.var, 'SUM({})', self.name, local_filter)
# Returns a Sql instance and the WITH clause it needs.
return Sql(columns, table, global_filter, split_by), with_data
m = Sum('clicks') - SumWithSQL('clicks', 'custom_sum')
m.compute_on_sql('T', 'platform', execute=lambda sql: pd.read_sql(sql, engine))
Explanation: SQL Generation
If you want a custom Metric to generate a SQL query, you need to implement to_sql() or get_sql_and_with_clause(). The latter is more common and recommended. Please refer to the built-in Metrics to see how it should be implemented. Here we show two examples, one for a Metric and the other for an Operation.
End of explanation
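If you only want to inspect the generated query rather than execute it, to_sql() can be called directly (the same pattern is used for the Distribution example further below):
# Print the query instead of executing it.
print(SumWithSQL('clicks', 'custom_sum').to_sql('T', 'platform'))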
class DistributionWithSQL(Operation):
def __init__(self,
over: Union[Text, List[Text]],
child: Optional[Metric] = None,
**kwargs):
super(DistributionWithSQL, self).__init__(child, 'Distribution of {}', over,
**kwargs)
def compute_slices(self, df, split_by=None):
lvls = split_by + self.extra_index if split_by else self.extra_index
res = self.compute_child(df, lvls)
total = res.groupby(level=split_by).sum() if split_by else res.sum()
return res / total
def get_sql_and_with_clause(self,
table: Datasource,
split_by: Columns,
global_filter: Filters,
indexes: Columns,
local_filter: Filters,
with_data: Datasources):
Gets the SQL query and WITH clause.
The query is constructed by
1. Get the query for the child metric.
2. Keep all indexing/groupby columns unchanged.
3. For all value columns, get
value / SUM(value) OVER (PARTITION BY split_by).
Args:
table: The table we want to query from.
split_by: The columns that we use to split the data.
global_filter: The Filters that can be applied to the whole Metric tree.
indexes: The columns that we shouldn't apply any arithmetic operation.
local_filter: The Filters that have been accumulated so far.
with_data: A global variable that contains all the WITH clauses we need.
Returns:
The SQL instance for metric, without the WITH clause component.
The global with_data which holds all datasources we need in the WITH
clause.
# Always starts with this line unless you know what you are doing.
local_filter = Filters([self.where, local_filter]).remove(global_filter)
# The intermediate tables needed by child metrics will be added to with_data
# in-place.
child_sql, with_data = self.children[0].get_sql_and_with_clause(
table, indexes, global_filter, indexes, local_filter, with_data)
child_table = sql.Datasource(child_sql, 'DistributionRaw')
# Always use the alias returned by with_data.add(), because if the with_data
# already holds a different table that also has 'DistributionRaw' as its
# alias, we'll use a different alias for the child_table, which is returned
# by with_data.add().
child_table_alias = with_data.add(child_table)
groupby = sql.Columns(indexes.aliases, distinct=True)
columns = sql.Columns()
for c in child_sql.columns:
if c.alias in groupby:
continue
col = sql.Column(c.alias) / sql.Column(
c.alias, 'SUM({})', partition=split_by.aliases)
col.set_alias('Distribution of %s' % c.alias_raw)
columns.add(col)
return sql.Sql(groupby.add(columns), child_table_alias), with_data
m = DistributionWithSQL('country', Sum('clicks'))
m.to_sql('T')
Explanation: For an Operation, you usually call the child metrics' get_sql_and_with_clause() to get the subquery you need.
End of explanation
class SumWithTrace(Sum):
def compute_through(self, data, split_by):
print('Computing %s...' % self.name)
return super(SumWithTrace, self).compute_through(data, split_by)
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = SumWithTrace('clicks') / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df)
Explanation: Caching
tl;dr: Reuse Metrics as much as possible and compute them together.
Computation can be slow, so it would be nice if, when we pass in the same DataFrame multiple
times, the computation is actually only done once. The difficulty is that
DataFrame is mutable so it's hard to decide whether we really saw this DataFrame
before. However, in one round of compute_on(), the DataFrame shouldn't change
(our Metrics never change the original DataFrame and your custom Metrics
shouldn't either), so we can cache the result, namely, a Metric appearing in
multiple places will only be computed once. This all happens automatically so
you don't need to worry about it. If you really cannot compute all your Metrics
in one round, there is a "cache_key" arg in compute_on(). What it does is
if the key is in cache, just read the cache;
if not, compute and save the result to cache under the key.
Note:
1. All we check is cache_key, nothing more, so it's your responsibility to
make sure the same key really corresponds to the same input DataFrame AND split_by.
2. The caching and retrieving happen in all levels of Metrics, so
PercentChange(..., Sum('x')).compute_on(df, cache_key='foo')
not only cache the percent change to PercentChange's cache, but also cache
Sum('x').compute_through(df)
to Sum('x')'s cache. Note that it is the output of compute_through() that is cached, so we
don't need to re-compute just because you change "melted" from True to False.
3. Anything that can be a key of a dict can be used as cache_key, except '_RESERVED' and tuples like ('_RESERVED', ...).
First, let's illustrate that when we don't reuse Metrics, everything gets
computed once as expected.
End of explanation
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df)
Explanation: Now let's see what happens if we reuse sum_clicks.
End of explanation
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
jk, s = MetricList(
[Jackknife('cookie', sum_clicks), sum_clicks],
children_return_dataframe=False).compute_on(
df, return_dataframe=False)
print(s)
jk
Explanation: Then sum_clicks only gets computed once. For Metrics that are not quite compatible, you can still put them in a MetricList and set return_dataframe to False to maximize the caching.
End of explanation
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
sum_clicks.compute_on(df, 'country', cache_key='foo')
ctr.compute_on(df, 'country', cache_key='foo')
Explanation: If you really cannot compute Metrics together, you can use a cache_key.
End of explanation
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df, cache_key='foo')
print('sum_clicks cached: ', sum_clicks.get_cached('foo'))
print('ctr cached: ', ctr.get_cached('foo'))
ctr.compute_on(None, cache_key='foo')
Explanation: The results are cached in ctr, a composite Metric, as well as in its children, the Sum Metrics.
End of explanation
sum_clicks = SumWithTrace('clicks', 'sum of clicks')
ctr = sum_clicks / SumWithTrace('impressions')
MetricList((sum_clicks, ctr)).compute_on(df, cache_key='foo')
ctr.flush_cache('foo', recursive=False)
sum_clicks.compute_on(None, cache_key='foo') # sum is not flushed.
ctr.in_cache('foo')
Explanation: You can flush the cache by calling flush_cache(key, split_by=None, recursive=True, prune=True), where "recursive" means whether you also want to flush the cache of the children Metrics, and "prune" means whether, if the key is not found in the current Metric, you still want to flush the children Metrics or stop early. Stopping early is useful when a high-level Metric appears in several places: during the flush we would hit it multiple times, so pruning saves time.
End of explanation
ctr.compute_on(None, cache_key='foo')
ctr.in_cache('foo')
Explanation: Though ctr's cache has been flushed, we can still compute ctr from cache because all its children are cached.
End of explanation
ctr.flush_cache('foo')
sum_clicks.compute_on(None, cache_key='foo') # sum is flushed too.
Explanation: We won't be able to re-compute ctr if we recursively flush its cache.
End of explanation
sum_clicks = SumWithTrace('clicks')
PercentChange('country', 'US', sum_clicks).compute_on(df, cache_key=42)
sum_clicks.compute_on(None, 'country', cache_key=42)
Explanation: However, the behavior becomes subtle when Operation is involved.
End of explanation
sum_clicks.compute_on(df, cache_key=42)
sum_clicks.compute_on(df, 'country', cache_key=42)
Explanation: Note that it is sum_clicks.compute_on(df, 'country'), not sum_clicks.compute_on(df), that got saved in the cache. The reason is that we need the former, not the latter, to compute the PercentChange. Using sum_clicks.compute_on(df, cache_key=42) will always give you the right result, so it's not a big issue; it just might confuse you sometimes.
End of explanation
np.random.seed(42)
df['duration'] = np.random.random(len(df)) * 200
long_clicks = Sum('clicks', where='duration > 60')
short_clicks = Sum('clicks', where='duration < 30')
click_split = (long_clicks / short_clicks).set_name('click split')
click_split | Jackknife('cookie') | compute_on(df, 'country')
Explanation: Advanced Examples
Click Split
End of explanation
np.random.seed(42)
df['period'] = np.random.choice(('preperiod', 'postperiod'), size=size)
sum_clicks = Sum('clicks')
ctr = sum_clicks / Sum('impressions')
metrics = (sum_clicks, ctr)
preperiod_clicks = MetricList(metrics, where='period == "preperiod"')
postperiod_clicks = MetricList(metrics, where='period == "postperiod"')
pct = PercentChange('platform', 'Desktop')
did = (pct(postperiod_clicks) - pct(preperiod_clicks)).rename_columns(
['clicks% DID', 'ctr% DID'])
Jackknife('cookie', did).compute_on(df)
Explanation: Difference in differences
End of explanation
np.random.seed(42)
sum_clicks = Sum('clicks')
ctr = sum_clicks / Sum('impressions')
metrics = MetricList((sum_clicks, ctr))
(Jackknife('cookie', metrics) /
Bootstrap('cookie', metrics, 100)).rename_columns(
pd.MultiIndex.from_product(
(('sum(clicks)', 'ctr'), ('Value', 'SE')))).compute_on(df, 'country')
Explanation: Compare the standard errors between Jackknife and Bootstrap
End of explanation
np.random.seed(42)
size = 1000000
df_lin = pd.DataFrame({'grp': np.random.choice(range(10), size=size)})
df_lin['x'] = df_lin.grp + np.random.random(size=size)
df_lin['y'] = 2 * df_lin.x + np.random.random(size=size)
df_lin['cookie'] = np.random.choice(range(20), size=size)
df_lin_mean = df_lin.groupby('grp').mean()
plt.scatter(df_lin_mean.x, df_lin_mean.y)
plt.show()
from sklearn import linear_model
class LinearReg(Operation):
def __init__(self, x, y, grp):
self.lm = linear_model.LinearRegression()
# Delegate most of the computations to Mean Metrics.
child = MetricList((Mean(x), Mean(y)))
self.grp = grp
# Register grp as the extra_index.
super(LinearReg, self).__init__(child, '%s ~ %s' % (y, x), grp)
def split_data(self, df, split_by=None):
The 1st element in yield will be passed to compute().
if not split_by:
yield self.compute_child(df, self.grp), None
else:
# grp needs to come after split_by.
child = self.compute_child(df, split_by + [self.grp])
keys, indices = list(zip(*child.groupby(split_by).groups.items()))
for i, idx in enumerate(indices):
yield child.loc[idx.unique()].droplevel(split_by), keys[i]
def compute(self, df):
self.lm.fit(df.iloc[:, [0]], df.iloc[:, 1])
return pd.Series((self.lm.coef_[0], self.lm.intercept_))
lr = LinearReg('x', 'y', 'grp')
Jackknife('cookie', lr, 0.95).compute_on(df_lin)
class LinearRegSlow(Metric):
def __init__(self, x, y, grp):
self.lm = linear_model.LinearRegression()
# Doesn't delegate.
self.x = x
self.y = y
self.grp = grp
super(LinearRegSlow, self).__init__('%s ~ %s' % (y, x))
def split_data(self, df, split_by=None):
The 1st element in yield will be passed to compute().
idx = split_by + [self.grp] if split_by else self.grp
mean = df.groupby(idx).mean()
if not split_by:
yield mean, None
else:
keys, indices = list(zip(*mean.groupby(split_by).groups.items()))
for i, idx in enumerate(indices):
yield mean.loc[idx.unique()].droplevel(split_by), keys[i]
def compute(self, df):
self.lm.fit(df.iloc[:, [0]], df.iloc[:, 1])
return pd.Series((self.lm.coef_[0], self.lm.intercept_))
lr_slow = LinearRegSlow('x', 'y', 'grp')
Jackknife('cookie', lr_slow, 0.95).compute_on(df_lin)
%%timeit
Jackknife('cookie', lr, 0.95).compute_on(df_lin)
%%timeit
Jackknife('cookie', lr_slow, 0.95).compute_on(df_lin)
Explanation: Linear Regression
Here we fit a linear regression on mean values of groups. We show two versions: the former delegates computations to Mean, so its Jackknife is faster than the latter, which doesn't delegate.
End of explanation
# Mimics that measurements, y, are taken repeatedly at a fixed grid, x.
np.random.seed(42)
size = 10
x = list(range(5))
df_sin = pd.DataFrame({'x': x * size, 'cookie': np.repeat(range(size), len(x))})
df_sin['y'] = np.sin(df_sin.x) + np.random.normal(scale=0.5, size=len(df_sin.x))
df_sin.head(10)
import statsmodels.api as sm
lowess = sm.nonparametric.lowess
class Lowess(Metric):
def __init__(self, x, y, name=None, where=None):
self.x = x
self.y = y
name = name or 'LOWESS(%s ~ %s)' % (y, x)
super(Lowess, self).__init__(name, where=where)
def compute(self, data):
lowess_fit = pd.DataFrame(
lowess(data[self.y], data[self.x]), columns=[self.x, self.y])
return lowess_fit.drop_duplicates().reset_index(drop=True)
Lowess('x', 'y') | compute_on(df_sin)
jk = Lowess('x', 'y') | Jackknife('cookie', confidence=0.9) | compute_on(df_sin)
point_est = jk[('y', 'Value')]
ci_lower = jk[('y', 'Jackknife CI-lower')]
ci_upper = jk[('y', 'Jackknife CI-upper')]
plt.scatter(df_sin.x, df_sin.y)
plt.plot(x, point_est, c='g')
plt.fill_between(
x, ci_lower,
ci_upper,
color='g',
alpha=0.5)
plt.show()
Explanation: LOWESS
End of explanation
from plotnine import ggplot, geom_point, geom_ribbon, aes, ylab
y = Mean('clicks')
x = [Mean('impressions'), Variance('impressions')]
grpby = 'platform'
baseline = LinearRegression(y, x, grpby, fit_intercept=False)
shrinkage = [(Ridge(y, x, grpby, a, False) / baseline).rename_columns(
('%s::mean(impressions)' % a, '%s::var(impressions)' % a))
for a in range(10)]
jk = (MetricList(shrinkage)
| Jackknife('cookie', confidence=0.95)
| compute_on(df, melted=True)).reset_index()
jk[['penalty', 'X']] = jk.Metric.str.split('::', expand=True)
jk.penalty = jk.penalty.astype(int)
(ggplot(jk, aes('penalty', 'Value', color='X'))
+ ylab('Shrinkage')
+ geom_point()
+ geom_ribbon(
aes(ymin='Jackknife CI-lower', ymax='Jackknife CI-upper', fill='X'),
alpha=0.1))
Explanation: Coefficient Shrinkage
End of explanation |
1,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using custom containers with Vertex AI Training
Learning Objectives
Step1: Configure environment settings
Set location paths, connections strings, and other environment settings. Make sure to update REGION, and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for Vertex AI Training and Prediction
ARTIFACT_STORE - A GCS bucket in the created in the same region.
Step2: We now create the ARTIFACT_STORE bucket if it's not there. Note that this bucket should be created in the region specified in the variable REGION (if you have already a bucket with this name in a different region than REGION, you may want to change the ARTIFACT_STORE name so that you can recreate a bucket in REGION with the command in the cell below).
Step3: Importing the dataset into BigQuery
Step4: Explore the Covertype dataset
Step5: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage
Create a training split
Step6: Create a validation split
Step7: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
Step8: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
Step9: Run the pipeline locally.
Step10: Calculate the trained model's accuracy.
Step11: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
Step12: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to Vertex AI hyperparameter tuning service.
Step13: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
Step14: Build the docker image.
You use Cloud Build to build the image and push it your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
Step15: Submit an Vertex AI hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier
Step16: Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs".
Retrieve HP-tuning results.
After the job completes you can review the results using GCP Console or programmatically using the following functions (note that this code supposes that the metrics that the hyperparameter tuning engine optimizes is maximized)
Step17: You'll need to wait for the hyperparameter job to complete before being able to retrieve the best job by running the cell below.
Step20: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
Step21: Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS.
Note
Step22: Deploy the model to Vertex AI Prediction
Step23: Uploading the trained model
Step24: Deploying the uploaded model
Step25: Serve predictions
Prepare the input file with JSON formated instances. | Python Code:
!pip freeze | grep google-cloud-aiplatform || pip install google-cloud-aiplatform
import os
import time
from google.cloud import aiplatform
from google.cloud import bigquery
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
Explanation: Using custom containers with Vertex AI Training
Learning Objectives:
1. Learn how to create a training and a validation split with BigQuery
1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI
1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters
1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it
In this lab, you develop a training application, package it as a Docker image, and run it on Vertex AI Training. The application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on the Covertype Data Set from the UCI Machine Learning Repository.
The training code uses scikit-learn for data pre-processing and modeling. The code has been instrumented using the hypertune package so it can be used with Vertex AI hyperparameter tuning.
End of explanation
REGION = 'us-central1'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f'gs://{PROJECT_ID}-vertex'
DATA_ROOT = f'{ARTIFACT_STORE}/data'
JOB_DIR_ROOT = f'{ARTIFACT_STORE}/jobs'
TRAINING_FILE_PATH = f'{DATA_ROOT}/training/dataset.csv'
VALIDATION_FILE_PATH = f'{DATA_ROOT}/validation/dataset.csv'
API_ENDPOINT = f'{REGION}-aiplatform.googleapis.com'
os.environ['JOB_DIR_ROOT'] = JOB_DIR_ROOT
os.environ['TRAINING_FILE_PATH'] = TRAINING_FILE_PATH
os.environ['VALIDATION_FILE_PATH'] = VALIDATION_FILE_PATH
os.environ['PROJECT_ID'] = PROJECT_ID
os.environ['REGION'] = REGION
Explanation: Configure environment settings
Set location paths, connection strings, and other environment settings. Make sure to update REGION and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for Vertex AI Training and Prediction
ARTIFACT_STORE - A GCS bucket created in the same region.
End of explanation
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
Explanation: We now create the ARTIFACT_STORE bucket if it's not there. Note that this bucket should be created in the region specified in the variable REGION (if you already have a bucket with this name in a different region than REGION, you may want to change the ARTIFACT_STORE name so that you can recreate a bucket in REGION with the command in the cell below).
End of explanation
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
Explanation: Importing the dataset into BigQuery
End of explanation
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
Explanation: Explore the Covertype dataset
End of explanation
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
Explanation: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage
Create a training split
End of explanation
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
Explanation: Create a validation split
End of explanation
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
Explanation: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
End of explanation
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
Explanation: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
End of explanation
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
Explanation: Run the pipeline locally.
End of explanation
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
Explanation: Calculate the trained model's accuracy.
End of explanation
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
Explanation: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
Explanation: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to Vertex AI hyperparameter tuning service.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
Explanation: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
End of explanation
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
os.environ['IMAGE_URI'] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
Explanation: Build the docker image.
You use Cloud Build to build the image and push it to your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
End of explanation
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ['JOB_NAME'] = JOB_NAME
os.environ['JOB_DIR'] = JOB_DIR
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
Explanation: Submit a Vertex AI hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier:
- Max iterations
- Alpha
The file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of max_iter and the linear range between 1.0e-4 and 1.0e-1 for alpha.
End of explanation
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
match = [job for job in jobs if job.display_name == JOB_NAME]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
Explanation: Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs".
Retrieve HP-tuning results.
After the job completes you can review the results using the GCP Console or programmatically using the following functions (note that this code assumes that the metric the hyperparameter tuning engine optimizes is maximized):
End of explanation
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
Explanation: You'll need to wait for the hyperparameter job to complete before being able to retrieve the best job by running the cell below.
End of explanation
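One rough way to block until the job finishes is to poll it. This is only a sketch; it assumes the job objects returned by HyperparameterTuningJob.list() expose a state attribute, and it does not handle failed or cancelled jobs:
def wait_for_hp_tuning_job(job_name, poll_interval_secs=60):
    # Poll the tuning job until it reports a SUCCEEDED state.
    while True:
        match = [job for job in aiplatform.HyperparameterTuningJob.list()
                 if job.display_name == job_name]
        if match and 'SUCCEEDED' in str(match[0].state):
            return match[0]
        time.sleep(poll_interval_secs)

# wait_for_hp_tuning_job(JOB_NAME)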
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
Explanation: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
End of explanation
!gsutil ls $JOB_DIR
Explanation: Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS.
Note: We need to wait for the job triggered by the cell above to complete before running the cells below.
End of explanation
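As a quick sanity check, the exported artifact can be copied down and loaded locally. This sketch simply reverses the pickle.dump call in train.py:
# Copy the trained pipeline locally and confirm it unpickles and predicts.
!gsutil cp {JOB_DIR}/model.pkl .
import pickle
with open('model.pkl', 'rb') as model_file:
    trained_pipeline = pickle.load(model_file)
print(trained_pipeline.score(X_validation, y_validation))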
MODEL_NAME = 'forest_cover_classifier_2'
SERVING_CONTAINER_IMAGE_URI = 'us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest'
SERVING_MACHINE_TYPE = "n1-standard-2"
Explanation: Deploy the model to Vertex AI Prediction
End of explanation
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
Explanation: Uploading the trained model
End of explanation
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
Explanation: Deploying the uploaded model
End of explanation
instance = [2841.0, 45.0, 0.0, 644.0, 282.0, 1376.0, 218.0, 237.0, 156.0, 1003.0, "Commanche", "C4758"]
endpoint.predict([instance])
Explanation: Serve predictions
Prepare the input file with JSON formatted instances.
End of explanation |
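Because the endpoint accepts a list, several instances can be scored in a single call; the response object's predictions field holds one entry per instance. A small sketch reusing the instance above:
# Score a small batch in one request.
instances = [instance, instance]  # the same example instance twice, for illustration
response = endpoint.predict(instances)
for scores in response.predictions:
    print(scores)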
1,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Patient Data Analysis
Dataset gathered from stroke patients TODO ADD DETAILS ABOUT STROKE PATIENTS
Step2: Reaction Time & Accuracy
Here we include the reaction time and accuracy metrics from the original dataset
Step4: Does the drift rate depend on stimulus type?
Step5: Convergence Checks
Before carrying on with analysing the output of the model, we need to check that the markov chains have properly converged. There's a number of ways to do this, which the authors of the hddm library recommend$^1$. We'll begin by visually inspecting the MCMC posterior plots.
Step6: PASS - No problematic patterns, such as drifts or large jumps, can be in any of the traces above. Autocorrelation also drops to zero quite quickly when considering past samples - which is what we want.
We can also formally test for model convergence using the Gelman-Rubin R statistic$^2$, which compares the within- and between-chain variance of different runs of the same model; models converge if variables are between $0.98$ and $1.02$. A simple algorithm to check this is outlined below
Step7: PASS - Formal testing reveals no convergence problems; Gelman-Rubin R statistic values for all model variables fall within the desired range ($0.98$ to $1.02$)
Drift Rate Analysis
Here, we examine whether the type of stimulus significantly affects the drift rate of the decision-making process.
Step10: The drift rate for CP is significantly lower than both SS and US; no significant difference detected for CS
No other statistical significance detected at $p <0.05$
Step11: Does the stimulus type affect the distance between the two boundaries (threshold)?
Threshold (or a) describes the relative difference in the distance between the upper and lower response boundaries of the DDM.
We explore whether stimulus type affects the threshold / distance between the two boundaries
Step12: Convergence checks
Step13: Threshold analysis
Since models converge, we can check the posteriors for significant differences in threshold between stimuli groups as we did for drift rates.
Step14: Threshold for US is significantly larger than both SS & CS
Step15: Lumped Model | Python Code:
Environment setup
%matplotlib inline
%cd /lang_dec
import warnings; warnings.filterwarnings('ignore')
import hddm
import numpy as np
import matplotlib.pyplot as plt
from utils import model_tools
# Import patient data (as pandas dataframe)
patients_data = hddm.load_csv('/lang_dec/data/patients_clean.csv')
Explanation: Patient Data Analysis
Dataset gathered from stroke patients TODO ADD DETAILS ABOUT STROKE PATIENTS
End of explanation
us = patients_data.loc[patients_data['stim'] == 'US']
ss = patients_data.loc[patients_data['stim'] == 'SS']
cp = patients_data.loc[patients_data['stim'] == 'CP']
cs = patients_data.loc[patients_data['stim'] == 'CS']
plt.boxplot([ss.rt.values, cp.rt.values, cs.rt.values, us.rt.values],
labels=('SS', 'CP', 'CS', 'US'),)
plt.title('Comparison of Reaction Time Differences Between Stimuli Groups')
plt.show()
ss_accuracy = (len([x for x in ss.response.values if x >= 1]) / len(ss.response.values)) * 100
cp_accuracy = (len([x for x in cp.response.values if x >= 1]) / len(cp.response.values)) * 100
cs_accuracy = (len([x for x in cs.response.values if x >= 1]) / len(cs.response.values)) * 100
us_accuracy = (len([x for x in us.response.values if x >= 1]) / len(us.response.values)) * 100
print("SS Accuracy: " + str(ss_accuracy) + "%")
print("CP Accuracy: " + str(cp_accuracy) + "%")
print("CS Accuracy: " + str(cs_accuracy) + "%")
print("US Accuracy: " + str(us_accuracy) + "%")
plt.bar([1,2,3,4],
[ss_accuracy, cp_accuracy, cs_accuracy, us_accuracy])
Explanation: Reaction Time & Accuracy
Here we include the reaction time and accuracy metrics from the original dataset
End of explanation
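The per-stimulus accuracies above can also be computed in a single pass with a groupby; this equivalent pandas one-liner is included purely as a cross-check:
# Accuracy per stimulus type, in percent.
accuracy_by_stim = (patients_data.response >= 1).groupby(patients_data.stim).mean() * 100
print(accuracy_by_stim)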
Plot Drift Diffusion Model for controls
patients_model = hddm.HDDM(patients_data, depends_on={'v': 'stim'}, bias=True)
patients_model.find_starting_values()
patients_model.sample(9000, burn=200, dbname='language_decision/models/patients', db='txt')
Explanation: Does the drift rate depend on stimulus type?
End of explanation
patients_model.plot_posteriors()
Explanation: Convergence Checks
Before carrying on with analysing the output of the model, we need to check that the Markov chains have properly converged. There are a number of ways to do this, which the authors of the hddm library recommend$^1$. We'll begin by visually inspecting the MCMC posterior plots.
End of explanation
models = []
for i in range(5):
m = hddm.HDDM(patients_data, depends_on={'v': 'stim'})
m.find_starting_values()
m.sample(6000, burn=20)
models.append(m)
model_tools.check_convergence(models)
Explanation: PASS - No problematic patterns, such as drifts or large jumps, can be seen in any of the traces above. Autocorrelation also drops to zero quite quickly when considering past samples - which is what we want.
We can also formally test for model convergence using the Gelman-Rubin R statistic$^2$, which compares the within- and between-chain variance of different runs of the same model; the chains are considered converged if the statistic for all variables lies between $0.98$ and $1.02$. A simple algorithm to check this is outlined below:
End of explanation
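The same check can also be run with hddm's built-in helper. A brief sketch, assuming hddm.analyze.gelman_rubin accepts the list of fitted models and returns the R statistic per parameter:
# Gelman-Rubin R statistic for every model parameter.
r_hat = hddm.analyze.gelman_rubin(models)
problematic = {k: v for k, v in r_hat.items() if not 0.98 < v < 1.02}
print(problematic or 'All parameters fall within the 0.98-1.02 range.')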
patients_stats = patients_model.gen_stats()
print("Threshold (a) Mean: " + str(patients_stats['mean']['a']) + " (std: " + str(patients_stats['std']['a']) + ")")
print("Non-Decision (t) Mean: " + str(patients_stats['mean']['t']) + " (std: " + str(patients_stats['std']['t']) + ")")
print("Bias (z) Mean: " + str(patients_stats['mean']['z']) + " (std: " + str(patients_stats['std']['z']) + ")")
print("SS Mean Drift Rate: " + str(patients_stats['mean']['v(SS)']) + " (std: " + str(patients_stats['std']['v(SS)']) + ")")
print("CP Mean Drift Rate: " + str(patients_stats['mean']['v(CP)']) + " (std: " + str(patients_stats['std']['v(CP)']) + ")")
print("CS Mean Drift Rate: " + str(patients_stats['mean']['v(CS)']) + " (std: " + str(patients_stats['std']['v(CS)']) + ")")
print("US Mean Drift Rate: " + str(patients_stats['mean']['v(US)']) + " (std: " + str(patients_stats['std']['v(US)']) + ")")
v_SS, v_CP, v_CS, v_US = patients_model.nodes_db.node[['v(SS)', 'v(CP)', 'v(CS)', 'v(US)']]
hddm.analyze.plot_posterior_nodes([v_SS, v_CP, v_CS, v_US])
print('P(SS > US) = ' + str((v_SS.trace() > v_US.trace()).mean()))
print('P(CP > SS) = ' + str((v_CP.trace() > v_SS.trace()).mean()))
print('P(CS > SS) = ' + str((v_CS.trace() > v_SS.trace()).mean()))
print('P(CP > CS) = ' + str((v_CP.trace() > v_CS.trace()).mean()))
print('P(CP > US) = ' + str((v_CP.trace() > v_US.trace()).mean()))
print('P(CS > US) = ' + str((v_CS.trace() > v_US.trace()).mean()))
Explanation: PASS - Formal testing reveals no convergence problems; Gelman-Rubin R statistic values for all model variables fall within the desired range ($0.98$ to $1.02$)
Drift Rate Analysis
Here, we examine whether the type of stimulus significantly affects the drift rate of the decision-making process.
End of explanation
Distribution for the non-decision time t
time_nondec = patients_model.nodes_db.node[['t']]
hddm.analyze.plot_posterior_nodes(time_nondec)
Distribution of bias z
z = patients_model.nodes_db.node[['z']]
hddm.analyze.plot_posterior_nodes(z)
Explanation: The drift rate for CP is significantly lower than for both SS and US; no significant difference was detected for CS.
No other statistically significant differences were detected at $p < 0.05$.
End of explanation
patients_model_threshold = hddm.HDDM(patients_data, depends_on={'v': 'stim', 'a': 'stim'}, bias=True)
patients_model_threshold.find_starting_values()
patients_model_threshold.sample(10000, burn=200, dbname='language_decision/models/patients_threshold', db='txt')
Explanation: Does the stimulus type affect the distance between the two boundaries (threshold)?
Threshold (or a) describes the distance between the upper and lower response boundaries of the DDM.
We explore whether stimulus type affects this threshold, i.e. the distance between the two boundaries.
End of explanation
models_threshold = []
for i in range(5):
m = hddm.HDDM(patients_data, depends_on={'v': 'stim', 'a': 'stim'})
m.find_starting_values()
m.sample(6000, burn=20)
models_threshold.append(m)
model_tools.check_convergence(models_threshold)
Explanation: Convergence checks
End of explanation
a_SS, a_CP, a_CS, a_US = patients_model_threshold.nodes_db.node[['a(SS)', 'a(CP)', 'a(CS)', 'a(US)']]
hddm.analyze.plot_posterior_nodes([a_SS, a_CP, a_CS, a_US])
print('P(SS > US) = ' + str((a_SS.trace() > a_US.trace()).mean()))
print('P(SS > CS) = ' + str((a_SS.trace() > a_CS.trace()).mean()))
print('P(CP > SS) = ' + str((a_CP.trace() > a_SS.trace()).mean()))
print('P(CP > CS) = ' + str((a_CP.trace() > a_CS.trace()).mean()))
print('P(CP > US) = ' + str((a_CP.trace() > a_US.trace()).mean()))
print('P(CS > US) = ' + str((a_CS.trace() > a_US.trace()).mean()))
Explanation: Threshold analysis
Since the models converge, we can check the posteriors for significant differences in threshold between stimulus groups, as we did for the drift rates.
End of explanation
print("a(US) mean: " + str(a_US.trace().mean()))
print("a(SS) mean: " + str(a_SS.trace().mean()))
print("a(CS) mean: " + str(a_CS.trace().mean()))
print("a(CP) mean: " + str(a_CP.trace().mean()))
Explanation: Threshold for US is significantly larger than both SS & CS
End of explanation
patients_model_lumped = hddm.HDDM(patients_data)
patients_model_lumped.find_starting_values()
patients_model_lumped.sample(10000, burn=200, dbname='language_decision/models/patients_lumped', db='txt')
patients_model_lumped.plot_posteriors()
Explanation: Lumped Model
End of explanation |
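A natural follow-up is to compare the lumped model against the stimulus-dependent one. HDDM models expose a DIC value after sampling, which gives a rough fit comparison (lower is better); a minimal sketch:
# Compare model fit via the Deviance Information Criterion.
print('Stimulus-dependent model DIC: %.2f' % patients_model.dic)
print('Lumped model DIC: %.2f' % patients_model_lumped.dic)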
1,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cavity flow with Navier-Stokes
The final two steps will both solve the Navier–Stokes equations in two dimensions, but with different boundary conditions.
The momentum equation in vector form for a velocity field v⃗
is
Step1: The pressure Poisson equation that's written above can be hard to write out without typos. The function build_up_b below represents the contents of the square brackets, so that the entirety of the Poisson pressure equation is slightly more manageable.
Step2: The function pressure_poisson is also defined to help segregate the different rounds of calculations. Note the presence of the pseudo-time variable nit. This sub-iteration in the Poisson calculation helps ensure a divergence-free field.
Step3: Finally, the rest of the cavity flow equations are wrapped inside the function cavity_flow, allowing us to easily plot the results of the cavity flow solver for different lengths of time.
Step4: Validation
Marchi et al (2009)$^1$ compared numerical implementations of the lid driven cavity problem with their solution on a 1024 x 1024 nodes grid. We will compare a solution using both NumPy and Devito with the results of their paper below.
https
Step5: Devito Implementation
Step6: Reminder
Step7: Validation
Step8: The Devito implementation produces results consistent with the benchmark solution. There is a small disparity in a few of the velocity values, but this is expected as the Devito 41 x 41 node grid is much coarser than the benchmark on a 1024 x 1024 node grid.
Comparison | Python Code:
import numpy as np
from matplotlib import pyplot, cm
%matplotlib inline
nx = 41
ny = 41
nt = 1000
nit = 50
c = 1
dx = 1. / (nx - 1)
dy = 1. / (ny - 1)
x = np.linspace(0, 1, nx)
y = np.linspace(0, 1, ny)
Y, X = np.meshgrid(x, y)
rho = 1
nu = .1
dt = .001
u = np.zeros((nx, ny))
v = np.zeros((nx, ny))
p = np.zeros((nx, ny))
Explanation: Cavity flow with Navier-Stokes
The final two steps will both solve the Navier–Stokes equations in two dimensions, but with different boundary conditions.
The momentum equation in vector form for a velocity field v⃗
is:
$$ \frac{\partial \overrightarrow{v}}{\partial t} + (\overrightarrow{v} \cdot \nabla ) \overrightarrow{v} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \overrightarrow{v}$$
This represents three scalar equations, one for each velocity component (u,v,w). But we will solve it in two dimensions, so there will be two scalar equations.
Remember the continuity equation? This is where the Poisson equation for pressure comes in!
Here is the system of differential equations: two equations for the velocity components u,v and one equation for pressure:
$$ \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu \left[ \frac{\partial^2 u}{\partial x^2} +\frac{\partial^2 u}{\partial y^2} \right] $$
$$ \frac{\partial v}{\partial t} + u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu \left[ \frac{\partial^2 v}{\partial x^2} +\frac{\partial^2 v}{\partial y^2} \right] $$
$$
\frac{\partial^2 p}{\partial x^2} +\frac{\partial^2 p}{\partial y^2} =
\rho \left[\frac{\partial}{\partial t} \left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) - \left(\frac{\partial u}{\partial x}\frac{\partial u}{\partial x}+2\frac{\partial u}{\partial y}\frac{\partial v}{\partial x}+\frac{\partial v}{\partial y}\frac{\partial v}{\partial y} \right) \right]
$$
From the previous steps, we already know how to discretize all these terms. Only the last equation is a little unfamiliar. But with a little patience, it will not be hard!
Our stencils look like this:
First the momentum equation in the u direction
$$
\begin{split}
u_{i,j}^{n+1} = u_{i,j}^{n} & - u_{i,j}^{n} \frac{\Delta t}{\Delta x} \left(u_{i,j}^{n}-u_{i-1,j}^{n}\right) - v_{i,j}^{n} \frac{\Delta t}{\Delta y} \left(u_{i,j}^{n}-u_{i,j-1}^{n}\right) \
& - \frac{\Delta t}{\rho 2\Delta x} \left(p_{i+1,j}^{n}-p_{i-1,j}^{n}\right) \
& + \nu \left(\frac{\Delta t}{\Delta x^2} \left(u_{i+1,j}^{n}-2u_{i,j}^{n}+u_{i-1,j}^{n}\right) + \frac{\Delta t}{\Delta y^2} \left(u_{i,j+1}^{n}-2u_{i,j}^{n}+u_{i,j-1}^{n}\right)\right)
\end{split}
$$
Second the momentum equation in the v direction
$$
\begin{split}
v_{i,j}^{n+1} = v_{i,j}^{n} & - u_{i,j}^{n} \frac{\Delta t}{\Delta x} \left(v_{i,j}^{n}-v_{i-1,j}^{n}\right) - v_{i,j}^{n} \frac{\Delta t}{\Delta y} \left(v_{i,j}^{n}-v_{i,j-1}^{n})\right) \
& - \frac{\Delta t}{\rho 2\Delta y} \left(p_{i,j+1}^{n}-p_{i,j-1}^{n}\right) \
& + \nu \left(\frac{\Delta t}{\Delta x^2} \left(v_{i+1,j}^{n}-2v_{i,j}^{n}+v_{i-1,j}^{n}\right) + \frac{\Delta t}{\Delta y^2} \left(v_{i,j+1}^{n}-2v_{i,j}^{n}+v_{i,j-1}^{n}\right)\right)
\end{split}
$$
Finally the pressure-Poisson equation
$$\begin{split}
p_{i,j}^{n} = & \frac{\left(p_{i+1,j}^{n}+p_{i-1,j}^{n}\right) \Delta y^2 + \left(p_{i,j+1}^{n}+p_{i,j-1}^{n}\right) \Delta x^2}{2\left(\Delta x^2+\Delta y^2\right)} \
& -\frac{\rho\Delta x^2\Delta y^2}{2\left(\Delta x^2+\Delta y^2\right)} \
& \times \left[\frac{1}{\Delta t}\left(\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}+\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\right)-\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}\frac{u_{i+1,j}-u_{i-1,j}}{2\Delta x}\right. \
& \left. -2\frac{u_{i,j+1}-u_{i,j-1}}{2\Delta y}\frac{v_{i+1,j}-v_{i-1,j}}{2\Delta x}-\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y}\frac{v_{i,j+1}-v_{i,j-1}}{2\Delta y} \right]
\end{split}
$$
The initial condition is $u,v,p=0$
everywhere, and the boundary conditions are:
$u=1$ at $y=1$ (the "lid");
$u,v=0$ on the other boundaries;
$\frac{\partial p}{\partial y}=0$ at $y=0,1$;
$\frac{\partial p}{\partial x}=0$ at $x=0,1$
$p=0$ at $(0,0)$
Interestingly these boundary conditions describe a well known problem in the Computational Fluid Dynamics realm, where it is known as the lid driven square cavity flow problem.
Numpy Implementation
End of explanation
def build_up_b(b, rho, dt, u, v, dx, dy):
b[1:-1, 1:-1] = (rho * (1 / dt *
((u[2:, 1:-1] - u[0:-2, 1:-1]) /
(2 * dx) + (v[1:-1, 2:] - v[1:-1, 0:-2]) / (2 * dy)) -
((u[2:, 1:-1] - u[0:-2, 1:-1]) / (2 * dx))**2 -
2 * ((u[1:-1, 2:] - u[1:-1, 0:-2]) / (2 * dy) *
(v[2:, 1:-1] - v[0:-2, 1:-1]) / (2 * dx))-
((v[1:-1, 2:] - v[1:-1, 0:-2]) / (2 * dy))**2))
return b
Explanation: The pressure Poisson equation that's written above can be hard to write out without typos. The function build_up_b below represents the contents of the square brackets, so that the entirety of the Poisson pressure equation is slightly more manageable.
End of explanation
def pressure_poisson(p, dx, dy, b):
pn = np.empty_like(p)
pn = p.copy()
for q in range(nit):
pn = p.copy()
p[1:-1, 1:-1] = (((pn[2:, 1:-1] + pn[0:-2, 1:-1]) * dy**2 +
(pn[1:-1, 2:] + pn[1:-1, 0:-2]) * dx**2) /
(2 * (dx**2 + dy**2)) -
dx**2 * dy**2 / (2 * (dx**2 + dy**2)) *
b[1:-1,1:-1])
p[-1, :] = p[-2, :] # dp/dx = 0 at x = 1
p[:, 0] = p[:, 1] # dp/dy = 0 at y = 0
p[0, :] = p[1, :] # dp/dx = 0 at x = 0
p[:, -1] = p[:, -2] # dp/dy = 0 at y = 1
p[0, 0] = 0 # p = 0 at (0, 0)
return p, pn
Explanation: The function pressure_poisson is also defined to help segregate the different rounds of calculations. Note the presence of the pseudo-time variable nit. This sub-iteration in the Poisson calculation helps ensure a divergence-free field.
End of explanation
def cavity_flow(nt, u, v, dt, dx, dy, p, rho, nu):
un = np.empty_like(u)
vn = np.empty_like(v)
b = np.zeros((nx, ny))
for n in range(0,nt):
un = u.copy()
vn = v.copy()
b = build_up_b(b, rho, dt, u, v, dx, dy)
p = pressure_poisson(p, dx, dy, b)[0]
pn = pressure_poisson(p, dx, dy, b)[1]
u[1:-1, 1:-1] = (un[1:-1, 1:-1]-
un[1:-1, 1:-1] * dt / dx *
(un[1:-1, 1:-1] - un[0:-2, 1:-1]) -
vn[1:-1, 1:-1] * dt / dy *
(un[1:-1, 1:-1] - un[1:-1, 0:-2]) -
dt / (2 * rho * dx) * (p[2:, 1:-1] - p[0:-2, 1:-1]) +
nu * (dt / dx**2 *
(un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[0:-2, 1:-1]) +
dt / dy**2 *
(un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2])))
v[1:-1,1:-1] = (vn[1:-1, 1:-1] -
un[1:-1, 1:-1] * dt / dx *
(vn[1:-1, 1:-1] - vn[0:-2, 1:-1]) -
vn[1:-1, 1:-1] * dt / dy *
(vn[1:-1, 1:-1] - vn[1:-1, 0:-2]) -
dt / (2 * rho * dy) * (p[1:-1, 2:] - p[1:-1, 0:-2]) +
nu * (dt / dx**2 *
(vn[2:, 1:-1] - 2 * vn[1:-1, 1:-1] + vn[0:-2, 1:-1]) +
dt / dy**2 *
(vn[1:-1, 2:] - 2 * vn[1:-1, 1:-1] + vn[1:-1, 0:-2])))
u[:, 0] = 0
u[0, :] = 0
u[-1, :] = 0
u[:, -1] = 1 # Set velocity on cavity lid equal to 1
v[:, 0] = 0
v[:, -1] = 0
v[0, :] = 0
v[-1, :] = 0
return u, v, p, pn
#NBVAL_IGNORE_OUTPUT
u = np.zeros((nx, ny))
v = np.zeros((nx, ny))
p = np.zeros((nx, ny))
b = np.zeros((nx, ny))
nt = 1000
# Store the output velocity and pressure fields in the variables a, b and c.
# This is so they do not clash with the devito outputs below.
a, b, c, d = cavity_flow(nt, u, v, dt, dx, dy, p, rho, nu)
fig = pyplot.figure(figsize=(11, 7), dpi=100)
pyplot.contourf(X, Y, c, alpha=0.5, cmap=cm.viridis)
pyplot.colorbar()
pyplot.contour(X, Y, c, cmap=cm.viridis)
pyplot.quiver(X[::2, ::2], Y[::2, ::2], a[::2, ::2], b[::2, ::2])
pyplot.xlabel('X')
pyplot.ylabel('Y');
Explanation: Finally, the rest of the cavity flow equations are wrapped inside the function cavity_flow, allowing us to easily plot the results of the cavity flow solver for different lengths of time.
End of explanation
# Import u values at x=L/2 (table 6, column 2 rows 12-26) in Marchi et al.
Marchi_Re10_u = np.array([[0.0625, -3.85425800e-2],
[0.125, -6.96238561e-2],
[0.1875, -9.6983962e-2],
[0.25, -1.22721979e-1],
[0.3125, -1.47636199e-1],
[0.375, -1.71260757e-1],
[0.4375, -1.91677043e-1],
[0.5, -2.05164738e-1],
[0.5625, -2.05770198e-1],
[0.625, -1.84928116e-1],
[0.6875, -1.313892353e-1],
[0.75, -3.1879308e-2],
[0.8125, 1.26912095e-1],
[0.875, 3.54430364e-1],
[0.9375, 6.50529292e-1]])
# Import v values at y=L/2 (table 6, column 2 rows 27-41) in Marchi et al.
Marchi_Re10_v = np.array([[0.0625, 9.2970121e-2],
[0.125, 1.52547843e-1],
[0.1875, 1.78781456e-1],
[0.25, 1.76415100e-1],
[0.3125, 1.52055820e-1],
[0.375, 1.121477612e-1],
[0.4375, 6.21048147e-2],
[0.5, 6.3603620e-3],
[0.5625,-5.10417285e-2],
[0.625, -1.056157259e-1],
[0.6875,-1.51622101e-1],
[0.75, -1.81633561e-1],
[0.8125,-1.87021651e-1],
[0.875, -1.59898186e-1],
[0.9375,-9.6409942e-2]])
#NBVAL_IGNORE_OUTPUT
# Check results with Marchi et al 2009.
npgrid=[nx,ny]
x_coord = np.linspace(0, 1, npgrid[0])
y_coord = np.linspace(0, 1, npgrid[1])
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(a[int(npgrid[0]/2),:],y_coord[:])
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],b[:,int(npgrid[1]/2)])
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
pyplot.show()
Explanation: Validation
Marchi et al (2009)$^1$ compared numerical implementations of the lid driven cavity problem with their solution on a 1024 x 1024 nodes grid. We will compare a solution using both NumPy and Devito with the results of their paper below.
https://www.scielo.br/scielo.php?pid=S1678-58782009000300004&script=sci_arttext
End of explanation
from devito import Grid
grid = Grid(shape=(nx, ny), extent=(1., 1.))
x, y = grid.dimensions
t = grid.stepping_dim
Explanation: Devito Implementation
End of explanation
from devito import TimeFunction, Function, \
Eq, solve, Operator, configuration
# Build Required Functions and derivatives:
# --------------------------------------
# |Variable | Required Derivatives |
# --------------------------------------
# | u | dt, dx, dy, dx**2, dy**2 |
# | v | dt, dx, dy, dx**2, dy**2 |
# | p | dx, dy, dx**2, dy**2 |
# | pn | dx, dy, dx**2, dy**2 |
# --------------------------------------
u = TimeFunction(name='u', grid=grid, space_order=2)
v = TimeFunction(name='v', grid=grid, space_order=2)
p = TimeFunction(name='p', grid=grid, space_order=2)
#Variables are automatically initalized at 0.
# First order derivatives will be handled with p.dxc
eq_u =Eq(u.dt + u*u.dx + v*u.dy, -1./rho * p.dxc + nu*(u.laplace), subdomain=grid.interior)
eq_v =Eq(v.dt + u*v.dx + v*v.dy, -1./rho * p.dyc + nu*(v.laplace), subdomain=grid.interior)
eq_p =Eq(p.laplace,rho*(1./dt*(u.dxc+v.dyc)-(u.dxc*u.dxc)+2*(u.dyc*v.dxc)+(v.dyc*v.dyc)), subdomain=grid.interior)
# NOTE: Pressure has no time dependence so we solve for the other pressure buffer.
stencil_u =solve(eq_u , u.forward)
stencil_v =solve(eq_v , v.forward)
stencil_p=solve(eq_p, p)
update_u =Eq(u.forward, stencil_u)
update_v =Eq(v.forward, stencil_v)
update_p =Eq(p.forward, stencil_p)
# Boundary Conds. u=v=0 for all sides
bc_u = [Eq(u[t+1, 0, y], 0)]
bc_u += [Eq(u[t+1, nx-1, y], 0)]
bc_u += [Eq(u[t+1, x, 0], 0)]
bc_u += [Eq(u[t+1, x, ny-1], 1)] # except u=1 on the lid (y=1)
bc_v = [Eq(v[t+1, 0, y], 0)]
bc_v += [Eq(v[t+1, nx-1, y], 0)]
bc_v += [Eq(v[t+1, x, ny-1], 0)]
bc_v += [Eq(v[t+1, x, 0], 0)]
bc_p = [Eq(p[t+1, 0, y], p[t+1, 1, y])] # dp/dx = 0 at x=0
bc_p += [Eq(p[t+1, nx-1, y], p[t+1, nx-2, y])] # dp/dx = 0 at x=1
bc_p += [Eq(p[t+1, x, 0], p[t+1, x, 1])] # dp/dy = 0 at y=0
bc_p += [Eq(p[t+1, x, ny-1], p[t+1, x, ny-2])] # dp/dy = 0 at y=1
bc_p += [Eq(p[t+1, 0, 0], 0)] # p = 0 at (0, 0)
bc=bc_u+bc_v
optime=Operator([update_u, update_v]+bc_u+bc_v)
oppres=Operator([update_p]+bc_p)
# Silence non-essential outputs from the solver.
configuration['log-level'] = 'ERROR'
# This is the time loop.
for step in range(0,nt):
if step>0:
oppres(time_M = nit)
optime(time_m=step, time_M=step, dt=dt)
#NBVAL_IGNORE_OUTPUT
fig = pyplot.figure(figsize=(11,7), dpi=100)
# Plotting the pressure field as a contour.
pyplot.contourf(X, Y, p.data[0], alpha=0.5, cmap=cm.viridis)
pyplot.colorbar()
# Plotting the pressure field outlines.
pyplot.contour(X, Y, p.data[0], cmap=cm.viridis)
# Plotting velocity field.
pyplot.quiver(X[::2,::2], Y[::2,::2], u.data[0,::2,::2], v.data[0,::2,::2])
pyplot.xlabel('X')
pyplot.ylabel('Y');
Explanation: Reminder: here are our equations
$$ \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu \left[ \frac{\partial^2 u}{\partial x^2} +\frac{\partial^2 u}{\partial y^2} \right] $$
$$ \frac{\partial v}{\partial t} + u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y}= -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu \left[ \frac{\partial^2 v}{\partial x^2} +\frac{\partial^2 v}{\partial y^2} \right] $$
$$
\frac{\partial^2 p}{\partial x^2} +\frac{\partial^2 p}{\partial y^2} =
\rho \left[\frac{\partial}{\partial t} \left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right) - \left(\frac{\partial u}{\partial x}\frac{\partial u}{\partial x}+2\frac{\partial u}{\partial y}\frac{\partial v}{\partial x}+\frac{\partial v}{\partial y}\frac{\partial v}{\partial y} \right) \right]
$$
Note that p has no time dependence, so we are going to solve for p in pseudotime, then move to the next time step and solve for u and v. This will require two operators: one for p (updating its two buffers) in pseudotime and one for u and v in time.
As shown in the Poisson equation tutorial, a TimeFunction can be used despite the lack of a time dependence. This will cause Devito to allocate two grid buffers, which we can address directly via the terms p and p.forward. The internal pseudotime loop can be controlled by supplying the number of pseudotime steps (iterations) as a time argument to the operator.
The time steps are advanced through a Python loop where a separate operator calculates u and v.
Also note that we need to use first-order spatial derivatives for the velocities, and these derivatives are not the maximum spatial derivative order (2nd order) in these equations. This is the first time we have seen this in this tutorial series (previously we have only used a single spatial derivative order).
To use a first-order derivative of a Devito function, we use the syntax function.dxc or function.dyc for the x and y derivatives respectively.
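As a small aside (not part of the original notebook), these derivative shorthands are plain symbolic objects, so one quick way to see what they stand for is to print them on a throwaway Function defined on the same grid. A minimal sketch, assuming the grid defined above:
~~~
from devito import Function
f = Function(name='f', grid=grid, space_order=2)
print(f.dxc)      # centred first derivative in x
print(f.dyc)      # centred first derivative in y
print(f.laplace)  # shorthand for f.dx2 + f.dy2
~~~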
End of explanation
#NBVAL_IGNORE_OUTPUT
# Again, check results with Marchi et al 2009.
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(u.data[0,int(grid.shape[0]/2),:],y_coord[:])
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],v.data[0,:,int(grid.shape[0]/2)])
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
pyplot.show()
Explanation: Validation
End of explanation
#NBVAL_IGNORE_OUTPUT
fig = pyplot.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121)
ax1.plot(a[int(npgrid[0]/2),:],y_coord[:])
ax1.plot(u.data[0,int(grid.shape[0]/2),:],y_coord[:],'--')
ax1.plot(Marchi_Re10_u[:,1],Marchi_Re10_u[:,0],'ro')
ax1.set_xlabel('$u$')
ax1.set_ylabel('$y$')
ax1 = fig.add_subplot(122)
ax1.plot(x_coord[:],b[:,int(npgrid[1]/2)])
ax1.plot(x_coord[:],v.data[0,:,int(grid.shape[0]/2)],'--')
ax1.plot(Marchi_Re10_v[:,0],Marchi_Re10_v[:,1],'ro')
ax1.set_xlabel('$x$')
ax1.set_ylabel('$v$')
ax1.legend(['numpy','devito','Marchi (2009)'])
pyplot.show()
#Pressure norm check
tol = 1e-3
assert np.sum((c[:,:]-d[:,:])**2/ np.maximum(d[:,:]**2,1e-10)) < tol
assert np.sum((p.data[0]-p.data[1])**2/np.maximum(p.data[0]**2,1e-10)) < tol
Explanation: The Devito implementation produces results consistent with the benchmark solution. There is a small disparity in a few of the velocity values, but this is expected as the Devito 41 x 41 node grid is much coarser than the benchmark on a 1024 x 1024 node grid.
Comparison
End of explanation |
1,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: As discussed in Computation on NumPy Arrays
Step2: But this abstraction can become less efficient when computing compound expressions.
For example, consider the following expression
Step3: Because NumPy evaluates each subexpression, this is roughly equivalent to the following
Step4: In other words, every intermediate step is explicitly allocated in memory. If the x and y arrays are very large, this can lead to significant memory and computational overhead.
The Numexpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays.
The Numexpr documentation has more details, but for the time being it is sufficient to say that the library accepts a string giving the NumPy-style expression you'd like to compute
Step5: The benefit here is that Numexpr evaluates the expression in a way that does not use full-sized temporary arrays, and thus can be much more efficient than NumPy, especially for large arrays.
The Pandas eval() and query() tools that we will discuss here are conceptually similar, and depend on the Numexpr package.
pandas.eval() for Efficient Operations
The eval() function in Pandas uses string expressions to efficiently compute operations using DataFrames.
For example, consider the following DataFrames
Step6: To compute the sum of all four DataFrames using the typical Pandas approach, we can just write the sum
Step7: The same result can be computed via pd.eval by constructing the expression as a string
Step8: The eval() version of this expression is about 50% faster (and uses much less memory), while giving the same result
Step9: Operations supported by pd.eval()
As of Pandas v0.16, pd.eval() supports a wide range of operations.
To demonstrate these, we'll use the following integer DataFrames
Step10: Arithmetic operators
pd.eval() supports all arithmetic operators. For example
Step11: Comparison operators
pd.eval() supports all comparison operators, including chained expressions
Step12: Bitwise operators
pd.eval() supports the & and | bitwise operators
Step13: In addition, it supports the use of the literal and and or in Boolean expressions
Step14: Object attributes and indices
pd.eval() supports access to object attributes via the obj.attr syntax, and indexes via the obj[index] syntax
Step15: Other operations
Other operations such as function calls, conditional statements, loops, and other more involved constructs are currently not implemented in pd.eval().
If you'd like to execute these more complicated types of expressions, you can use the Numexpr library itself.
DataFrame.eval() for Column-Wise Operations
Just as Pandas has a top-level pd.eval() function, DataFrames have an eval() method that works in similar ways.
The benefit of the eval() method is that columns can be referred to by name.
We'll use this labeled array as an example
Step16: Using pd.eval() as above, we can compute expressions with the three columns like this
Step17: The DataFrame.eval() method allows much more succinct evaluation of expressions with the columns
Step18: Notice here that we treat column names as variables within the evaluated expression, and the result is what we would wish.
Assignment in DataFrame.eval()
In addition to the options just discussed, DataFrame.eval() also allows assignment to any column.
Let's use the DataFrame from before, which has columns 'A', 'B', and 'C'
Step19: We can use df.eval() to create a new column 'D' and assign to it a value computed from the other columns
Step20: In the same way, any existing column can be modified
Step21: Local variables in DataFrame.eval()
The DataFrame.eval() method supports an additional syntax that lets it work with local Python variables.
Consider the following
Step22: The @ character here marks a variable name rather than a column name, and lets you efficiently evaluate expressions involving the two "namespaces"
Step23: As with the example used in our discussion of DataFrame.eval(), this is an expression involving columns of the DataFrame.
It cannot be expressed using the DataFrame.eval() syntax, however!
Instead, for this type of filtering operation, you can use the query() method
Step24: In addition to being a more efficient computation, compared to the masking expression this is much easier to read and understand.
Note that the query() method also accepts the @ flag to mark local variables
Step25: Performance
Step26: Is roughly equivalent to this
Step27: If the size of the temporary DataFrames is significant compared to your available system memory (typically several gigabytes) then it's a good idea to use an eval() or query() expression.
You can check the approximate size of your array in bytes using this | Python Code:
import numpy as np
rng = np.random.RandomState(42)
x = rng.rand(1000000)
y = rng.rand(1000000)
%timeit x + y
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< Working with Time Series | Contents | Further Resources >
High-Performance Pandas: eval() and query()
As we've already seen in previous sections, the power of the PyData stack is built upon the ability of NumPy and Pandas to push basic operations into C via an intuitive syntax: examples are vectorized/broadcasted operations in NumPy, and grouping-type operations in Pandas.
While these abstractions are efficient and effective for many common use cases, they often rely on the creation of temporary intermediate objects, which can cause undue overhead in computational time and memory use.
As of version 0.13 (released January 2014), Pandas includes some experimental tools that allow you to directly access C-speed operations without costly allocation of intermediate arrays.
These are the eval() and query() functions, which rely on the Numexpr package.
In this notebook we will walk through their use and give some rules-of-thumb about when you might think about using them.
Motivating query() and eval(): Compound Expressions
We've seen previously that NumPy and Pandas support fast vectorized operations; for example, when adding the elements of two arrays:
End of explanation
%timeit np.fromiter((xi + yi for xi, yi in zip(x, y)), dtype=x.dtype, count=len(x))
Explanation: As discussed in Computation on NumPy Arrays: Universal Functions, this is much faster than doing the addition via a Python loop or comprehension:
End of explanation
mask = (x > 0.5) & (y < 0.5)
Explanation: But this abstraction can become less efficient when computing compound expressions.
For example, consider the following expression:
End of explanation
tmp1 = (x > 0.5)
tmp2 = (y < 0.5)
mask = tmp1 & tmp2
Explanation: Because NumPy evaluates each subexpression, this is roughly equivalent to the following:
End of explanation
import numexpr
mask_numexpr = numexpr.evaluate('(x > 0.5) & (y < 0.5)')
np.allclose(mask, mask_numexpr)
Explanation: In other words, every intermediate step is explicitly allocated in memory. If the x and y arrays are very large, this can lead to significant memory and computational overhead.
The Numexpr library gives you the ability to compute this type of compound expression element by element, without the need to allocate full intermediate arrays.
The Numexpr documentation has more details, but for the time being it is sufficient to say that the library accepts a string giving the NumPy-style expression you'd like to compute:
End of explanation
import pandas as pd
nrows, ncols = 100000, 100
rng = np.random.RandomState(42)
df1, df2, df3, df4 = (pd.DataFrame(rng.rand(nrows, ncols))
for i in range(4))
Explanation: The benefit here is that Numexpr evaluates the expression in a way that does not use full-sized temporary arrays, and thus can be much more efficient than NumPy, especially for large arrays.
The Pandas eval() and query() tools that we will discuss here are conceptually similar, and depend on the Numexpr package.
pandas.eval() for Efficient Operations
The eval() function in Pandas uses string expressions to efficiently compute operations using DataFrames.
For example, consider the following DataFrames:
End of explanation
%timeit df1 + df2 + df3 + df4
Explanation: To compute the sum of all four DataFrames using the typical Pandas approach, we can just write the sum:
End of explanation
%timeit pd.eval('df1 + df2 + df3 + df4')
Explanation: The same result can be computed via pd.eval by constructing the expression as a string:
End of explanation
np.allclose(df1 + df2 + df3 + df4,
pd.eval('df1 + df2 + df3 + df4'))
Explanation: The eval() version of this expression is about 50% faster (and uses much less memory), while giving the same result:
End of explanation
df1, df2, df3, df4, df5 = (pd.DataFrame(rng.randint(0, 1000, (100, 3)))
for i in range(5))
Explanation: Operations supported by pd.eval()
As of Pandas v0.16, pd.eval() supports a wide range of operations.
To demonstrate these, we'll use the following integer DataFrames:
End of explanation
result1 = -df1 * df2 / (df3 + df4) - df5
result2 = pd.eval('-df1 * df2 / (df3 + df4) - df5')
np.allclose(result1, result2)
Explanation: Arithmetic operators
pd.eval() supports all arithmetic operators. For example:
End of explanation
result1 = (df1 < df2) & (df2 <= df3) & (df3 != df4)
result2 = pd.eval('df1 < df2 <= df3 != df4')
np.allclose(result1, result2)
Explanation: Comparison operators
pd.eval() supports all comparison operators, including chained expressions:
End of explanation
result1 = (df1 < 0.5) & (df2 < 0.5) | (df3 < df4)
result2 = pd.eval('(df1 < 0.5) & (df2 < 0.5) | (df3 < df4)')
np.allclose(result1, result2)
Explanation: Bitwise operators
pd.eval() supports the & and | bitwise operators:
End of explanation
result3 = pd.eval('(df1 < 0.5) and (df2 < 0.5) or (df3 < df4)')
np.allclose(result1, result3)
Explanation: In addition, it supports the use of the literal and and or in Boolean expressions:
End of explanation
result1 = df2.T[0] + df3.iloc[1]
result2 = pd.eval('df2.T[0] + df3.iloc[1]')
np.allclose(result1, result2)
Explanation: Object attributes and indices
pd.eval() supports access to object attributes via the obj.attr syntax, and indexes via the obj[index] syntax:
End of explanation
df = pd.DataFrame(rng.rand(1000, 3), columns=['A', 'B', 'C'])
df.head()
Explanation: Other operations
Other operations such as function calls, conditional statements, loops, and other more involved constructs are currently not implemented in pd.eval().
If you'd like to execute these more complicated types of expressions, you can use the Numexpr library itself.
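For instance, here is a hedged sketch (reusing the x array defined at the start of this notebook) of falling back to Numexpr for an expression with function calls and a conditional, neither of which pd.eval() accepts:
~~~
import numexpr
# where(), sin() and cos() are evaluated element-wise without full temporaries
numexpr.evaluate('where(x > 0.5, sin(x), cos(x))')
~~~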
DataFrame.eval() for Column-Wise Operations
Just as Pandas has a top-level pd.eval() function, DataFrames have an eval() method that works in similar ways.
The benefit of the eval() method is that columns can be referred to by name.
We'll use this labeled array as an example:
End of explanation
result1 = (df['A'] + df['B']) / (df['C'] - 1)
result2 = pd.eval("(df.A + df.B) / (df.C - 1)")
np.allclose(result1, result2)
Explanation: Using pd.eval() as above, we can compute expressions with the three columns like this:
End of explanation
result3 = df.eval('(A + B) / (C - 1)')
np.allclose(result1, result3)
Explanation: The DataFrame.eval() method allows much more succinct evaluation of expressions with the columns:
End of explanation
df.head()
Explanation: Notice here that we treat column names as variables within the evaluated expression, and the result is what we would wish.
Assignment in DataFrame.eval()
In addition to the options just discussed, DataFrame.eval() also allows assignment to any column.
Let's use the DataFrame from before, which has columns 'A', 'B', and 'C':
End of explanation
df.eval('D = (A + B) / C', inplace=True)
df.head()
Explanation: We can use df.eval() to create a new column 'D' and assign to it a value computed from the other columns:
End of explanation
df.eval('D = (A - B) / C', inplace=True)
df.head()
Explanation: In the same way, any existing column can be modified:
End of explanation
column_mean = df.mean(1)
result1 = df['A'] + column_mean
result2 = df.eval('A + @column_mean')
np.allclose(result1, result2)
Explanation: Local variables in DataFrame.eval()
The DataFrame.eval() method supports an additional syntax that lets it work with local Python variables.
Consider the following:
End of explanation
result1 = df[(df.A < 0.5) & (df.B < 0.5)]
result2 = pd.eval('df[(df.A < 0.5) & (df.B < 0.5)]')
np.allclose(result1, result2)
Explanation: The @ character here marks a variable name rather than a column name, and lets you efficiently evaluate expressions involving the two "namespaces": the namespace of columns, and the namespace of Python objects.
Notice that this @ character is only supported by the DataFrame.eval() method, not by the pandas.eval() function, because the pandas.eval() function only has access to the one (Python) namespace.
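If you do need extra variables with the top-level function, one workaround (a sketch, not from the original text) is to pass them explicitly through the local_dict argument of pd.eval():
~~~
result3 = pd.eval("df.A + column_mean",
                  local_dict={'df': df, 'column_mean': column_mean})
np.allclose(result1, result3)
~~~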
DataFrame.query() Method
The DataFrame has another method based on evaluated strings, called the query() method.
Consider the following:
End of explanation
result2 = df.query('A < 0.5 and B < 0.5')
np.allclose(result1, result2)
Explanation: As with the example used in our discussion of DataFrame.eval(), this is an expression involving columns of the DataFrame.
It cannot be expressed using the DataFrame.eval() syntax, however!
Instead, for this type of filtering operation, you can use the query() method:
End of explanation
Cmean = df['C'].mean()
result1 = df[(df.A < Cmean) & (df.B < Cmean)]
result2 = df.query('A < @Cmean and B < @Cmean')
np.allclose(result1, result2)
Explanation: In addition to being a more efficient computation, compared to the masking expression this is much easier to read and understand.
Note that the query() method also accepts the @ flag to mark local variables:
End of explanation
x = df[(df.A < 0.5) & (df.B < 0.5)]
Explanation: Performance: When to Use These Functions
When considering whether to use these functions, there are two considerations: computation time and memory use.
Memory use is the most predictable aspect. As already mentioned, every compound expression involving NumPy arrays or Pandas DataFrames will result in implicit creation of temporary arrays:
For example, this:
End of explanation
tmp1 = df.A < 0.5
tmp2 = df.B < 0.5
tmp3 = tmp1 & tmp2
x = df[tmp3]
Explanation: Is roughly equivalent to this:
End of explanation
df.values.nbytes
Explanation: If the size of the temporary DataFrames is significant compared to your available system memory (typically several gigabytes) then it's a good idea to use an eval() or query() expression.
You can check the approximate size of your array in bytes using this:
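As an aside (not in the original text), pandas can also report a per-column breakdown, a sketch of which is:
~~~
df.memory_usage(deep=True).sum()  # per-column usage, summed, in bytes
~~~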
End of explanation |
1,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced
Step1: Interactivity Options
When running in an interactive Python session, PHOEBE updates all constraints and runs various checks after each command. Although this is convenient, it does take some time, and it can sometimes be advantageous to disable this to save computation time.
Interactive Checks
By default, interactive checks are enabled when PHOEBE is being run in an interactive session (either an interactive python, IPython, or Jupyter notebook session), but disabled when PHOEBE is run as a script directly from the console. When enabled, PHOEBE will re-run the system checks after every single change to the bundle, raising warnings via the logger as soon as they occur.
This default behavior can be changed via phoebe.interactive_checks_on() or phoebe.interactive_checks_off(). The current value can be accessed via phoebe.conf.interactive_checks.
Step2: If disabled, you can always manually run the checks via b.run_checks().
Step3: Interactive Constraints
By default, interactive constraints are always enabled in PHOEBE, unless explicitly disabled. Whenever a value is changed in the bundle that affects the value of a constrained value, that constraint is immediately executed and all applicable values updated. This ensures that all constrained values are "up-to-date".
If disabled, constraints are delayed and only executed when needed by PHOEBE (when calling run_compute, for example). This can save significant time, as each value that needs updating only needs to have its constraint executed once, instead of multiple times.
This default behavior can be changed via phoebe.interactive_constraints_on() or phoebe.interactive_constraints_off(). The current value can be accessed via phoebe.conf.interactive_constraints.
Let's first look at the default behavior with interactive constraints on.
Step4: Note that the mass has already updated, according to the constraint, when the value of the semi-major axes was changed. If we disable interactive constraints this will not be the case.
Step5: No need to worry though - all constraints will be run automatically before passing to the backend. If you need to access the value of a constrained parameter, you can explicitly ask for all delayed constraints to be executed via b.run_delayed_constraints().
Step6: Filtering Options
check_visible
By default, every time you call filter or set_value, PHOEBE checks to see if the current value is visible (meaning it is relevant given the value of other parameters). Although not terribly expensive, these checks can add up... so disabling these checks can save time. Note that these are automatically temporarily disabled during run_compute. If disabling these checks, be aware that changing the value of some parameters may have no effect on the resulting computations. You can always manually check the visibility/relevance of a parameter by calling parameter.is_visible.
This default behavior can be changed via phoebe.check_visible_on() or phoebe.check_visible_off().
Let's first look at the default behavior with check_visible on.
Step7: Now if we disable check_visible, we'll see the same thing as if we passed check_visible=False to any filter call.
Step8: Now the same filter is returning additional parameters. For example, ld_coeffs_source parameters were initially hidden because ld_mode is set to 'interp'. We can see the rules that are being followed
Step9: and can still manually check to see that it shouldn't be visible (isn't currently relevant given the value of ld_func)
Step10: check_default
Similarly, PHOEBE automatically excludes any parameter which is tagged with a '_default' tag. These parameters exist to provide default values when a new component or dataset are added to the bundle, but can usually be ignored, and so are excluded from any filter calls. Although not at all expensive, this too can be disabled at the settings level or by passing check_default=False to any filter call.
This default behavior can be changed via phoebe.check_default_on() or phoebe.check_default_off().
Step11: Passband Options
PHOEBE automatically fetches necessary tables from tables.phoebe-project.org. By default, only the necessary tables for each passband are fetched (except when calling download_passband manually) and the fits files are fetched uncompressed.
For more details, see the API docs on download_passband and update_passband as well as the passband updating tutorial.
The default values mentioned in the API docs for content and gzipped can be exposed via phoebe.get_download_passband_defaults and changed via phoebe.set_download_passband_defaults. Note that setting gzipped to True will minimize file storage for the passband files and will result in faster download speeds, but take significantly longer to load by PHOEBE as they have to be uncompressed each time they are loaded. If you have a large number of installed passbands, this could significantly slow importing PHOEBE. | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
import phoebe
b = phoebe.default_binary()
Explanation: Advanced: Optimizing Performance with PHOEBE
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
print(phoebe.conf.interactive_checks)
phoebe.interactive_checks_off()
print(phoebe.conf.interactive_checks)
Explanation: Interactivity Options
When running in an interactive Python session, PHOEBE updates all constraints and runs various checks after each command. Although this is convenient, it does take some time, and it can sometimes be advantageous to disable this to save computation time.
Interactive Checks
By default, interactive checks are enabled when PHOEBE is being run in an interactive session (either an interactive python, IPython, or Jupyter notebook session), but disabled when PHOEBE is run as a script directly from the console. When enabled, PHOEBE will re-run the system checks after every single change to the bundle, raising warnings via the logger as soon as they occur.
This default behavior can be changed via phoebe.interactive_checks_on() or phoebe.interactive_checks_off(). The current value can be accessed via phoebe.conf.interactive_checks.
End of explanation
print(b.run_checks())
b.set_value('requiv', component='primary', value=50)
print(b.run_checks())
Explanation: If disabled, you can always manually run the checks via b.run_checks().
End of explanation
print(phoebe.conf.interactive_constraints)
print(b.filter('mass', component='primary'))
b.set_value('sma@binary', 10)
print(b.filter('mass', component='primary'))
Explanation: Interactive Constraints
By default, interactive constraints are always enabled in PHOEBE, unless explicitly disabled. Whenever a value is changed in the bundle that affects the value of a constrained value, that constraint is immediately executed and all applicable values updated. This ensures that all constrained values are "up-to-date".
If disabled, constraints are delayed and only executed when needed by PHOEBE (when calling run_compute, for example). This can save significant time, as each value that needs updating only needs to have its constraint executed once, instead of multiple times.
This default behavior can be changed via phoebe.interactive_constraints_on() or phoebe.interactive_constraints_off(). The current value can be accessed via phoebe.conf.interactive_constraints.
Let's first look at the default behavior with interactive constraints on.
End of explanation
phoebe.interactive_constraints_off()
print(phoebe.conf.interactive_constraints)
print(b.filter('mass', component='primary'))
b.set_value('sma@binary', 15)
print(b.filter('mass', component='primary'))
Explanation: Note that the mass has already updated, according to the constraint, when the value of the semi-major axes was changed. If we disable interactive constraints this will not be the case.
End of explanation
b.run_delayed_constraints()
print(b.filter('mass', component='primary'))
phoebe.reset_settings()
Explanation: No need to worry though - all constraints will be run automatically before passing to the backend. If you need to access the value of a constrained parameter, you can explicitly ask for all delayed constraints to be executed via b.run_delayed_constraints().
End of explanation
b.add_dataset('lc')
print(b.get_dataset())
Explanation: Filtering Options
check_visible
By default, every time you call filter or set_value, PHOEBE checks to see if the current value is visible (meaning it is relevant given the value of other parameters). Although not terribly expensive, these checks can add up... so disabling these checks can save time. Note that these are automatically temporarily disabled during run_compute. If disabling these checks, be aware that changing the value of some parameters may have no effect on the resulting computations. You can always manually check the visibility/relevance of a parameter by calling parameter.is_visible.
This default behavior can be changed via phoebe.check_visible_on() or phoebe.check_visible_off().
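As a sketch of the per-call alternative (rather than toggling the global setting), the same thing should be achievable by passing check_visible=False directly to a filter-like call, for example:
~~~
print(b.get_dataset(check_visible=False))
~~~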
Let's first look at the default behavior with check_visible on.
End of explanation
phoebe.check_visible_off()
print(b.get_dataset())
Explanation: Now if we disable check_visible, we'll see the same thing as if we passed check_visible=False to any filter call.
End of explanation
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary').visible_if)
Explanation: Now the same filter is returning additional parameters. For example, ld_coeffs_source parameters were initially hidden because ld_mode is set to 'interp'. We can see the rules that are being followed:
End of explanation
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary').is_visible)
phoebe.reset_settings()
Explanation: and can still manually check to see that it shouldn't be visible (isn't currently relevant given the value of ld_func):
End of explanation
print(b.get_dataset())
print(b.get_dataset(check_default=False))
phoebe.check_default_off()
print(b.get_dataset())
phoebe.reset_settings()
Explanation: check_default
Similarly, PHOEBE automatically excludes any parameter which is tagged with a '_default' tag. These parameters exist to provide default values when a new component or dataset are added to the bundle, but can usually be ignored, and so are excluded from any filter calls. Although not at all expensive, this too can be disabled at the settings level or by passing check_default=False to any filter call.
This default behavior can be changed via phoebe.check_default_on() or phoebe.check_default_off().
End of explanation
phoebe.get_download_passband_defaults()
Explanation: Passband Options
PHOEBE automatically fetches necessary tables from tables.phoebe-project.org. By default, only the necessary tables for each passband are fetched (except when calling download_passband manually) and the fits files are fetched uncompressed.
For more details, see the API docs on download_passband and update_passband as well as the passband updating tutorial.
The default values mentioned in the API docs for content and gzipped can be exposed via phoebe.get_download_passband_defaults and changed via phoebe.set_download_passband_defaults. Note that setting gzipped to True will minimize file storage for the passband files and will result in faster download speeds, but such files take significantly longer for PHOEBE to load as they have to be uncompressed each time they are loaded. If you have a large number of installed passbands, this could significantly slow importing PHOEBE.
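A minimal sketch of changing these defaults (the particular argument values here are only for illustration):
~~~
phoebe.set_download_passband_defaults(content='all', gzipped=False)
phoebe.get_download_passband_defaults()
~~~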
End of explanation |
1,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loops
Loops are a way to repeatedly execute some code. Here's an example
Step1: The for loop specifies
- the variable name to use (in this case, planet)
- the set of values to loop over (in this case, planets)
You use the word "in" to link them together.
The object to the right of the "in" can be any object that supports iteration. Basically, if it can be thought of as a group of things, you can probably loop over it. In addition to lists, we can iterate over the elements of a tuple
Step2: You can even loop through each character in a string
Step3: range()
range() is a function that returns a sequence of numbers. It turns out to be very useful for writing loops.
For example, if we want to repeat some action 5 times
Step4: while loops
The other type of loop in Python is a while loop, which iterates until some condition is met
Step5: The argument of the while loop is evaluated as a boolean statement, and the loop is executed until the statement evaluates to False.
List comprehensions
List comprehensions are one of Python's most beloved and unique features. The easiest way to understand them is probably to just look at a few examples
Step6: Here's how we would do the same thing without a list comprehension
Step7: We can also add an if condition
Step8: (If you're familiar with SQL, you might think of this as being like a "WHERE" clause)
Here's an example of filtering with an if condition and applying some transformation to the loop variable
Step9: People usually write these on a single line, but you might find the structure clearer when it's split up over 3 lines
Step10: (Continuing the SQL analogy, you could think of these three lines as SELECT, FROM, and WHERE)
The expression on the left doesn't technically have to involve the loop variable (though it'd be pretty unusual for it not to). What do you think the expression below will evaluate to? Press the 'output' button to check.
Step12: List comprehensions combined with functions like min, max, and sum can lead to impressive one-line solutions for problems that would otherwise require several lines of code.
For example, compare the following two cells of code that do the same thing.
Step13: Here's a solution using a list comprehension
Step14: Much better, right?
Well if all we care about is minimizing the length of our code, this third solution is better still! | Python Code:
planets = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune']
for planet in planets:
print(planet, end=' ') # print all on same line
Explanation: Loops
Loops are a way to repeatedly execute some code. Here's an example:
End of explanation
multiplicands = (2, 2, 2, 3, 3, 5)
product = 1
for mult in multiplicands:
product = product * mult
product
Explanation: The for loop specifies
- the variable name to use (in this case, planet)
- the set of values to loop over (in this case, planets)
You use the word "in" to link them together.
The object to the right of the "in" can be any object that supports iteration. Basically, if it can be thought of as a group of things, you can probably loop over it. In addition to lists, we can iterate over the elements of a tuple:
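As a quick aside (not from the original text), dictionaries are iterable as well; looping over one yields its keys:
~~~
ages = {'Ann': 30, 'Bob': 25}
for name in ages:
    print(name, ages[name])
~~~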
End of explanation
s = 'steganograpHy is the practicE of conceaLing a file, message, image, or video within another fiLe, message, image, Or video.'
msg = ''
# print all the uppercase letters in s, one at a time
for char in s:
if char.isupper():
print(char, end='')
Explanation: You can even loop through each character in a string:
End of explanation
for i in range(5):
print("Doing important work. i =", i)
Explanation: range()
range() is a function that returns a sequence of numbers. It turns out to be very useful for writing loops.
For example, if we want to repeat some action 5 times:
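As a side note, range() also accepts start and step arguments:
~~~
list(range(5))        # [0, 1, 2, 3, 4]
list(range(2, 7))     # [2, 3, 4, 5, 6]
list(range(0, 10, 3)) # [0, 3, 6, 9]
~~~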
End of explanation
i = 0
while i < 10:
print(i, end=' ')
i += 1 # increase the value of i by 1
Explanation: while loops
The other type of loop in Python is a while loop, which iterates until some condition is met:
End of explanation
squares = [n**2 for n in range(10)]
squares
Explanation: The argument of the while loop is evaluated as a boolean statement, and the loop is executed until the statement evaluates to False.
List comprehensions
List comprehensions are one of Python's most beloved and unique features. The easiest way to understand them is probably to just look at a few examples:
End of explanation
squares = []
for n in range(10):
squares.append(n**2)
squares
Explanation: Here's how we would do the same thing without a list comprehension:
End of explanation
short_planets = [planet for planet in planets if len(planet) < 6]
short_planets
Explanation: We can also add an if condition:
End of explanation
# str.upper() returns an all-caps version of a string
loud_short_planets = [planet.upper() + '!' for planet in planets if len(planet) < 6]
loud_short_planets
Explanation: (If you're familiar with SQL, you might think of this as being like a "WHERE" clause)
Here's an example of filtering with an if condition and applying some transformation to the loop variable:
End of explanation
[
planet.upper() + '!'
for planet in planets
if len(planet) < 6
]
Explanation: People usually write these on a single line, but you might find the structure clearer when it's split up over 3 lines:
End of explanation
[32 for planet in planets]
Explanation: (Continuing the SQL analogy, you could think of these three lines as SELECT, FROM, and WHERE)
The expression on the left doesn't technically have to involve the loop variable (though it'd be pretty unusual for it not to). What do you think the expression below will evaluate to? Press the 'output' button to check.
End of explanation
def count_negatives(nums):
    """Return the number of negative numbers in the given list.

    >>> count_negatives([5, -1, -2, 0, 3])
    2
    """
n_negative = 0
for num in nums:
if num < 0:
n_negative = n_negative + 1
return n_negative
Explanation: List comprehensions combined with functions like min, max, and sum can lead to impressive one-line solutions for problems that would otherwise require several lines of code.
For example, compare the following two cells of code that do the same thing.
End of explanation
def count_negatives(nums):
return len([num for num in nums if num < 0])
Explanation: Here's a solution using a list comprehension:
End of explanation
def count_negatives(nums):
# Reminder: in the "booleans and conditionals" exercises, we learned about a quirk of
# Python where it calculates something like True + True + False + True to be equal to 3.
return sum([num < 0 for num in nums])
Explanation: Much better, right?
Well if all we care about is minimizing the length of our code, this third solution is better still!
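(A small aside: the square brackets are not even required; a generator expression gives the same count without building the intermediate list.)
~~~
def count_negatives(nums):
    return sum(num < 0 for num in nums)
~~~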
End of explanation |
1,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exceptions
What are exceptions?
We have already seen several times that errors occurring while a program runs lead to the program being aborted. This happens, for example, when we try to access a non-existent element of a list
Step1: Or when we try to divide a number by 0
Step2: Or when we try to open a file that does not exist
Step3: Exception types
When Python runs into a problem, it creates an exception object and terminates the program. If needed, however, we can react to such an exception in some way other than aborting the program.
To do so, we wrap the code that can raise an exception in a try ... except construct. The error-prone part of the code goes into the try block, followed by an except block that states how the program should react to a possible error.
Step4: What we did here, however, is very bad style
Step5: Here again are the three fragments with which we triggered errors at the beginning of this notebook
Step6: If we look closely, we notice that these are three different kinds of errors
Step7: If, on the other hand, we enter a non-number (e.g. 'abc'), the program is still aborted, because a ValueError is raised here which we do not catch explicitly.
Step8: Think about why the program is still aborted and try to solve the problem!
Alternatively, we can also use separate except blocks to react differently to different errors
Step9: If we want to execute a piece of code in any case, i.e. regardless of whether an error occurred or not, we can define a finally block. This makes sense, for example, when we want to release resources such as file handles.
~~~
try
Step10: We can take advantage of this hierarchy to handle related errors together. If we want to catch all ArithmeticError subtypes (OverflowError, ZeroDivisionError, FloatingPointError) at once, we check for this common base type in the except clause
Step11: The complete hierarchy of the built-in exceptions can be found here
Step12: In the code above, the ValueError occurs (if we enter e.g. 'abc') inside the function ask_for_int(), but it is caught in the main program. This can make the code unnecessarily complex, because while reading the code we always have to figure out where the error actually comes from, but on the other hand it enables centralized exception management.
Exception objects
When an exception occurs, Python creates an exception object which, as we have seen, is passed on. If needed, we can even inspect this object more closely.
Step13: Raising exceptions
If needed, we can even raise exceptions ourselves.
Step14: Defining your own exceptions
Sometimes it is very useful to define your own exceptions or whole exception hierarchies in order to be able to react specifically to such exceptions. | Python Code:
names = ['Otto', 'Hugo', 'Maria']
names[3]
Explanation: Exceptions
What are exceptions?
We have already seen several times that errors occurring while a program runs lead to the program being aborted. This happens, for example, when we try to access a non-existent element of a list:
End of explanation
user_input = 0
2335 / user_input
Explanation: Or when we try to divide a number by 0:
End of explanation
with open('hudriwudri.txt') as fh:
print(fh.read())
Explanation: Or when we try to open a file that does not exist:
End of explanation
divisor = int(input('Divisor: '))
try:
print(6543 / divisor)
except:
print('Bei der Division ist ein Fehler aufgetreten')
Explanation: Exception types
When Python runs into a problem, it creates an exception object and terminates the program. If needed, however, we can react to such an exception in some way other than aborting the program.
To do so, we wrap the code that can raise an exception in a try ... except construct. The error-prone part of the code goes into the try block, followed by an except block that states how the program should react to a possible error.
End of explanation
divisor = int(input('Divisor: '))
some_names = ['Otto', 'Anna']
try:
print(some_names[divisor])
print(6543 / divisor)
except:
print('Bei der Division ist ein Fehler aufgetreten')
Explanation: What we did here, however, is very bad style: we caught every kind of error or warning. This may also catch errors that we did not want to catch at all, and which might provide crucial hints for debugging. Here is a very contrived example to illustrate this:
End of explanation
names = ['Otto', 'Hugo', 'Maria']
names[3]
user_input = 0
2335 / user_input
with open('hudriwudri.txt') as fh:
print(fh.read())
Explanation: Here again are the three fragments with which we triggered errors at the beginning of this notebook:
End of explanation
divisor = int(input('Divisor: '))
try:
print(6543 / divisor)
except ZeroDivisionError:
print('Division durch 0 ist nicht erlaubt.')
Explanation: If we look closely, we notice that these are three different kinds of errors:
IndexError (names[3])
ZeroDivisionError (2335 / user_input)
FileNotFoundError (open('hudriwudri.txt'))
Depending on the kind of error that occurred, Python therefore creates a corresponding exception object. All of these errors are derived from the most general exception object (Exception) and are thus special cases of this exception. These specialized exception objects have the advantage that we can react differently depending on the error type. If we only want to catch the division by 0, the code looks like this:
End of explanation
print(6543 / divisor)
try:
divisor = int(input('Divisor: '))
except (ZeroDivisionError, ValueError):
print('Bei der Division ist ein Fehler aufgetreten.')
Explanation: If, on the other hand, we enter a non-number (e.g. 'abc'), the program is still aborted, because a ValueError is raised here which we do not catch explicitly.
End of explanation
try:
divisor = int(input('Divisor: '))
print(6543 / divisor)
except ZeroDivisionError:
print('Division durch 0 ist nicht erlaubt.')
except ValueError:
print('Sie müssen eine Zahl eingeben!')
Explanation: Think about why the program is still aborted and try to solve the problem!
Alternatively, we can also use separate except blocks to react differently to different errors:
End of explanation
import inspect
inspect.getmro(ZeroDivisionError)
Explanation: If we want to execute a piece of code in any case, i.e. regardless of whether an error occurred or not, we can define a finally block. This makes sense, for example, when we want to release resources such as file handles.
~~~
try:
f = open('data.txt')
# make some computations on data
except ZeroDivisionError:
print('Warning: Division by Zero in data.txt')
finally:
f.close()
~~~
Exceptions form a hierarchy
We have already noticed above that certain exceptions are special cases of other exceptions. The most general exception is Exception, from which all other exceptions are derived. A ZeroDivisionError is a special case of ArithmeticError, which in turn is a special case of Exception.
We can make this hierarchy of exceptions visible like this (this is just for illustration and does not need to be understood):
End of explanation
try:
divisor = int(input('Divisor: '))
print(6543 / divisor)
except ArithmeticError:
print('Kann mit der eingegebenen Zahl nicht rechnen.')
Explanation: We can take advantage of this hierarchy to handle related errors together. If we want to catch all ArithmeticError subtypes (OverflowError, ZeroDivisionError, FloatingPointError) at once, we check for this common base type in the except clause:
End of explanation
def ask_for_int(msg):
divisor = input('{}: '.format(msg))
return int(divisor)
try:
print(6543 / ask_for_int('Divisor eingeben'))
except ValueError:
print('Ich kann nur durch eine Zahl dividieren!')
Explanation: The complete hierarchy of the built-in exceptions can be found here: https://docs.python.org/3/library/exceptions.html#exception-hierarchy
Exceptions travel up the stack
The big advantage of exceptions is that we do not necessarily have to catch them where they occur, because they are passed up through the program hierarchy until they are handled somewhere (or not, which then leads to the program aborting).
End of explanation
def ask_for_int(msg):
divisor = input('{}: '.format(msg))
return int(divisor)
try:
print(6543 / ask_for_int('Divisor eingeben'))
except ValueError as e:
print('Ich kann nur durch eine Zahl dividieren!')
print('Das Problem war: {}'.format(e.args))
Explanation: In the code above, the ValueError occurs (if we enter e.g. 'abc') inside the function ask_for_int(), but it is caught in the main program. This can make the code unnecessarily complex, because while reading the code we always have to figure out where the error actually comes from, but on the other hand it enables centralized exception management.
Exception objects
When an exception occurs, Python creates an exception object which, as we have seen, is passed on. If needed, we can even inspect this object more closely.
End of explanation
def ask_for_divisor():
divisor = input('Divisor eingeben: ')
if divisor == '0':
raise ValueError('Divisor must not be 0!')
return int(divisor)
try:
print(6543 / ask_for_divisor())
except ValueError:
print('Ungültige Eingabe')
Explanation: Raising exceptions
If needed, we can even raise exceptions ourselves.
End of explanation
class MyAppException(Exception): pass
class MyAppWarning(MyAppException): pass
class MyAppError(MyAppException): pass
class GradeValueException(MyAppError): pass
Explanation: Defining your own exceptions
Sometimes it is very useful to define your own exceptions or whole exception hierarchies in order to be able to react specifically to such exceptions.
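A minimal usage sketch for the hierarchy defined in the cell above: raise one of the specific exceptions and catch it via its more general base class.
~~~
try:
    raise GradeValueException('grade must be between 1 and 5')
except MyAppError as e:
    print('application error:', e)
~~~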
End of explanation |
1,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Visualization with Python
The following notebook serves as an introduction to data visualization with Python for the course "Data Mining".
For any comments or suggestions you can contact charlotte[dot]laclau[at]univ-grenoble-alpes[dot]fr or parantapa[dot]goswami[at]viseo[dot]com
Introduction - Why Data Visualization?
Data visualization (DataViz) is an essential tool for exploring and finding insight in the data. Before jumping to complex machine learning or multivariate models, one should always take a first look at the data through simple visualization techniques. Indeed, visualization provides a unique perspective on the dataset that might in some cases allow you to detect potential challenges or specificities in your data that should be taken into account for future and in-depth analysis.
Data can be visualized in lots of different ways depending on the nature of the features of the data (continuous, discrete or categorical). Also, levels of visualization depends on the number of dimensions that is needed to be represented
Step1: Note
Step2: Question
Step3: Remark
Step4: <font color=red>Warning</font>
Step5: Density Plot
The Density Plot gives interesting information about the type of probability distribution that you can assume for the feature.
Step6: Question
Step7: Question
Step8: Subplots
For the box plot and histogram, to visualize each feature in its own scale, it is better to draw one plot per feature. All these plots can be arranged in a grid.
Task
Step9: Visualizing discrete features
Bar Chart or Bar Graph presents discrete data with rectangular bars with heights proportional to the values that they represent.
In this exercise, we will visualize the "Pregnancy" feature.
Step10: Let us now get some other information from the "Pregnancy" feature. We will now visualize the distribution of number of females for different "Pregnancy" values. For that
Step11: Visualizing categorical features
Most common way to visualize the distribution of categorical feature is Pie Chart. It is a circular statistical graphic which is divided into slices to illustrate numerical proportion of different posible values of the categorical feature. The arc length of each slice is proportional to the quantity it represents.
As we are visualizing the count distribution, the first step is to use DataFrame.value_counts()
Step12: Remark
Step13: Questions
Step14: Note
Step15: Question
Step16: Visualizing continuous vs. categorical (also discrete) features
In order to cross continuous and categorical features, you can again use box plot. It allows you to visualize distribution of a continuous variable for each possible value of the categorical variable. One common application is to visualize the output of a clustering algorithm.
Here, we will visualize box plot between continuous "BodyMass" and categorical "Class" features. We will use seaborn boxplot module.
Step17: Violin Plot
The Violin Plot is similar to box plots, except that they also show the probability density of the data at different values. Like box plots, violin plots also include a marker for the median of the data and a box indicating the interquartile range. Violin plots are used to represent comparison of a variable distribution (or sample distribution) across different categories.
Step18: Advanced
Step19: Visualizing 3D Data
3D plots lets you visualize 3 features together. Like, 2D plots, 3D plots are used to analyze potential dependencies in the data (colinearity, linearity etc.).
Here also, based on the nature of the features you can choose the type of visualization. However, in this exercise we will explore to visualize 3 continuous features together.
In this part, we will use Axes3D module from matplotlib library. We will explore 3D scatter plot. For other kinds of 3D plots you can refer here.
Import Libraries
Step20: 3D Scatter Plot
You will write code to generate a 3D scatter plot for 3 continuous features namely "BodyMass", "Fold" and "Glucose". Note that the created plot is an interactive one. You can rotate the plot to visualize it from different angles.
Step21: Visualizing Multidimensional Data
For visualizing data with more than 3 features, we have to rely on additional tools. One such tool is Principal Component Analysis (PCA).
PCA uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called Principal Components (PC). The number of distinct principal components is equal to the smaller of the number of original variables or the number of observations minus one.
We will do PCA to transform our data having 7 numerical features into 2 principal components. We will use sklearn.decomposition.PCA package.
Step22: As you can discover from PCA documention (link above), PCA returns principal components as a numpy array. For the ease of plotting with seaborn, we will create a pandas DataFrame from the principal components.
use pandas.DataFrame() to convert numpy array
mark the columns as "PC1" and "PC2"
update the dateframe by adding the "Class" column of our original dataframe using DataFrame.join()
Step23: Now, we will create a scatter plot to visualize our 7D data transformed into 2 principal components.
For creating scatter plots using seaborn we will use lmplot module with fit_reg=False.
Step24: Now, you have to do PCA on the data as before but for 3 principal components. Then plot 3 principal components in a 3D scatter plot.
Hint | Python Code:
# Import all three libraries
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# For displaying the plots inside Notebook
%matplotlib inline
Explanation: Data Visualization with Python
The following notebook serves as an introduction to data visualization with Python for the course "Data Mining".
For any comments or suggestions you can contact charlotte[dot]laclau[at]univ-grenoble-alpes[dot]fr or parantapa[dot]goswami[at]viseo[dot]com
Introduction - Why Data Visualization?
Data visualization (DataViz) is an essential tool for exploring and finding insight in the data. Before jumping to complex machine learning or multivariate models, one should always take a first look at the data through simple visualization techniques. Indeed, visualization provides a unique perspective on the dataset that might in some cases allow you to detect potential challenges or specificities in your data that should be taken into account for future and in-depth analysis.
Data can be visualized in lots of different ways depending on the nature of the features of the data (continuous, discrete or categorical). Also, the level of visualization depends on the number of dimensions that needs to be represented: univariate (1D), bivariate (2D) or multivariate (ND).
Objective of the session
The goal of this session is to discover how to make 1D, 2D, 3D and even multidimensional data visualizations with Python. We will see different methods, which can help you in real life to choose an appropriate visualization best suited for the data at hand.
We will explore three different libraries:
* Matplotlib (very similar to Matlab's syntax): Classic Python library for data visualization.
* Pandas: Its main purpose is to handle data frames. It also provides basic visualization modules.
* Seaborn: It provides a high-level interface to draw statistical graphics.
Basics: Import Libraries
End of explanation
# We start by importing the data using pandas
# Hint: use "read_csv" method, Note that comma (",") is the field separator, and we have no "header"
df = pd.read_csv('pima.txt', sep=",", header=None)
# We name the columns based on above features
df.columns = ["Pregnancy","Glucose","BloodPress", "Fold", "Insulin","BodyMass",'Diabetes','Age','Class']
# We sneak peek into the data
# Hint: use dataframe "head" method with "n" parameter
df.head(n=5)
Explanation: Note: Both the pandas visualisation modules and seaborn are based on matplotlib, therefore a lot of the commands related to the customization of plots can be found in tutorials on matplotlib.
Pima Indian Diabetes dataset
Description of the data
The Pima Indian Diabetes Dataset consists of 768 females, who are at least 21 years old, of Pima Indian heritage. They are described by following 8 features which take numerical values, and the class:
1. Number of times pregnant
2. Plasma glucose concentration a 2 hours in an oral glucose tolerance test
3. Diastolic blood pressure (mm Hg)
4. Triceps skin fold thickness (mm)
5. 2-Hour serum insulin (mu U/ml)
6. Body mass index (weight in kg/(height in m)^2)
7. Diabetes pedigree function
8. Age (years)
9. Class variable: 0 or 1
Import the data
End of explanation
# Write code to draw a Box Plot of the "BodyMass" feature
# Hint 1: use "DataFrame.boxplot" from pandas on the dataframe df
# Hint 2: choose the column properly
# Hint 3: you can control "grid" parameter as True or False
df.boxplot(column = 'BodyMass', grid=False)
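# Optional cross-check (an aside, not part of the original exercise): describe()
# reports the min, max, quartiles and median that the box plot draws.
df['BodyMass'].describe()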
Explanation: Question: for each of the 8 features and the class, define their nature
Continuous:
Discrete:
Categorical:
Visualizing 1D Data
1D plots are a good way to detect outliers, level of variability in the features etc.
Here is a non-exhaustive list of possible methods:
For continuous features: Box Plot, Histogram, Density Representation
For discrete features: Bar Chart, Dot Chart
For categorical features: Pie Chart is most common. There exist a lot of newer representations, but in the end they all provide the exact same information.
Visualizing continuous features
Box Plot
The Box Plot is an interesting tool for data visualisation as it conveys several statistics about the feature: the minimum and maximum values, the first and third quartiles (bottom and top lines of the box), the median value (middle line in the box) and the range (dispersion).
We will use "BodyMass" as the example here.
End of explanation
# Write code to draw a Histogram of the "BodyMass" feature
# Hint 1: use "DataFrame.hist" from pandas on the dataframe df
# Hint 2: choose the column properly
# Hint 3: you can control "grid" parameter as True or False
# Hint 4: for this plot choose "bins" as 10, "alpha" as 0.5 and "ec" as "black"
df.hist(column = 'BodyMass', grid=False, bins=10, alpha=0.5, ec='black')
Explanation: Remark: The box plot highlights the presence of outliers (circles on the figure). One of the individuals has a BodyMass of 0, which is impossible. In this case, 0 is the code for missing values.
Histogram
The histogram groups data into intervals and is good to look at the distribution of the values.
End of explanation
# Write code to draw a histogram for "BodyMass" feature with 3 bins
df.hist(column = 'BodyMass', grid=False, bins=3, alpha=0.5, ec='black')
Explanation: <font color=red>Warning</font>: The number of bins/intervals that you choose can strongly impact the representation. To check it, change the value in the option bins to 3 for instance.
End of explanation
# Write code to draw a Density Plot of the "BodyMass" feature
# Hint: use "DataFrame.plot.density" from pandas on the dataframe df
df['BodyMass'].plot.density()
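# Optional variant (an assumption, not required by the exercise; assumes a reasonably
# recent matplotlib): overlay the density curve on a normalized histogram to compare
# the two views in one figure.
plt.figure()
ax = df['BodyMass'].plot.hist(bins=10, density=True, alpha=0.5, ec='black')
df['BodyMass'].plot.density(ax=ax)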
Explanation: Density Plot
The Density Plot gives interesting information about the type of probability distribution that you can assume for the feature.
End of explanation
# Write code to draw Box Plot for "BodyMass" and "Glucose" together
# Hint: you can pass a list of features for "column" parameter
df.boxplot(column = ['BodyMass','Glucose'], grid=False)
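# Optional workaround (an assumption, not asked for by the exercise): standardizing
# the two features first puts them on a comparable scale before the joint box plot.
plt.figure()
cols = ['BodyMass', 'Glucose']
((df[cols] - df[cols].mean()) / df[cols].std()).boxplot(grid=False)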
Explanation: Question: Draw box plots of "BodyMass" and "Glucose" together. Why is the visualization misleading?
End of explanation
# Write code to draw Density Plot for all 4 continuous features together
# Hint: you can filter dataframe by a list of columns and then use plot.density
df[['BodyMass','Glucose','Fold','BloodPress']].plot.density()
Explanation: Question: Draw density plots for all continuous features. Why does this visualization not have the above problem?
End of explanation
# Write code to create subplots to vizualize four continous features
# Hint: use plt.subplots() to build a 2 by 2 grid. You can adjust it using "nrows" and "ncols"
fig, axes = plt.subplots(nrows=2, ncols=2)
df.hist('BodyMass', bins=10, grid=False, alpha=0.5, ec='black', ax = axes[0, 0])
df.hist('Glucose', bins=10, grid=False, alpha=0.5, ec='black', ax = axes[0, 1])
df.hist('Fold', bins=10, grid=False, alpha=0.5, ec='black', ax = axes[1, 0])
df.hist('BloodPress', bins=10, grid=False, alpha=0.5, ec='black', ax = axes[1, 1])
# For a neat display of plots
plt.tight_layout()
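# Optional shortcut (an assumption, not part of the exercise): pandas can lay out
# the four histograms itself when given a list of columns.
df.hist(column=['BodyMass', 'Glucose', 'Fold', 'BloodPress'], bins=10, grid=False, alpha=0.5, ec='black')
plt.tight_layout()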
Explanation: Subplots
For the box plot and histogram, to visualize each feature in its own scale, it is better to draw one plot per feature. All these plots can be arranged in a grid.
Task: See the usage of plt.subplot(). Then draw:
1. One figure containing box plots of all 4 continuous features
2. One figure containing histograms of all 4 continuous features
Note: You can also play with basic customization for each figure (labeling, title, colors etc.)
End of explanation
# Write code to create a Bar Chart of the "Pregnancy" feature.
df.Pregnancy.plot.bar()
# For a neat display of plots: plating with the X-tick labels
_ = plt.xticks(list(df.index)[::100], list(df.index)[::100])
Explanation: Visualizing discrete features
Bar Chart or Bar Graph presents discrete data with rectangular bars with heights proportional to the values that they represent.
In this exerise, we will visualize "Pregnancy" feature.
End of explanation
# Step 1: Write code to generate the count distribution
df["Pregnancy"].value_counts()
# Step 2: Write code to create a Bar Chart for the count distribution
df["Pregnancy"].value_counts().plot.bar()
Explanation: Let us now get some other information from the "Pregnancy" feature. We will now visualize the distribution of number of females for different "Pregnancy" values. For that:
1. First count the number of samples for each possible "Pregnancy" value (hint: DataFrame.value_counts())
2. Plot the distribution using Bar Chart
End of explanation
# Write code to create a Pie Chart of the "Class" feature.
# Hint 1: plot the count distribution, NOT the data itself
# Hint 2: use plot.pie() on the count distrubution
# Hint 3: use autopct="%1.1f%%" to display percentage values and following colors
colors = ['gold', 'lightcoral']
df['Class'].value_counts().plot.pie(autopct='%1.1f%%', colors=colors)
# For a neat display of the plot
_ = plt.axis('equal')
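# Optional numeric view (an assumption, not part of the exercise): the same class
# proportions as fractions, useful to quantify the imbalance shown by the pie chart.
df['Class'].value_counts(normalize=True)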
Explanation: Visualizing categorical features
The most common way to visualize the distribution of a categorical feature is the Pie Chart. It is a circular statistical graphic divided into slices to illustrate the numerical proportion of the different possible values of the categorical feature. The arc length of each slice is proportional to the quantity it represents.
As we are visualizing the count distribution, the first step is to use DataFrame.value_counts()
End of explanation
# Write code to create Scatter Plot between "BodyMass" and "Fold" features
# Hint: use "DataFrame.plot.scatter()" on our dataframe df, and mention the "x" and "y" axis features
df.plot.scatter(x="BodyMass", y="Fold")
Explanation: Remark: Pie charts are very effective for visualizing the distribution of classes in the training data. They help to discover whether there exists a strong imbalance of classes in the data.
<font color=red>Warning</font>: Pie charts cannot show more than a few values as the slices become too small. This makes them unsuitable for use with categorical features with a larger number of possible values.
Visualizing 2D Data
2D plots (or multi-D plots in general) are important to detect potential dependencies in the data (collinearity, linearity etc.).
Again, the nature of the features will guide you to choose the good representation.
* 1 continuous vs. 1 continuous: Scatter Plot, Pair Plot
* 1 continuous vs. 1 categorical: Box Plot (yes again!), Violin Plot (very similar to boxplot)
* 1 categorical vs. 1 categorical: Heatmap (to visualize counts per category or mean per category)
Visualizing continuous vs. continuous features
Scatter Plot
Scatter Plot is used to display values for typically two variables for a set of data on Cartesian coordinates. The data are displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis. A scatter plot can suggest various kinds of correlations between features.
Let's use "BodyMass" and "Fold" features together.
End of explanation
# Write code to create a jointplot using "BodyMass" and "Fold" features
# Hint 1: mention the "x" and "y" axis features, and out dataframe df as "data"
# Hint 2: "size" parameter controls the size of the plot. Here try size=6
sns.jointplot(x="BodyMass", y="Fold", data=df, size=6)
Explanation: Questions:
1. Can you detect the missing values (coded as 0)?
2. Can you visualize any correlation between "BodyMass" and "Fold"?
Remark: Pandas plot module is very useful for basic visualization techniques (more details here).
Now we will explore the seaborn library to create more advanced visualizations.
First, we will see seaborn jointplot(). It shows bivariate scatter plots and univariate histograms in the same figure.
End of explanation
# Write code to create a seaborn pairplot using all 4 continuous features
# Hint 1: use our dataframe df
# Hint 2: give the list of features to "vars" parameters.
# Hint 3: use markers=["o", "s"] and palette="husl" for better display
sns.pairplot(df, vars=["BodyMass","Fold", "Glucose", "Diabetes"])
Explanation: Note: The legend refers to a correlation test (Pearson $\rho$) which indicates a significant correlation between these features (p-value below .05).
Question: Does the Pearson $\rho$ calculated correspond to your interpretation of correlation from the previous scatter plot?
Pair Plot
Pair Plots are useful for more than 2 continuous variables. It is an extension of scatter plot. It shows the bivariate scatter plots between each pair of features. By doing so, it allows to avoid the use of subplots.
End of explanation
# Write code to create a seaborn pairplot using all 4 continuous features and the "Class"
# Hint: use "hue" option with the "Class" variable
sns.pairplot(df, hue='Class', vars=["BodyMass","Fold", "Glucose", "Diabetes"])
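# Optional complement (an assumption, not part of the exercise): a correlation-matrix
# heatmap summarizes the pairwise dependencies shown in the pair plot.
plt.figure()
sns.heatmap(df[["BodyMass", "Fold", "Glucose", "Diabetes"]].corr(), annot=True)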
Explanation: Question: Can you explain the nature of the diagonal plots?
Note: It is possible to project the class onto the pair plot (one color for each class) using the option hue in the pairplot function.
End of explanation
# Write code to create a Box Plot between "BodyMass" and "Class"
# Hint: mention the "x" and "y" axis features, and out dataframe df as "data"
sns.boxplot(x="Class", y="BodyMass", data=df)
Explanation: Visualizing continuous vs. categorical (also discrete) features
In order to cross continuous and categorical features, you can again use box plot. It allows you to visualize distribution of a continuous variable for each possible value of the categorical variable. One common application is to visualize the output of a clustering algorithm.
Here, we will visualize box plot between continuous "BodyMass" and categorical "Class" features. We will use seaborn boxplot module.
End of explanation
# Write code to create a Violin Plot between "BodyMass" and "Class"
# Hint: mention the "x" and "y" axis features, and out dataframe df as "data"
sns.violinplot(x="Class", y="BodyMass", data=df)
Explanation: Violin Plot
Violin Plots are similar to box plots, except that they also show the probability density of the data at different values. Like box plots, violin plots include a marker for the median of the data and a box indicating the interquartile range. Violin plots are used to compare the distribution of a variable (or sample distribution) across different categories.
End of explanation
# Write code to create a Heat Map using the above steps
df2 = df[["Pregnancy", "Class"]].groupby(["Pregnancy", "Class"]).size().reset_index(name="count") # TO DELETE
sns.heatmap(df2.pivot("Pregnancy", "Class", "count"))
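# Optional shortcut (an assumption, equivalent result): pandas.crosstab builds the
# same Pregnancy x Class contingency table in a single call.
plt.figure()
sns.heatmap(pd.crosstab(df["Pregnancy"], df["Class"]))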
Explanation: Advanced: Visualizing two categorical (also discrete) features
Heat Map
A Heat Map is a two-dimensional representation of data in which values are represented by colors. A simple heat map provides an immediate visual summary of information.
In this exercise, we will use "Pregnancy" and "Class" features:
1. Use only "Pregnancy" and "Class" columns: you may create a new dataframe on which we will work
2. We will use groupby to group the new two column dataframe based on both "Pregnancy" and "Class" features
3. On top of that use size() function to get the count of every possible pair of values of "Pregnancy" and "Class"
4. Then the new dataframe is reindexed using reset_index() with name="count" argument to set a name for the count column
5. A dataframe pivot table is generated using all three columns in the following order: "Pregnancy", "Class", "count"
6. Finally, this pivot table is used to generate the seaborn.heatmap
Note: This is an advanced piece of code, so you may need to consult different resources before you get it right. Do not hesitate to ask for help.
End of explanation
from mpl_toolkits.mplot3d import Axes3D
# For interactive 3D plots
%matplotlib notebook
Explanation: Visualizing 3D Data
3D plots let you visualize 3 features together. Like 2D plots, 3D plots are used to analyze potential dependencies in the data (collinearity, linearity etc.).
Here also, based on the nature of the features you can choose the type of visualization. However, in this exercise we will explore to visualize 3 continuous features together.
In this part, we will use Axes3D module from matplotlib library. We will explore 3D scatter plot. For other kinds of 3D plots you can refer here.
Import Libraries
End of explanation
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Path3DCollection
%matplotlib notebook
# Write code to create a 3D Scatter Plot for "BodyMass", "Fold" and "Glucose" features
# Hint 1: follow the basic steps mentioned in the link above.
# Hint 2: pass three desired columns of our dataframe as "xs", "ys" and "zs" in the scatter plot function
fig_scatter = plt.figure()
ax = fig_scatter.add_subplot(111, projection='3d')
ax.scatter(df["BodyMass"], df["Fold"], df["Glucose"])
# Write code to display the feature names as axes labels
# Hint: set_xlabel etc. methods to set the labels
ax.set_xlabel("BodyMass")
ax.set_ylabel("Fold")
ax.set_zlabel("Glucose")
Explanation: 3D Scatter Plot
You will write code to generate a 3D scatter plot for 3 continuous features namely "BodyMass", "Fold" and "Glucose". Note that the created plot is an interactive one. You can rotate the plot to visualize it from different angles.
End of explanation
# Write code to import libraries
from sklearn.decomposition import PCA
# We will use following columns of our dataframe:
columns_pca = ["Pregnancy","Glucose","BloodPress", "Fold", "Insulin","BodyMass",'Diabetes','Age']
# Write code to fit a PCA with the dataframe using above columns.
# Hint 1: first create a PCA instance with "n_components=2"
# as we are atttempting to generate 2 principal components.
# Hint 2: fit_transform PCA with the required dataframe
pca2 = PCA(n_components = 2)
array2PC = pca2.fit_transform(df[columns_pca])
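# Optional check (an assumption, not part of the exercise): explained_variance_ratio_
# shows how much of the total variance the two principal components retain.
print(pca2.explained_variance_ratio_)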
Explanation: Visualizing Multidimensional Data
For visualizing data with more than 3 features, we have to rely on additional tools. One such tool is Principal Component Analysis (PCA).
PCA uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called Principal Components (PC). The number of distinct principal components is equal to the smaller of the number of original variables or the number of observations minus one.
We will do PCA to transform our data having 8 numerical features into 2 principal components. We will use the sklearn.decomposition.PCA package.
End of explanation
# Write code to convert array2PC to a DataFrame with columns "PC1" and "PC2"
df2PC = pd.DataFrame(array2PC, columns=["PC1", "PC2"])
# Write code to update df2PC by appending "Class" column from orginal dataframe df
# Hint: using "join" on df2PC
df2PC = df2PC.join(df["Class"])
Explanation: As you can discover from the PCA documentation (link above), PCA returns principal components as a numpy array. For the ease of plotting with seaborn, we will create a pandas DataFrame from the principal components.
use pandas.DataFrame() to convert numpy array
mark the columns as "PC1" and "PC2"
update the dataframe by adding the "Class" column of our original dataframe using DataFrame.join()
End of explanation
# For displaying the plots inside Notebook
%matplotlib inline
# Write code to create scatter plot for 2 PCs.
# Hint 1; use seaborn.lmplot and set fit_reg=False
# Hint 2: use hue option to visualize the "Class" labels in the plot
sns.lmplot("PC1", "PC2", df2PC, hue="Class", fit_reg=False)
Explanation: Now, we will create a scatter plot to visualize our 8D data transformed into 2 principal components.
For creating scatter plots using seaborn we will use lmplot module with fit_reg=False.
End of explanation
# Write code to do PCA with 3 principal components on the dataframe with columns_pca
pca3 = PCA(n_components = 3)
array3PC = pca3.fit_transform(df[columns_pca])
df3PC = pd.DataFrame(array3PC, columns=["PC1", "PC2", "PC3"])
df3PC = df3PC.join(df["Class"])
fig_pca = plt.figure()
ax = fig_pca.add_subplot(111, projection='3d')
ax.scatter(df3PC["PC1"], df3PC["PC2"], df3PC["PC3"], c=df3PC["Class"])
Explanation: Now, you have to do PCA on the data as before but for 3 principal components. Then plot 3 principal components in a 3D scatter plot.
Hint: The hue equivalent for 3D scatter plot is c, and you have to pass the entire "Class" column, not just the name.
End of explanation |
1,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
1,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cell Cycle genes
Using Gene Ontologies (GO), create an up-to-date list of all human protein-coding genes that are know to be associated with cell cycle.
1. Download Ontologies, if necessary
Step1: 2. Download Associations, if necessary
Step2: 3. Read associations
Normally, when reading associations, GeneID2GOs are returned. We get the reverse, GO2GeneIDs, by adding the key-word arg, "go2geneids=True" to the call to read_ncbi_gene2go.
Step3: 4. Initialize Gene-Search Helper
Step4: 5. Find human all genes related to "cell cycle"
5a. Prepare "cell cycle" text searches
We will need to search for both cell cycle and cell cycle-independent. Those GOs that contain the text cell cycle-independent are specifically not related to cell cycle and must be removed from our list of cell cycle GO terms.
Step5: 5b. Find NCBI Entrez GeneIDs related to "cell cycle"
Step6: 6. Print the "cell cycle" protein-coding gene Symbols | Python Code:
# Get http://geneontology.org/ontology/go-basic.obo
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
Explanation: Cell Cycle genes
Using Gene Ontologies (GO), create an up-to-date list of all human protein-coding genes that are know to be associated with cell cycle.
1. Download Ontologies, if necessary
End of explanation
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
gene2go = download_ncbi_associations()
Explanation: 2. Download Associations, if necessary
End of explanation
from goatools.associations import read_ncbi_gene2go
go2geneids_human = read_ncbi_gene2go("gene2go", taxids=[9606], go2geneids=True)
print("{N} GO terms associated with human NCBI Entrez GeneIDs".format(N=len(go2geneids_human)))
Explanation: 3. Read associations
Normally, when reading associations, GeneID2GOs are returned. We get the reverse, GO2GeneIDs, by adding the key-word arg, "go2geneids=True" to the call to read_ncbi_gene2go.
End of explanation
from goatools.go_search import GoSearch
srchhelp = GoSearch("go-basic.obo", go2items=go2geneids_human)
Explanation: 4. Initialize Gene-Search Helper
End of explanation
import re
# Compile search pattern for 'cell cycle'
cell_cycle_all = re.compile(r'cell cycle', flags=re.IGNORECASE)
cell_cycle_not = re.compile(r'cell cycle.independent', flags=re.IGNORECASE)
Explanation: 5. Find human all genes related to "cell cycle"
5a. Prepare "cell cycle" text searches
We will need to search for both cell cycle and cell cycle-independent. Those GOs that contain the text cell cycle-independent are specifically not related to cell cycle and must be removed from our list of cell cycle GO terms.
End of explanation
# Find ALL GOs and GeneIDs associated with 'cell cycle'.
# Details of search are written to a log file
fout_allgos = "cell_cycle_gos_human.log"
with open(fout_allgos, "w") as log:
# Search for 'cell cycle' in GO terms
gos_cc_all = srchhelp.get_matching_gos(cell_cycle_all, prt=log)
# Find any GOs matching 'cell cycle-independent' (e.g., "lysosome")
gos_no_cc = srchhelp.get_matching_gos(cell_cycle_not, gos=gos_cc_all, prt=log)
# Remove GO terms that are not "cell cycle" GOs
gos = gos_cc_all.difference(gos_no_cc)
# Add children GOs of cell cycle GOs
gos_all = srchhelp.add_children_gos(gos)
# Get Entrez GeneIDs for cell cycle GOs
geneids = srchhelp.get_items(gos_all)
print("{N} human NCBI Entrez GeneIDs related to 'cell cycle' found.".format(N=len(geneids)))
Explanation: 5b. Find NCBI Entrez GeneIDs related to "cell cycle"
End of explanation
from goatools.test_data.genes_NCBI_9606_ProteinCoding import GeneID2nt
for geneid in geneids: # geneids associated with cell-cycle
nt = GeneID2nt.get(geneid, None)
if nt is not None:
print("{Symbol:<10} {desc}".format(
Symbol = nt.Symbol,
desc = nt.description))
Explanation: 6. Print the "cell cycle" protein-coding gene Symbols
End of explanation |
1,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SC 1 through 5 Model Comparisons
Previous analyses have been pairwise classifications (linear vs. lineage, early/late vs split/coalescence, rectangular vs. square neighbor models). Here, we perform a multiclass analysis, and compare the performance of GBM with random forest classifiers.
Step1: NOTE
Step2: Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
Step3: First Classifier
We're going to be using a gradient boosted classifier, which has some of best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
Step4: Finding Optimal Hyperparameters
Step5: NOTE
Step6: Interestingly, there seems to be a vanishingly small probability that the PFG seriation is the result of a pure nearest neighbor model, although the square space seems to fare marginally better than the long, thin space. But overwhelmingly, the predictive weight is on the lineage splitting model, which makes sense given Carl Lipo's dissertation work. Less predictive probability comes from the complete graph/fully connected model, although it's still about 0.223, so there's clearly something about densely connected graphs that resonates in the PFG situation (and the lineage splitting model is essentially complete graphs within each lineage too).
Clearly, however, we need a larger catalog of model classes and variants from which to select. That's next on the todo list.
PFG Data Analysis
Step7: Dimensionality Reduction
In this set of analyses, I have the sense that we have good discrimination between models, but that the PFG empirical data set is possibly outside the border of any of the network models and thus the train/test split is crucial in determining how it classifies. I'm wondering if we can visualize that, perhaps by doing dimensionality reduction on the eigenvalue data set and then seeing where the PFG data lies in the reduced manifold.
Step8: Graphics for Presentation | Python Code:
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
from sklearn.utils import shuffle
import sklearn_mmadsen.graphs as skmg
import sklearn_mmadsen.graphics as skmplt
%matplotlib inline
# plt.style.use("fivethirtyeight")
custom_style = {'axes.labelcolor': 'white',
'xtick.color': 'white',
'ytick.color': 'white'}
sns.set_style("darkgrid", rc=custom_style)
sc_1_3_graphs = pickle.load(open("train-cont-sc-1-3-graphs.pkl",'r'))
sc_1_3_labels = pickle.load(open("train-cont-sc-1-3-labels.pkl",'r'))
print "sc_1_3 cases: ", len(sc_1_3_graphs)
# sc_2_graphs = pickle.load(open("train-cont-sc-2-graphs.pkl",'r'))
# sc_2_labels = pickle.load(open("train-cont-sc-2-labels.pkl",'r'))
# print "sc_2 cases: ", len(sc_2_graphs)
sc_4_5_graphs = pickle.load(open("train-sc-4-5-cont-graphs.pkl",'r'))
sc_4_5_labels = pickle.load(open("train-sc-4-5-cont-labels.pkl",'r'))
print "sc_4_5 cases: ", len(sc_4_5_graphs)
Explanation: SC 1 through 5 Model Comparisons
Previous analyses have been pairwise classifications (linear vs. lineage, early/late vs split/coalescence, rectangular vs. square neighbor models). Here, we perform a multiclass analysis, and compare the performance of GBM with random forest classifiers.
End of explanation
text_labels = ['complete', 'lineage-split', 'rect-nn', 'square-nn']
full_train_graphs = []
full_train_labels = []
full_test_graphs = []
full_test_labels = []
def add_to_dataset(graphs, labels):
train_graphs, train_labels, test_graphs, test_labels = skmg.graph_train_test_split(graphs, labels, test_fraction=0.1)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
full_train_graphs.extend(train_graphs)
full_train_labels.extend(train_labels)
full_test_graphs.extend(test_graphs)
full_test_labels.extend(test_labels)
add_to_dataset(sc_1_3_graphs, sc_1_3_labels)
#add_to_dataset(sc_2_graphs, sc_2_labels)
add_to_dataset(sc_4_5_graphs, sc_4_5_labels)
Explanation: NOTE: Removing sc-2 for the moment because the sample sizes are very small and it's hard to get a reliable test set compared to the other experiments. Will run more simulations.
Now we need to construct a single data set with a 10% test set split, and we'd like it to be fairly even among
the class labels.
Label Catalog
0 = Linear
1 = Lineage
2 = Rectangular nearest neighbor
3 = Square nearest neighbor
End of explanation
train_matrix = skmg.graphs_to_eigenvalue_matrix(full_train_graphs, num_eigenvalues=20)
test_matrix = skmg.graphs_to_eigenvalue_matrix(full_test_graphs, num_eigenvalues=20)
print train_matrix.shape
print test_matrix.shape
Explanation: Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
End of explanation
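For reference, a rough sketch of the idea behind graphs_to_eigenvalue_matrix, assuming it sorts each graph's Laplacian eigenvalues in descending order and pads or truncates each row to a fixed number of columns (the real helper in sklearn_mmadsen may differ in detail):
import numpy as np
import networkx as nx

def eigenvalue_matrix_sketch(graphs, num_eigenvalues=20):
    rows = []
    for g in graphs:
        # Laplacian eigenvalues of the graph, largest first
        spectrum = np.sort(nx.laplacian_spectrum(g))[::-1]
        row = np.zeros(num_eigenvalues)
        n = min(num_eigenvalues, len(spectrum))
        row[:n] = spectrum[:n]
        rows.append(row)
    return np.array(rows)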
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, full_train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(full_test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(full_test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(full_test_labels, pred_label)
Explanation: First Classifier
We're going to be using a gradient boosted classifier, which has some of the best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can get 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
End of explanation
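The description also promises a comparison with random forests; here is a minimal, hedged sketch of that comparison (the hyperparameters are arbitrary, not tuned).
from sklearn.ensemble import RandomForestClassifier

rf_clf = RandomForestClassifier(n_estimators=250, random_state=0, n_jobs=-1)
rf_clf.fit(train_matrix, full_train_labels)
rf_pred = rf_clf.predict(test_matrix)
print "Random forest accuracy on test: %0.3f" % accuracy_score(full_test_labels, rf_pred)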
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01],
'clf__n_estimators': [10,25,50,100,250,500]
}
grid_search = GridSearchCV(pipeline, params, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, full_train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(full_test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(full_test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(full_test_labels, pred_label)
axis_labs = ['Predicted Model', 'Actual Model']
hmap = skmplt.confusion_heatmap(full_test_labels, pred_label, text_labels,
axis_labels = axis_labs, transparent = True,
reverse_color = True, filename = "confusion-heatmap-sc1345.png")
Explanation: Finding Optimal Hyperparameters
End of explanation
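An optional sanity check on the tuned model: cross-validated accuracy on the training matrix. Five folds is an arbitrary choice, and the import location depends on the scikit-learn version in use.
try:
    from sklearn.model_selection import cross_val_score  # newer scikit-learn
except ImportError:
    from sklearn.cross_validation import cross_val_score  # older scikit-learn
cv_scores = cross_val_score(grid_search.best_estimator_, train_matrix, full_train_labels, cv=5)
print "CV accuracy: %0.3f +/- %0.3f" % (cv_scores.mean(), cv_scores.std())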
pfg_graph = nx.read_gml("../../data/pfg-cpl-minmax-by-weight-continuity.gml")
pfg_test_mat = skmg.graphs_to_eigenvalue_matrix([pfg_graph], num_eigenvalues=20)  # match the 20 eigenvalues used for train_matrix
pfg_predict = grid_search.predict(pfg_test_mat)
print pfg_predict
np.set_printoptions(precision=4)
np.set_printoptions(suppress=True)
probs = grid_search.predict_proba(pfg_test_mat)
probs
Explanation: NOTE: the above figure is transparent and white-on-dark for presentation purposes. The axis labels are there....
PFG Data Analysis: Laplacian Eigenvalue Classifier
I'm not at all sure we have a good catalog of models that would approximate the regional interaction network of the Lower Mississippi River Valley yet, but we do have three model-classes (the two NN models are essentially indistinguishable given our analysis here) that are easily distinguishable.
So let's see what model the optimized model fit chooses. We do this simply by reading in the GML for the minmax-by-weight seriation solution from IDSS, converting it to the same kind of eigenvalue matrix as the training data, with the same number of eigenvalues as the training data (even though this PFG subset has 20 assemblages), and then using the fitted and optimized gradient boosted tree model to predict the model class, and the probability of class assignment.
End of explanation
gclf = skmg.GraphEigenvalueNearestNeighbors(n_neighbors=5)
gclf.fit(full_train_graphs, full_train_labels)
gclf.predict([pfg_graph])[0]
distances = gclf.predict_distance_to_train(pfg_graph)
distances.head()
g = sns.FacetGrid(distances, col="model", margin_titles=True)
bins = np.linspace(0, 10, 20)
g.map(sns.distplot, "distance", color="steelblue")
Explanation: Interestingly, there seems to be a vanishingly small probability that the PFG seriation is the result of a pure nearest neighbor model, although the square space seems to fare marginally better than the long, thin space. But overwhelmingly, the predictive weight is on the lineage splitting model, which makes sense given Carl Lipo's dissertation work. Less predictive probability comes from the complete graph/fully connected model, although it's still about 0.223, so there's clearly something about densely connected graphs that resonates in the PFG situation (and the lineage splitting model is essentially complete graphs within each lineage too).
Clearly, however, we need a larger catalog of model classes and variants from which to select. That's next on the todo list.
PFG Data Analysis: Laplacian Distance Similarity
End of explanation
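A compact summary of the same distance table, assuming the 'model' and 'distance' columns shown above: the median Laplacian eigenvalue distance from the PFG graph to the training graphs of each model class.
distances.groupby('model')['distance'].median()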
from sklearn import decomposition
from sklearn import manifold
tsne = manifold.TSNE(n_components=2, init='pca', random_state=0)
X_tsne = tsne.fit_transform(train_matrix)
plt.figure(figsize=(11,8.5))
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=full_train_labels, cmap=plt.cm.Spectral)
tsne = manifold.TSNE(n_components=3, init='pca', random_state=0)
X_tsne = tsne.fit_transform(train_matrix)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(1, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=100)
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], X_tsne[:, 2], c=full_train_labels, cmap=plt.cm.Spectral)
Explanation: Dimensionality Reduction
In this set of analyses, I have the sense that we have good discrimination between models, but that the PFG empirical data set is possibly outside the border of any of the network models and thus the train/test split is crucial in determining how it classifies. I'm wondering if we can visualize that, perhaps by doing dimensionality reduction on the eigenvalue data set and then seeing where the PFG data lies in the reduced manifold.
End of explanation
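t-SNE has no transform() method for new points, so as a rough alternative this sketch projects the training matrix and the PFG eigenvalue vector onto the first two principal components. It is purely illustrative and assumes the PFG vector is built with the same number of eigenvalues as the training matrix.
pca = decomposition.PCA(n_components=2)
X_pca = pca.fit_transform(train_matrix)
pfg_vec = skmg.graphs_to_eigenvalue_matrix([pfg_graph], num_eigenvalues=20)
pfg_pca = pca.transform(pfg_vec)
plt.figure(figsize=(11, 8.5))
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=full_train_labels, cmap=plt.cm.Spectral, alpha=0.5)
plt.scatter(pfg_pca[:, 0], pfg_pca[:, 1], c='black', s=200, marker='*')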
test_g = full_test_graphs[16]
plt.figure(figsize=(12,8))
label_map = dict()
for n,d in test_g.nodes_iter(data=True):
label_map[n] = test_g.node[n]['label'].replace("assemblage-", "")
pos = nx.graphviz_layout(test_g, prog="neato")
nx.draw_networkx(test_g, pos, with_labels=True, labels = label_map)
plt.savefig("test_graph_sc1_5.png", transparent=False)
full_test_labels[16]
Explanation: Graphics for Presentation
End of explanation |
1,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kecerdasan Buatan
Tugas 2
Step1: 1. Eksplorasi Awal Data (10 poin)
Pada bagian ini, Anda diminta untuk mengeksplorasi data latih yang diberikan. Selalu gunakan data ini kecuali diberitahukan sebaliknya. | Python Code:
from __future__ import print_function, division # Use print(...) rather than the print statement
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, confusion_matrix, mean_squared_error
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
%matplotlib inline
RANDOM_STATE = 1337
np.random.seed(RANDOM_STATE)
Explanation: Kecerdasan Buatan (Artificial Intelligence)
Assignment 2 (Tugas 2): k-Nearest Neighbours & k-Means
Mechanism
You only need to submit this file to the uploader provided at http://elearning2.uai.ac.id/. Rename this file to tugas2_NIM.ipynb when submitting.
Late submissions: Assignments submitted after the stated deadline will not be accepted. A late submission results in a score of zero for this assignment.
Collaboration: You may discuss the assignment with your classmates, but copying code or text from them is strictly forbidden.
Instructions
The packages you will use for this assignment include:
matplotlib
numpy
pandas
pillow
scipy
seaborn
You may (if you feel it is necessary) import additional modules for this assignment. However, the modules listed above should be sufficient for your needs. For any code taken from another source, include the URL of the reference if it was taken from the internet!
Pay attention to the points for each question! The smaller the points, the less code should be needed to answer that question.
NIM (student ID):
Final score: XX/50
Dataset Description
In this assignment, you will look at algorithms based on the distance between objects. You are given a dataset of images of handwritten digits. This dataset is a simpler version of the frequently used MNIST dataset. Each digit image in this dataset is 8x8 pixels. The full description can be found here. Your task is to apply the k-NN and k-Means algorithms to predict and cluster the 10 digits and to evaluate the results.
Importing Modules and the Dataset
End of explanation
X, y = load_digits(return_X_y=True)
Explanation: 1. Initial Data Exploration (10 points)
In this part, you are asked to explore the provided training data. Always use this data unless told otherwise.
End of explanation |
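Not part of the original assignment text, just an illustrative starting point for the initial exploration: check the shapes of the arrays and plot a few of the 8x8 digit images.
print(X.shape, y.shape)
fig, axes = plt.subplots(1, 5, figsize=(10, 3))
for ax, image, label in zip(axes, X[:5], y[:5]):
    ax.imshow(image.reshape(8, 8), cmap='gray_r')
    ax.set_title('label: {}'.format(label))
    ax.axis('off')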
1,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Project-1
Step1: The data
The data consists of total population and total number of deaths due to TB (excluding HIV) in 2013 in each of the BRICS (Brazil, Russia, India, China, South Africa) and Portuguese-speaking countries.
The data was taken in July 2015 from http
Step2: The range of the problem
The column of interest is the last one.
Step3: The total number of deaths in 2013 is
Step4: The largest and smallest number of deaths in a single country are
Step5: From less than 20 to almost a quarter of a million deaths is a huge range. The average number of deaths, over all countries in the data, can give a better idea of the seriousness of the problem in each country.
The average can be computed as the mean or the median. Given the wide range of deaths, the median is probably a more sensible average measure.
Step6: The median is far lower than the mean. This indicates that some of the countries had a very high number of TB deaths in 2013, pushing the value of the mean up.
The most affected
To see the most affected countries, the table is sorted in ascending order by the last column, which puts those countries in the last rows.
Step7: The table raises the possibility that a large number of deaths may be partly due to a large population. To compare the countries on an equal footing, the death rate per 100,000 inhabitants is computed. | Python Code:
# Print platform info of Python exec env.
import sys
sys.version
import warnings
warnings.simplefilter('ignore', FutureWarning)
from pandas import *
show_versions()
Explanation: Table of Contents
1. Project 1: Deaths by tuberculosis
1.1. Env
1.2. The data
1.3. The range of the problem
1.4. The most affected
1.5. Conclusions
# Project 1: Deaths by tuberculosis
by Michel Wermelinger, 14 July 2015, with minor edits on 5 April 2016<br>
This is the project notebook for Week 1 of The Open University's [_Learn to code for Data Analysis_](http://futurelearn.com/courses/learn-to-code) course.
In 2000, the United Nations set eight Millennium Development Goals (MDGs) to reduce poverty and diseases, improve gender equality and environmental sustainability, etc. Each goal is quantified and time-bound, to be achieved by the end of 2015. Goal 6 is to have halted and started reversing the spread of HIV, malaria and tuberculosis (TB).
TB doesn't make headlines like Ebola, SARS (severe acute respiratory syndrome) and other epidemics, but is far deadlier. For more information, see the World Health Organisation (WHO) page <http://www.who.int/gho/tb/en/>.
Given the population and number of deaths due to TB in some countries during one year, the following questions will be answered:
- What is the total, maximum, minimum and average number of deaths in that year?
- Which countries have the most and the least deaths?
- What is the death rate (deaths per 100,000 inhabitants) for each country?
- Which countries have the lowest and highest death rate?
The death rate allows for a better comparison of countries with widely different population sizes.
## Env
End of explanation
data = read_excel('WHO POP TB some.xls')
data.head()
data.tail()
data.info()
data.describe()
Explanation: The data
The data consists of total population and total number of deaths due to TB (excluding HIV) in 2013 in each of the BRICS (Brazil, Russia, India, China, South Africa) and Portuguese-speaking countries.
The data was taken in July 2015 from http://apps.who.int/gho/data/node.main.POP107?lang=en (population) and http://apps.who.int/gho/data/node.main.593?lang=en (deaths). The uncertainty bounds of the number of deaths were ignored.
The data was collected into an Excel file which should be in the same folder as this notebook.
End of explanation
tbColumn = data['TB deaths']
Explanation: The range of the problem
The column of interest is the last one.
End of explanation
tbColumn.sum()
Explanation: The total number of deaths in 2013 is:
End of explanation
tbColumn.max()
tbColumn.min()
Explanation: The largest and smallest number of deaths in a single country are:
End of explanation
tbColumn.mean()
tbColumn.median()
Explanation: From less than 20 to almost a quarter of a million deaths is a huge range. The average number of deaths, over all countries in the data, can give a better idea of the seriousness of the problem in each country.
The average can be computed as the mean or the median. Given the wide range of deaths, the median is probably a more sensible average measure.
End of explanation
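A quick way to check the skew suggested above: count how many of the countries fall below the mean number of deaths.
(tbColumn < tbColumn.mean()).sum()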
data.sort_values('TB deaths').head()
Explanation: The median is far lower than the mean. This indicates that some of the countries had a very high number of TB deaths in 2013, pushing the value of the mean up.
The most affected
To see the most affected countries, the table is sorted in ascending order by the last column, which puts those countries in the last rows.
End of explanation
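Since the sort above is ascending, the most affected countries sit in the last rows; showing the tail of the sorted table makes them explicit.
data.sort_values('TB deaths').tail()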
populationColumn = data['Population (1000s)']
data['TB deaths (per 100,000)'] = tbColumn * 100 / populationColumn
data.head()
Explanation: The table raises the possibility that a large number of deaths may be partly due to a large population. To compare the countries on an equal footing, the death rate per 100,000 inhabitants is computed.
End of explanation |
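To answer the remaining question of which countries have the lowest and highest death rates, one can sort by the new column (lowest rates first, highest rates last).
data.sort_values('TB deaths (per 100,000)')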
1,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
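# EXAMPLE (hypothetical value -- replace with the choice that matches your model):
#     DOC.set_value("Elastic-visco-plastic")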
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
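# EXAMPLE (hypothetical values for this 1.N property -- adjust to your model):
#     DOC.set_value("Albedo")
#     DOC.set_value("Freshwater")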
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
1,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix generation
Init symbols for sympy
Step1: Lame params
Step2: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
Step3: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
Step4: Christoffel symbols
Step5: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
Step6: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
Step7: Physical coordinates
$u_i=u_{[i]} H_i$
Step8: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
Step9: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
Step10: Mass matrix | Python Code:
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
init_printing()
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
Explanation: Matrix generation
Init symbols for sympy
End of explanation
# h1 = Function("H1")
# h2 = Function("H2")
# h3 = Function("H3")
# H1 = h1(alpha1, alpha2, alpha3)
# H2 = h2(alpha1, alpha2, alpha3)
# H3 = h3(alpha1, alpha2, alpha3)
H1,H2,H3=symbols('H1,H2,H3')
H=[H1, H2, H3]
DIM=3
dH = zeros(DIM,DIM)
for i in range(DIM):
for j in range(DIM):
dH[i,j]=Symbol('H_{{{},{}}}'.format(i+1,j+1))
dH
Explanation: Lame params
End of explanation
G_up = getMetricTensorUpLame(H1, H2, H3)
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
G_down = getMetricTensorDownLame(H1, H2, H3)
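# Consistency check (illustrative): the co- and contravariant metric tensors
# should be mutually inverse, so this product should simplify to the identity matrix.
simplify(G_up * G_down)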
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
DIM=3
G_down_diff = MutableDenseNDimArray.zeros(DIM, DIM, DIM)
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
G_down_diff[i,i,k]=2*H[i]*dH[i,k]
GK = getChristoffelSymbols2(G_up, G_down_diff, (alpha1, alpha2, alpha3))
GK
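# Sanity check (illustrative): for a Cartesian metric (H1 = H2 = H3 = 1 and all
# derivatives H_{i,j} = 0) every Christoffel symbol should vanish; the components
# with upper index 1 are displayed as an example.
cartesian = {H1: 1, H2: 1, H3: 1}
cartesian.update({dH[i, j]: 0 for i in range(DIM) for j in range(DIM)})
Matrix(3, 3, lambda i, j: S(GK[i, j, 0]).subs(cartesian))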
Explanation: Christoffel symbols
End of explanation
def row_index_to_i_j_grad(i_row):
return i_row // 3, i_row % 3
B = zeros(9, 12)
B[0,1] = S(1)
B[1,2] = S(1)
B[2,3] = S(1)
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[7,10] = S(1)
B[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B[row_index, 0] = -GK[i,j,0]
B[row_index, 4] = -GK[i,j,1]
B[row_index, 8] = -GK[i,j,2]
B
Explanation: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
End of explanation
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
def E_NonLinear(grad_u):
N = 3
du = zeros(N, N)
# print("===Deformations===")
for i in range(N):
for j in range(N):
index = i*N+j
du[j,i] = grad_u[index]
# print("========")
I = eye(3)
a_values = S(1)/S(2) * du * G_up
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,0] = 2*a_values[2,0]
E_NL[4,3] = 2*a_values[2,1]
E_NL[4,6] = 2*a_values[2,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
return E_NL
%aimport geom_util
u=getUHat3D(alpha1, alpha2, alpha3)
# u=getUHatU3Main(alpha1, alpha2, alpha3)
gradu=B*u
E_NL = E_NonLinear(gradu)*B
E_NL
Explanation: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
End of explanation
P=zeros(12,12)
P[0,0]=H[0]
P[1,0]=dH[0,0]
P[1,1]=H[0]
P[2,0]=dH[0,1]
P[2,2]=H[0]
P[3,0]=dH[0,2]
P[3,3]=H[0]
P[4,4]=H[1]
P[5,4]=dH[1,0]
P[5,5]=H[1]
P[6,4]=dH[1,1]
P[6,6]=H[1]
P[7,4]=dH[1,2]
P[7,7]=H[1]
P[8,8]=H[2]
P[9,8]=dH[2,0]
P[9,9]=H[2]
P[10,8]=dH[2,1]
P[10,10]=H[2]
P[11,8]=dH[2,2]
P[11,11]=H[2]
P=simplify(P)
P
B_P = zeros(9,9)
for i in range(3):
for j in range(3):
ratio=1
if (i==0):
ratio = ratio*H1
elif (i==1):
ratio = ratio*H2
elif (i==2):
ratio = ratio*H3
if (j==0):
ratio = ratio*H1
elif (j==1):
ratio = ratio*H2
elif (j==2):
ratio = ratio*H3
row_index = i*3+j
B_P[row_index, row_index] = ratio
Grad_U_P = simplify(B_P*B*P)
B_P
StrainL=simplify(E*Grad_U_P)
StrainL
%aimport geom_util
u=getUHatU3Main(alpha1, alpha2, alpha3)
gradup=B_P*B*P*u
E_NLp = E_NonLinear(gradup)*B*P*u
simplify(E_NLp)
Explanation: Physical coordinates
$u_i=u_{[i]} H_i$
End of explanation
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
D_p_T = StrainL*T
simplify(D_p_T)
u = Function("u")
t = Function("theta")
w = Function("w")
u1=u(alpha1)+alpha3*t(alpha1)
u3=w(alpha1)
gu = zeros(12,1)
gu[0] = u1
gu[1] = u1.diff(alpha1)
gu[3] = u1.diff(alpha3)
gu[8] = u3
gu[9] = u3.diff(alpha1)
gradup=Grad_U_P*gu
# o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2
# o21=K*t(alpha1)
# O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20
# O=expand(O)
# O=collect(O,alpha3)
# simplify(O)
StrainNL = E_NonLinear(gradup)*gradup
simplify(StrainNL)
Explanation: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
L=zeros(12,12)
h=Symbol('h')
p0=1/2-alpha3/h
p1=1/2+alpha3/h
p2=1-(2*alpha3/h)**2
L[0,0]=p0
L[0,2]=p1
L[0,4]=p2
L[1,1]=p0
L[1,3]=p1
L[1,5]=p2
L[3,0]=p0.diff(alpha3)
L[3,2]=p1.diff(alpha3)
L[3,4]=p2.diff(alpha3)
L[8,6]=p0
L[8,8]=p1
L[8,10]=p2
L[9,7]=p0
L[9,9]=p1
L[9,11]=p2
L[11,6]=p0.diff(alpha3)
L[11,8]=p1.diff(alpha3)
L[11,10]=p2.diff(alpha3)
L
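# Quick check (illustrative): the linear shape functions should sum to one,
# i.e. p0 + p1 == 1 for every alpha_3.
simplify(p0 + p1)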
B_General = zeros(9, 12)
B_General[0,1] = S(1)
B_General[1,2] = S(1)
B_General[2,3] = S(1)
B_General[3,5] = S(1)
B_General[4,6] = S(1)
B_General[5,7] = S(1)
B_General[6,9] = S(1)
B_General[7,10] = S(1)
B_General[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B_General[row_index, 0] = -Symbol("G_{{{}{}}}^1".format(i+1,j+1))
B_General[row_index, 4] = -Symbol("G_{{{}{}}}^2".format(i+1,j+1))
B_General[row_index, 8] = -Symbol("G_{{{}{}}}^3".format(i+1,j+1))
B_General
simplify(B_General*L)
D_p_L = StrainL*L
simplify(D_p_L)
h_num = 0.5  # numeric thickness for this check; a separate name keeps the symbol h intact for later substitutions
expr = (0.5 - alpha3 / h_num) * (1 - (2 * alpha3 / h_num) ** 2)  # /(1+alpha3*0.8); renamed to avoid shadowing sympy's exp
p02 = integrate(expr, (alpha3, -h_num / 2, h_num / 2))
integral = expand(simplify(p02))
integral
Explanation: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[2,8]=1
M=simplify(rho*P.T*B_h.T*G_up*B_h*P)
M
e1 = L.T*M*L/(1+alpha3*0.8)
e2 = L.T*M*L
thick = 0.1  # must be defined before it is used in the integration limits below
e1r = integrate(e1, (alpha3, -thick/2, thick/2))
e2r = integrate(e2, (alpha3, -thick/2, thick/2))
e1r.subs(h, thick) - e2r.subs(h, thick)
Explanation: Mass matrix
End of explanation |
1,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: VSTOXX Futures & Options Data
We start by loading VSTOXX data from a pandas HDFStore into DataFrame objects (source
Step2: VSTOXX index for the first quarter of 2014.
Step3: The VSTOXX futures data (8 futures maturities/quotes per day).
Step4: The VSTOXX options data. This data set is quite large due to the large number of European put and call options on the VSTOXX.
Step5: As a helper function we need a function to calculate all relevant third Fridays for all relevant maturity months of the data sets.
Step6: Implied Volatilities from Market Quotes
Often calibration efforts are undertaken to replicate the market implied volatilities or the so-called volatility surface as good as possible. With DX Analytics and the BSM_european_option class, you can efficiently calculate (i.e. numerically estimate) implied volatilities. For the example, we use the VSTOXX futures and call options data from 31. March 2014.
Some definitions, the pre-selection of option data and the pre-definition of the market environment needed.
Step7: The following loop now calculates the implied volatilities for all those options whose strike lies within the defined tolerance level.
Step8: A selection of the results.
Step9: And the complete results visualized.
Step10: Market Modeling
This sub-section now implements the model calibration based on selected options data. In particular, we choose, for a given pricing date, the following options data
Step11: Options Modeling
Given the options and their respective quotes to which to calibrate the model, the function get_option_models returns the DX Analytics option models for all relevant options. As risk factor model we use the square_root_diffusion class.
Step12: The function calculate_model_values estimates and returns model value estimates for all relevant options given a parameter set for the square_root_diffusion risk factor model.
Step13: Calibration Functions
Mean-Squared Error Calculation
The calibration of the pricing model is based on the minimization of the mean-squared error (MSE) of the model values vs. the market quotes. The MSE calculation is implemented by the function mean_squared_error which also penalizes economically implausible parameter values.
Step14: Implementing the Calibration Procedure
The function get_parameter_series calibrates the model to the market data for every date contained in the pricing_date_list object for all maturities contained in the maturity_list object.
Step15: The Calibration Itself
This completes the set of necessary function to implement such a larger calibration effort. The following code defines the dates for which a calibration shall be conducted and for which maturities the calibration is required.
Step16: Calibration Results
The results are now stored in the pandas DataFrame called parameters. We set a new index and inspect the last results. Throughout the MSE is pretty low indicated a good fit of the model to the market quotes.
Step17: This is also illustrated by the visualization of the time series data for the calibrated/optimal parameter values. The MSE is below 0.01 throughout.
Step18: The following generates a plot of the calibration results for the last calibration day. The absolute price differences are below 0.10 EUR for all options. | Python Code:
from dx import *
import numpy as np
import pandas as pd
from pylab import plt
plt.style.use('seaborn')
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Implied Volatilities and Model Calibration
This section of the documentation illustrates how to calculate implied volatilities and how to calibrate a model to VSTOXX volatility index call option quotes. The example implements the calibration for a total of one month's worth of data.
End of explanation
h5 = pd.HDFStore('./data/vstoxx_march_2014.h5', 'r')
vstoxx_index = h5['vstoxx_index']
vstoxx_futures = h5['vstoxx_futures']
vstoxx_options = h5['vstoxx_options']
h5.close()
Explanation: VSTOXX Futures & Options Data
We start by loading VSTOXX data from a pandas HDFStore into DataFrame objects (source: Eurex, cf. http://www.eurexchange.com/advanced-services/).
End of explanation
%matplotlib inline
vstoxx_index['V2TX'].plot(figsize=(10, 6))
Explanation: VSTOXX index for the first quarter of 2014.
End of explanation
vstoxx_futures.info()
vstoxx_futures.tail()
Explanation: The VSTOXX futures data (8 futures maturities/quotes per day).
End of explanation
vstoxx_options.info()
vstoxx_options.tail()
Explanation: The VSTOXX options data. This data set is quite large due to the large number of European put and call options on the VSTOXX.
End of explanation
import datetime as dt
import calendar
def third_friday(date):
day = 21 - (calendar.weekday(date.year, date.month, 1) + 2) % 7
return dt.datetime(date.year, date.month, day)
third_fridays = {}
for month in set(vstoxx_futures['EXP_MONTH']):
third_fridays[month] = third_friday(dt.datetime(2014, month, 1))
third_fridays
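# Quick sanity check (illustrative): the third Friday of April 2014 should be 18 April 2014.
third_friday(dt.datetime(2014, 4, 1))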
Explanation: As a helper, we need a function that calculates the third Friday for every relevant maturity month in the data sets.
End of explanation
V0 = 17.6639 # VSTOXX level on 31.03.2014
futures_data = vstoxx_futures[vstoxx_futures.DATE == '2014/3/31'].copy()
options_data = vstoxx_options[(vstoxx_options.DATE == '2014/3/31')
& (vstoxx_options.TYPE == 'C')].copy()
me = market_environment('me', dt.datetime(2014, 3, 31))
me.add_constant('initial_value', V0)  # VSTOXX index level on 31.03.2014
me.add_constant('volatility', 2.0) # for initialization
me.add_curve('discount_curve', constant_short_rate('r', 0.01)) # assumption
options_data['IMP_VOL'] = 0.0 # initialization new iv column
Explanation: Implied Volatilities from Market Quotes
Often, calibration efforts are undertaken to replicate the market implied volatilities, or the so-called volatility surface, as well as possible. With DX Analytics and the BSM_european_option class, you can efficiently calculate (i.e. numerically estimate) implied volatilities. For the example, we use the VSTOXX futures and call options data from 31 March 2014.
Some definitions, the pre-selection of the option data and the pre-definition of the market environment are needed first.
End of explanation
%%time
tol = 0.3 # tolerance level for moneyness
for option in options_data.index:
# iterating over all option quotes
forward = futures_data[futures_data['MATURITY'] == \
options_data.loc[option]['MATURITY']]['PRICE'].values
# picking the right futures value
if (forward * (1 - tol) < options_data.loc[option]['STRIKE']
< forward * (1 + tol)):
# only for options with moneyness within tolerance
call = options_data.loc[option]
me.add_constant('strike', call['STRIKE'])
me.add_constant('maturity', call['MATURITY'])
call_option = BSM_european_option('call', me)
options_data.loc[option, 'IMP_VOL'] = \
call_option.imp_vol(call['PRICE'], 'call', volatility_est=0.6)
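# Illustrative check: how many quotes received an implied volatility estimate
print('options with implied volatility estimate:', (options_data['IMP_VOL'] > 0).sum())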
Explanation: The following loop now calculates the implied volatilities for all those options whose strike lies within the defined tolerance level.
End of explanation
options_data[60:70]
Explanation: A selection of the results.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plot_data = options_data[options_data.IMP_VOL > 0]
plt.figure(figsize=(10, 6))
for maturity in sorted(set(options_data['MATURITY'])):
data = plot_data.isin({'MATURITY': [maturity,]})
data = plot_data[plot_data.MATURITY == maturity]
# select data for this maturity
plt.plot(data['STRIKE'], data['IMP_VOL'],
label=maturity.date(), lw=1.5)
plt.plot(data['STRIKE'], data['IMP_VOL'], 'r.')
plt.xlabel('strike')
plt.ylabel('implied volatility of volatility')
plt.legend()
plt.show()
Explanation: And the complete results visualized.
End of explanation
tol = 0.2
def get_option_selection(pricing_date, maturity, tol=tol):
''' Function selects relevant options data. '''
forward = vstoxx_futures[(vstoxx_futures.DATE == pricing_date)
& (vstoxx_futures.MATURITY == maturity)]['PRICE'].values[0]
option_selection = \
vstoxx_options[(vstoxx_options.DATE == pricing_date)
& (vstoxx_options.MATURITY == maturity)
& (vstoxx_options.TYPE == 'C')
& (vstoxx_options.STRIKE > (1 - tol) * forward)
& (vstoxx_options.STRIKE < (1 + tol) * forward)]
return option_selection, forward
Explanation: Market Modeling
This sub-section now implements the model calibration based on selected options data. In particular, we choose, for a given pricing date, the following options data:
for a single maturity only
call options only
for a certain moneyness of the options
Relevant Market Data
The following function returns the relevant market data per calibration date:
End of explanation
def get_option_models(pricing_date, maturity, option_selection):
''' Models and returns traded options for given option_selection object. '''
me_vstoxx = market_environment('me_vstoxx', pricing_date)
initial_value = vstoxx_index['V2TX'][pricing_date]
me_vstoxx.add_constant('initial_value', initial_value)
me_vstoxx.add_constant('final_date', maturity)
me_vstoxx.add_constant('currency', 'EUR')
me_vstoxx.add_constant('frequency', 'W')
me_vstoxx.add_constant('paths', 10000)
csr = constant_short_rate('csr', 0.01)
# somewhat arbitrarily chosen here
me_vstoxx.add_curve('discount_curve', csr)
# parameters to be calibrated later
me_vstoxx.add_constant('kappa', 1.0)
me_vstoxx.add_constant('theta', 1.2 * initial_value)
me_vstoxx.add_constant('volatility', 1.0)
vstoxx_model = square_root_diffusion('vstoxx_model', me_vstoxx)
# square-root diffusion for volatility modeling
# mean-reverting, positive process
# option parameters and payoff
me_vstoxx.add_constant('maturity', maturity)
payoff_func = 'np.maximum(maturity_value - strike, 0)'
option_models = {}
for option in option_selection.index:
        strike = option_selection['STRIKE'].loc[option]
me_vstoxx.add_constant('strike', strike)
option_models[option] = \
valuation_mcs_european_single(
'eur_call_%d' % strike,
vstoxx_model,
me_vstoxx,
payoff_func)
return vstoxx_model, option_models
Explanation: Options Modeling
Given the options and their respective quotes to which to calibrate the model, the function get_option_models returns the DX Analytics option models for all relevant options. As risk factor model we use the square_root_diffusion class.
End of explanation
def calculate_model_values(p0):
''' Returns all relevant option values.
Parameters
===========
p0 : tuple/list
tuple of kappa, theta, volatility
Returns
=======
model_values : dict
dictionary with model values
'''
kappa, theta, volatility = p0
vstoxx_model.update(kappa=kappa,
theta=theta,
volatility=volatility)
model_values = {}
for option in option_models:
model_values[option] = \
option_models[option].present_value(fixed_seed=True)
return model_values
Explanation: The function calculate_model_values estimates and returns model value estimates for all relevant options given a parameter set for the square_root_diffusion risk factor model.
End of explanation
i = 0
def mean_squared_error(p0):
''' Returns the mean-squared error given
the model and market values.
Parameters
===========
p0 : tuple/list
tuple of kappa, theta, volatility
Returns
=======
MSE : float
mean-squared error
'''
if p0[0] < 0 or p0[1] < 5. or p0[2] < 0 or p0[2] > 10.:
return 100
global i, option_selection, vstoxx_model, option_models, first, last
pd = dt.datetime.strftime(
option_selection['DATE'].iloc[0].to_pydatetime(),
'%d-%b-%Y')
mat = dt.datetime.strftime(
option_selection['MATURITY'].iloc[0].to_pydatetime(),
'%d-%b-%Y')
model_values = calculate_model_values(p0)
option_diffs = {}
for option in model_values:
option_diffs[option] = (model_values[option]
- option_selection['PRICE'].loc[option])
MSE = np.sum(np.array(list(option_diffs.values())) ** 2) / len(option_diffs)
if i % 150 == 0:
        # print a header on the first call, then one status line every 150 iterations
if i == 0:
print('%12s %13s %4s %6s %6s %6s --> %6s' % \
('pricing_date', 'maturity_date', 'i', 'kappa',
'theta', 'vola', 'MSE'))
print('%12s %13s %4d %6.3f %6.3f %6.3f --> %6.3f' % \
(pd, mat, i, p0[0], p0[1], p0[2], MSE))
i += 1
return MSE
Explanation: Calibration Functions
Mean-Squared Error Calculation
The calibration of the pricing model is based on the minimization of the mean-squared error (MSE) of the model values vs. the market quotes. The MSE calculation is implemented by the function mean_squared_error which also penalizes economically implausible parameter values.
End of explanation
import scipy.optimize as spo
def get_parameter_series(pricing_date_list, maturity_list):
global i, option_selection, vstoxx_model, option_models, first, last
# collects optimization results for later use (eg. visualization)
parameters = pd.DataFrame()
for maturity in maturity_list:
first = True
for pricing_date in pricing_date_list:
option_selection, forward = \
get_option_selection(pricing_date, maturity)
vstoxx_model, option_models = \
get_option_models(pricing_date, maturity, option_selection)
if first is True:
# use brute force for the first run
i = 0
opt = spo.brute(mean_squared_error,
((0.5, 2.51, 1.), # range for kappa
(10., 20.1, 5.), # range for theta
(0.5, 10.51, 5.0)), # range for volatility
finish=None)
i = 0
opt = spo.fmin(mean_squared_error, opt,
maxiter=200, maxfun=350, xtol=0.0000001, ftol=0.0000001)
parameters = parameters.append(
pd.DataFrame(
{'pricing_date' : pricing_date,
'maturity' : maturity,
'initial_value' : vstoxx_model.initial_value,
'kappa' : opt[0],
'theta' : opt[1],
'sigma' : opt[2],
'MSE' : mean_squared_error(opt)}, index=[0,]),
ignore_index=True)
first = False
last = opt
return parameters
Explanation: Implementing the Calibration Procedure
The function get_parameter_series calibrates the model to the market data for every date contained in the pricing_date_list object for all maturities contained in the maturity_list object.
End of explanation
%%time
pricing_date_list = pd.date_range('2014/3/1', '2014/3/31', freq='B')
maturity_list = [third_fridays[7]]
parameters = get_parameter_series(pricing_date_list, maturity_list)
Explanation: The Calibration Itself
This completes the set of necessary functions to implement such a larger calibration effort. The following code defines the dates for which a calibration shall be conducted and for which maturities the calibration is required.
End of explanation
paramet = parameters.set_index('pricing_date')
paramet.tail()
Explanation: Calibration Results
The results are now stored in the pandas DataFrame called parameters. We set a new index and inspect the last results. Throughout, the MSE is quite low, indicating a good fit of the model to the market quotes.
End of explanation
%matplotlib inline
paramet[['kappa', 'theta', 'sigma', 'MSE']].plot(subplots=True, color='b', figsize=(10, 12))
plt.tight_layout()
Explanation: This is also illustrated by the visualization of the time series data for the calibrated/optimal parameter values. The MSE is below 0.01 throughout.
End of explanation
index = paramet.index[-1]
opt = np.array(paramet[['kappa', 'theta', 'sigma']].loc[index])
option_selection = get_option_selection(index, maturity_list[0], tol=tol)[0]
model_values = np.sort(np.array(list(calculate_model_values(opt).values())))[::-1]
import matplotlib.pyplot as plt
%matplotlib inline
fig, (ax1, ax2) = plt.subplots(2, sharex=True, figsize=(10, 8))
strikes = option_selection['STRIKE'].values
ax1.plot(strikes, option_selection['PRICE'], label='market quotes')
ax1.plot(strikes, model_values, 'ro', label='model values')
ax1.set_ylabel('option values')
ax1.grid(True)
ax1.legend(loc=0)
wi = 0.25
ax2.bar(strikes - wi / 2., model_values - option_selection['PRICE'],
label='market quotes', width=wi)
ax2.grid(True)
ax2.set_ylabel('differences')
ax2.set_xlabel('strikes')
Explanation: The following generates a plot of the calibration results for the last calibration day. The absolute price differences are below 0.10 EUR for all options.
End of explanation |
1,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 使用内置方法进行训练和评估
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 简介
本指南涵盖使用内置 API 进行训练和验证时的训练、评估和预测(推断)模型(例如 Model.fit()、Model.evaluate() 和 Model.predict())。
如果您有兴趣在指定自己的训练步骤函数时利用 fit(),请参阅<a href="https
Step3: 下面是典型的端到端工作流,包括:
训练
根据从原始训练数据生成的预留集进行验证
对测试数据进行评估
在此示例中,我们使用 MNIST 数据。
Step4: 我们指定训练配置(优化器、损失、指标):
Step5: 我们调用 fit(),它会通过将数据切分成大小为 batch_size 的“批次”,然后在给定数量的 epochs 内重复遍历整个数据集来训练模型。
Step6: 返回的 history 对象保存训练期间的损失值和指标值记录:
Step7: 我们通过 evaluate() 在测试数据上评估模型:
Step8: 现在,我们来详细查看此工作流的每一部分。
compile() 方法:指定损失、指标和优化器
要使用 fit() 训练模型,您需要指定损失函数、优化器以及一些要监视的指标(可选)。
将它们作为 compile() 方法的参数传递给模型:
Step9: metrics 参数应当为列表 - 您的模型可以具有任意数量的指标。
如果您的模型具有多个输出,则可以为每个输出指定不同的损失和指标,并且可以调整每个输出对模型总损失的贡献。您可以在将数据传递到多输入、多输出模型部分中找到有关此问题的更多详细信息。
请注意,如果您对默认设置感到满意,那么在许多情况下,都可以通过字符串标识符将优化器、损失和指标指定为捷径:
Step10: 为方便以后重用,我们将模型定义和编译步骤放入函数中;我们将在本指南的不同示例中多次调用它们。
Step11: 提供许多内置优化器、损失和指标
通常,您不必从头开始创建自己的损失、指标或优化器,因为您需要的可能已经是 Keras API 的一部分:
优化器:
SGD()(有或没有动量)
RMSprop()
Adam()
等等
损失:
MeanSquaredError()
KLDivergence()
CosineSimilarity()
等等
指标:
AUC()
Precision()
Recall()
等等
自定义损失
如果您需要创建自定义损失,Keras 提供了两种方式。
第一种方式涉及创建一个接受输入 y_true 和 y_pred 的函数。下面的示例显示了一个计算实际数据与预测值之间均方误差的损失函数:
Step12: 如果您需要一个使用除 y_true 和 y_pred 之外的其他参数的损失函数,则可以将 tf.keras.losses.Loss 类子类化,并实现以下两种方法:
__init__(self):接受要在调用损失函数期间传递的参数
call(self, y_true, y_pred):使用目标 (y_true) 和模型预测 (y_pred) 来计算模型的损失
假设您要使用均方误差,但存在一个会抑制预测值远离 0.5(我们假设分类目标采用独热编码,且取值介于 0 和 1 之间)的附加项。这会为模型创建一个激励,使其不会对预测值过于自信,这可能有助于减轻过拟合(在尝试之前,我们不知道它是否有效!)。
您可以按以下方式处理:
Step13: 自定义指标
如果您需要不属于 API 的指标,则可以通过将 tf.keras.metrics.Metric 类子类化来轻松创建自定义指标。您将需要实现 4 个方法:
__init__(self),您将在其中为指标创建状态变量。
update_state(self, y_true, y_pred, sample_weight=None),使用目标 y_true 和模型预测 y_pred 更新状态变量。
result(self),使用状态变量来计算最终结果。
reset_states(self),用于重新初始化指标的状态。
状态更新和结果计算分开处理(分别在 update_state() 和 result() 中),因为在某些情况下,结果计算的开销可能非常大,只能定期执行。
下面是一个展示如何实现 CategoricalTruePositives 指标的简单示例,该指标可以计算有多少样本被正确分类为属于给定类:
Step14: 处理不适合标准签名的损失和指标
绝大多数损失和指标都可以通过 y_true 和 y_pred 计算得出,其中 y_pred 是模型的输出,但不是全部。例如,正则化损失可能仅需要激活层(在这种情况下没有目标),并且这种激活可能不是模型输出。
在此类情况下,您可以从自定义层的调用方法内部调用 self.add_loss(loss_value)。以这种方式添加的损失会在训练期间添加到“主要”损失中(传递给 compile() 的损失)。下面是一个添加激活正则化的简单示例(请注意,激活正则化内置于所有 Keras 层中 - 此层只是为了提供一个具体示例):
Step15: 您可以使用 add_metric() 对记录指标值执行相同的操作:
Step16: 在函数式 API 中,您还可以调用 model.add_loss(loss_tensor) 或 model.add_metric(metric_tensor, name, aggregation)。
下面是一个简单的示例:
Step17: 请注意,当您通过 add_loss() 传递损失时,可以在没有损失函数的情况下调用 compile(),因为模型已经有损失要最小化。
考虑以下 LogisticEndpoint 层:它以目标和 logits 作为输入,并通过 add_loss() 跟踪交叉熵损失。另外,它还通过 add_metric() 跟踪分类准确率。
Step18: 您可以在具有两个输入(输入数据和目标)的模型中使用它,编译时无需 loss 参数,如下所示:
Step19: 有关训练多输入模型的更多信息,请参阅将数据传递到多输入、多输出模型部分。
自动分离验证预留集
在您看到的第一个端到端示例中,我们使用了 validation_data 参数将 NumPy 数组 (x_val, y_val) 的元组传递给模型,用于在每个周期结束时评估验证损失和验证指标。
这是另一个选项:参数 validation_split 允许您自动保留部分训练数据以供验证。参数值表示要保留用于验证的数据比例,因此应将其设置为大于 0 且小于 1 的数字。例如,validation_split=0.2 表示“使用 20% 的数据进行验证”,而 validation_split=0.6 表示“使用 60% 的数据进行验证”。
验证的计算方法是在进行任何打乱顺序之前,获取 fit() 调用接收到的数组的最后 x% 个样本。
请注意,仅在使用 NumPy 数据进行训练时才能使用 validation_split。
Step20: 通过 tf.data 数据集进行训练和评估
在上面的几个段落中,您已经了解了如何处理损失、指标和优化器,并且已经了解当数据作为 NumPy 数组传递时,如何在 fit() 中使用 validation_data 和 validation_split 参数。
现在,让我们看一下数据以 tf.data.Dataset 对象形式出现的情况。
tf.data API 是 TensorFlow 2.0 中的一组实用工具,用于以快速且可扩展的方式加载和预处理数据。
有关创建 Datasets 的完整指南,请参阅 tf.data 文档。
您可以将 Dataset 实例直接传递给方法 fit()、evaluate() 和 predict():
Step21: 请注意,数据集会在每个周期结束时重置,因此可以在下一个周期重复使用。
如果您只想在来自此数据集的特定数量批次上进行训练,则可以传递 steps_per_epoch 参数,此参数可以指定在继续下一个周期之前,模型应使用此数据集运行多少训练步骤。
如果执行此操作,则不会在每个周期结束时重置数据集,而是会继续绘制接下来的批次。数据集最终将用尽数据(除非它是无限循环的数据集)。
Step22: 使用验证数据集
您可以在 fit() 中将 Dataset 实例作为 validation_data 参数传递:
Step23: 在每个周期结束时,模型将迭代验证数据集并计算验证损失和验证指标。
如果只想对此数据集中的特定数量批次运行验证,则可以传递 validation_steps 参数,此参数可以指定在中断验证并进入下一个周期之前,模型应使用验证数据集运行多少个验证步骤:
Step24: 请注意,验证数据集将在每次使用后重置(这样您就可以在不同周期中始终根据相同的样本进行评估)。
通过 Dataset 对象进行训练时,不支持参数 validation_split(从训练数据生成预留集),因为此功能需要为数据集样本编制索引的能力,而 Dataset API 通常无法做到这一点。
支持的其他输入格式
除 NumPy 数组、Eager 张量和 TensorFlow Datasets 外,还可以使用 Pandas 数据帧或通过产生批量数据和标签的 Python 生成器训练 Keras 模型。
特别是,keras.utils.Sequence 类提供了一个简单的接口来构建可感知多处理并且可以打乱顺序的 Python 数据生成器。
通常,我们建议您使用:
NumPy 输入数据,前提是您的数据很小且适合装入内存
Dataset 对象,前提是您有大型数据集,且需要执行分布式训练
Sequence 对象,前提是您具有大型数据集,且需要执行很多无法在 TensorFlow 中完成的自定义 Python 端处理(例如,如果您依赖外部库进行数据加载或预处理)。
使用 keras.utils.Sequence 对象作为输入
keras.utils.Sequence 是一个实用工具,您可以将其子类化以获得具有两个重要属性的 Python 生成器:
它适用于多处理。
可以打乱它的顺序(例如,在 fit() 中传递 shuffle=True 时)。
Sequence 必须实现两个方法:
__getitem__
__len__
__getitem__ 方法应返回完整的批次。如果要在各个周期之间修改数据集,可以实现 on_epoch_end。
下面是一个简单的示例:
```python
from skimage.io import imread
from skimage.transform import resize
import numpy as np
Here, filenames is list of path to the images
and labels are the associated labels.
class CIFAR10Sequence(Sequence)
Step25: 样本权重
对于细粒度控制,或者如果您不构建分类器,则可以使用“样本权重”。
通过 NumPy 数据进行训练时:将 sample_weight 参数传递给 Model.fit()。
通过 tf.data 或任何其他类型的迭代器进行训练时:产生 (input_batch, label_batch, sample_weight_batch) 元组。
“样本权重”数组是一个由数字组成的数组,这些数字用于指定批次中每个样本在计算总损失时应当具有的权重。它通常用于不平衡的分类问题(理念是将更多权重分配给罕见类)。
当使用的权重为 1 和 0 时,此数组可用作损失函数的掩码(完全丢弃某些样本对总损失的贡献)。
Step26: 下面是一个匹配的 Dataset 示例:
Step27: 将数据传递到多输入、多输出模型
在前面的示例中,我们考虑的是具有单个输入(形状为 (764,) 的张量)和单个输出(形状为 (10,) 的预测张量)的模型。但具有多个输入或输出的模型呢?
考虑以下模型,该模型具有形状为 (32, 32, 3) 的图像输入(即 (height, width, channels))和形状为 (None, 10) 的时间序列输入(即 (timesteps, features))。我们的模型将具有根据这些输入的组合计算出的两个输出:“得分”(形状为 (1,))和在五个类上的概率分布(形状为 (5,))。
Step28: 我们来绘制这个模型,以便您可以清楚地看到我们在这里执行的操作(请注意,图中显示的形状是批次形状,而不是每个样本的形状)。
Step29: 在编译时,通过将损失函数作为列表传递,我们可以为不同的输出指定不同的损失:
Step30: 如果我们仅将单个损失函数传递给模型,则相同的损失函数将应用于每个输出(此处不合适)。
对于指标同样如此:
Step31: 由于我们已为输出层命名,我们还可以通过字典指定每个输出的损失和指标:
Step32: 如果您的输出超过 2 个,我们建议使用显式名称和字典。
可以使用 loss_weights 参数为特定于输出的不同损失赋予不同的权重(例如,在我们的示例中,我们可能希望通过为类损失赋予 2 倍重要性来向“得分”损失赋予特权):
Step33: 如果这些输出用于预测而不是用于训练,也可以选择不计算某些输出的损失:
Step34: 将数据传递给 fit() 中的多输入或多输出模型的工作方式与在编译中指定损失函数的方式类似:您可以传递 NumPy 数组的列表(1
Step35: 下面是 Dataset 的用例:与我们对 NumPy 数组执行的操作类似,Dataset 应返回一个字典元组。
Step36: 使用回调
Keras 中的回调是在训练过程中的不同时间点(在某个周期开始时、在批次结束时、在某个周期结束时等)调用的对象。它们可用于实现特定行为,例如:
在训练期间的不同时间点进行验证(除了内置的按周期验证外)
定期或在超过一定准确率阈值时为模型设置检查点
当训练似乎停滞不前时,更改模型的学习率
当训练似乎停滞不前时,对顶层进行微调
在训练结束或超出特定性能阈值时发送电子邮件或即时消息通知
等等
回调可以作为列表传递给您对 fit() 的调用:
Step37: 提供多个内置回调
Keras 中已经提供多个内置回调,例如:
ModelCheckpoint:定期保存模型。
EarlyStopping:当训练不再改善验证指标时,停止训练。
TensorBoard:定期编写可在 TensorBoard 中可视化的模型日志(更多详细信息,请参阅“可视化”部分)。
CSVLogger:将损失和指标数据流式传输到 CSV 文件。
等等
有关完整列表,请参阅回调文档。
编写您自己的回调
您可以通过扩展基类 keras.callbacks.Callback 来创建自定义回调。回调可以通过类属性 self.model 访问其关联的模型。
确保阅读编写自定义回调的完整指南。
下面是一个简单的示例,在训练期间保存每个批次的损失值列表:
Step38: 为模型设置检查点
根据相对较大的数据集训练模型时,经常保存模型的检查点至关重要。
实现此目标的最简单方式是使用 ModelCheckpoint 回调:
Step39: ModelCheckpoint 回调可用于实现容错:在训练随机中断的情况下,从模型的最后保存状态重新开始训练的能力。下面是一个基本示例:
Step40: 您还可以编写自己的回调来保存和恢复模型。
有关序列化和保存的完整指南,请参阅保存和序列化模型指南。
使用学习率时间表
训练深度学习模型的常见模式是随着训练的进行逐渐减少学习。这通常称为“学习率衰减”。
学习衰减时间表可以是静态的(根据当前周期或当前批次索引预先确定),也可以是动态的(响应模型的当前行为,尤其是验证损失)。
将时间表传递给优化器
通过将时间表对象作为优化器中的 learning_rate 参数传递,您可以轻松使用静态学习率衰减时间表:
Step41: 提供了几个内置时间表:ExponentialDecay、PiecewiseConstantDecay、PolynomialDecay 和 InverseTimeDecay。
使用回调实现动态学习率时间表
由于优化器无法访问验证指标,因此无法使用这些时间表对象来实现动态学习率时间表(例如,当验证损失不再改善时降低学习率)。
但是,回调确实可以访问所有指标,包括验证指标!因此,您可以通过使用可修改优化器上的当前学习率的回调来实现此模式。实际上,它甚至以 ReduceLROnPlateau 回调的形式内置。
可视化训练期间的损失和指标
在训练期间密切关注模型的最佳方式是使用 TensorBoard,这是一个基于浏览器的应用,它可以在本地运行,为您提供:
训练和评估的实时损失和指标图
(可选)层激活直方图的可视化
(可选)Embedding 层学习的嵌入向量空间的 3D 可视化
如果您已通过 pip 安装了 TensorFlow,则应当能够从命令行启动 TensorBoard:
tensorboard --logdir=/full_path_to_your_logs
使用 TensorBoard 回调
将 TensorBoard 与 Keras 模型和 fit 方法一起使用的最简单方式是 TensorBoard 回调。
在最简单的情况下,只需指定您希望回调写入日志的位置即可: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Training and evaluation with the built-in methods
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/guide/keras/train_and_evaluate"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/train_and_evaluate.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/train_and_evaluate.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/train_and_evaluate.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Setup
End of explanation
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
Explanation: Introduction
This guide covers training, evaluation, and prediction (inference) of models when using the built-in APIs for training and validation (such as Model.fit(), Model.evaluate() and Model.predict()).
If you are interested in leveraging fit() while specifying your own training step function, see the <a href="https://tensorflow.google.cn/guide/keras/customizing_what_happens_in_fit/" data-md-type="link">Customizing what happens in fit()</a> guide.
If you are interested in writing your own training and evaluation loops from scratch, see the guide Writing a training loop from scratch.
In general, whether you are using built-in loops or writing your own, model training and evaluation works strictly in the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing.
This guide doesn't cover distributed training, which is covered in our guide to multi-GPU and distributed training.
API overview: a first end-to-end example
When passing data to the built-in training loops of a model, you should either use NumPy arrays (if your data is small and fits in memory) or tf.data Dataset objects. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays in order to demonstrate how to use optimizers, losses, and metrics.
Let's consider the following model (here, we build it with the Functional API, but it could be a Sequential model or a subclassed model as well):
End of explanation
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
Explanation: Here's what the typical end-to-end workflow looks like, consisting of:
Training
Validation on a holdout set generated from the original training data
Evaluation on the test data
We'll use MNIST data for this example.
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
Explanation: We specify the training configuration (optimizer, loss, metrics):
End of explanation
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
Explanation: We call fit(), which will train the model by slicing the data into "batches" of size batch_size, and repeatedly iterating over the entire dataset for a given number of epochs.
End of explanation
history.history
Explanation: The returned history object holds a record of the loss values and metric values during training:
End of explanation
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
Explanation: We evaluate the model on the test data via evaluate():
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
Explanation: Now, let's review each piece of this workflow in detail.
The compile() method: specifying a loss, metrics, and an optimizer
To train a model with fit(), you need to specify a loss function, an optimizer, and optionally, some metrics to monitor.
You pass these to the model as arguments to the compile() method:
End of explanation
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
Explanation: The metrics argument should be a list -- your model can have any number of metrics.
If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the section "Passing data to multi-input, multi-output models".
Note that if you're satisfied with the default settings, in many cases the optimizer, loss, and metrics can be specified via string identifiers as a shortcut:
End of explanation
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
Explanation: For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide.
End of explanation
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
Explanation: Many built-in optimizers, losses, and metrics are available
In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely to be already part of the Keras API:
Optimizers:
SGD() (with or without momentum)
RMSprop()
Adam()
etc.
Losses:
MeanSquaredError()
KLDivergence()
CosineSimilarity()
etc.
Metrics:
AUC()
Precision()
Recall()
etc.
Custom losses
If you need to create a custom loss, Keras provides two ways to do so.
The first method involves creating a function that accepts inputs y_true and y_pred. The following example shows a loss function that computes the mean squared error between the real data and the predictions:
End of explanation
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
Explanation: If you need a loss function that takes in parameters besides y_true and y_pred, you can subclass the tf.keras.losses.Loss class and implement the following two methods:
__init__(self): accept parameters to pass during the call of your loss function
call(self, y_true, y_pred): use the targets (y_true) and the model predictions (y_pred) to compute the model's loss
Let's say you want to use mean squared error, but with an added term that will de-incentivize prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!).
Here's how you would do it:
End of explanation
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
Explanation: Custom metrics
If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the tf.keras.metrics.Metric class. You will need to implement 4 methods:
__init__(self), in which you will create state variables for your metric.
update_state(self, y_true, y_pred, sample_weight=None), which uses the targets y_true and the model predictions y_pred to update the state variables.
result(self), which uses the state variables to compute the final results.
reset_states(self), which reinitializes the state of the metric.
State update and results computation are kept separate (in update_state() and result(), respectively) because in some cases the results computation might be very expensive and would only be done periodically.
Here's a simple example showing how to implement a CategoricalTruePositives metric that counts how many samples were correctly classified as belonging to a given class:
End of explanation
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
Explanation: Handling losses and metrics that don't fit the standard signature
The overwhelming majority of losses and metrics can be computed from y_true and y_pred, where y_pred is an output of your model -- but not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output.
In such cases, you can call self.add_loss(loss_value) from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to compile()). Here's a simple example that adds activity regularization (note that activity regularization is built in to all Keras layers -- this layer is just for the sake of providing a concrete example):
End of explanation
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
Explanation: You can do the same for logging metric values, using add_metric():
End of explanation
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
Explanation: In the Functional API, you can also call model.add_loss(loss_tensor) or model.add_metric(metric_tensor, name, aggregation).
Here's a simple example:
End of explanation
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
Explanation: Note that when you pass losses via add_loss(), it becomes possible to call compile() without a loss function, since the model already has a loss to minimize.
Consider the following LogisticEndpoint layer: it takes targets & logits as inputs, and it tracks a crossentropy loss via add_loss(). It also tracks classification accuracy via add_metric().
End of explanation
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
Explanation: You can use it in a model with two inputs (input data & targets), compiled without a loss argument, like this:
End of explanation
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
Explanation: For more information about training multi-input models, see the section Passing data to multi-input, multi-output models.
Automatically setting apart a validation holdout set
In the first end-to-end example you saw, we used the validation_data argument to pass a tuple of NumPy arrays (x_val, y_val) to the model for evaluating a validation loss and validation metrics at the end of each epoch.
Here's another option: the argument validation_split allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, validation_split=0.2 means "use 20% of the data for validation", and validation_split=0.6 means "use 60% of the data for validation".
The way the validation is computed is by taking the last x% of samples of the arrays received by the fit() call, before any shuffling.
Note that you can only use validation_split when training with NumPy data.
End of explanation
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
Explanation: Training & evaluation from tf.data Datasets
In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the validation_data and validation_split arguments in fit() when your data is passed as NumPy arrays.
Let's now take a look at the case where your data comes in the form of a tf.data.Dataset object.
The tf.data API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable.
For a complete guide about creating Datasets, see the tf.data documentation.
You can pass a Dataset instance directly to the methods fit(), evaluate(), and predict():
End of explanation
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use the 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
Explanation: Note that the Dataset is reset at the end of each epoch, so it can be reused in the next epoch.
If you want to run training only on a specific number of batches from this Dataset, you can pass the steps_per_epoch argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.
If you do this, the dataset is not reset at the end of each epoch; instead we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset).
End of explanation
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
Explanation: Using a validation dataset
You can pass a Dataset instance as the validation_data argument in fit():
End of explanation
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
Explanation: At the end of each epoch, the model will iterate over the validation dataset and compute the validation loss and validation metrics.
If you want to run validation only on a specific number of batches from this dataset, you can pass the validation_steps argument, which specifies how many validation steps the model should run with the validation dataset before interrupting validation and moving on to the next epoch:
End of explanation
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
Explanation: Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).
The argument validation_split (generating a holdout set from the training data) is not supported when training from Dataset objects, since this feature requires the ability to index the samples of the dataset, which is not possible in general with the Dataset API.
Other input formats supported
Besides NumPy arrays, eager tensors, and TensorFlow Datasets, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches of data & labels.
In particular, the keras.utils.Sequence class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled.
In general, we recommend that you use:
NumPy input data if your data is small and fits in memory
Dataset objects if you have large datasets and you need to do distributed training
Sequence objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing).
Using a keras.utils.Sequence object as input
keras.utils.Sequence is a utility that you can subclass to obtain a Python generator with two important properties:
It works well with multiprocessing.
It can be shuffled (e.g. when passing shuffle=True in fit()).
A Sequence must implement two methods:
__getitem__
__len__
The method __getitem__ should return a complete batch. If you want to modify your dataset between epochs, you may implement on_epoch_end.
Here's a quick example:
```python
from skimage.io import imread
from skimage.transform import resize
import numpy as np
# Here, `filenames` is a list of paths to the images
# and `labels` are the associated labels.
class CIFAR10Sequence(Sequence):
def init(self, filenames, labels, batch_size):
self.filenames, self.labels = filenames, labels
self.batch_size = batch_size
def __len__(self):
return int(np.ceil(len(self.filenames) / float(self.batch_size)))
def __getitem__(self, idx):
batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
return np.array([
resize(imread(filename), (200, 200))
for filename in batch_x]), np.array(batch_y)
sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
```
Using sample weighting and class weighting
With the default settings the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency:
Class weights
Sample weights
Class weights
This is set by passing a dictionary to the class_weight argument of Model.fit(). This dictionary maps class indices to the weight that should be used for samples belonging to this class.
This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class.
For instance, if class "0" is half as represented as class "1" in your data, you could use Model.fit(..., class_weight={0: 1., 1: 0.5}).
Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class #5 (which is the digit "5" in the MNIST dataset).
End of explanation
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
Explanation: Sample weights
For fine-grained control, or if you are not building a classifier, you can use "sample weights".
When training from NumPy data: pass the sample_weight argument to Model.fit().
When training from tf.data or any other sort of iterator: yield (input_batch, label_batch, sample_weight_batch) tuples.
A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes).
When the weights used are ones and zeros, the array can be used as a mask for the loss function (entirely discarding the contribution of certain samples to the total loss).
End of explanation
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
Explanation: Here's a matching Dataset example:
End of explanation
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
Explanation: Passing data to multi-input, multi-output models
In the previous examples, we were considering a model with a single input (a tensor of shape (784,)) and a single output (a prediction tensor of shape (10,)). But what about models that have multiple inputs or outputs?
Consider the following model, which has an image input of shape (32, 32, 3) (that's (height, width, channels)) and a time series input of shape (None, 10) (that's (timesteps, features)). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape (1,)) and a probability distribution over five classes (of shape (5,)).
End of explanation
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
Explanation: Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes).
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
Explanation: At compilation time, we can specify different losses for different outputs by passing the loss functions as a list:
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
Explanation: If we only passed a single loss function to the model, the same loss function would be applied to every output (which is not appropriate here).
Likewise for metrics:
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
Explanation: Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
Explanation: We recommend the use of explicit names and dicts if you have more than 2 outputs.
It's possible to give different weights to different output-specific losses (for instance, we might wish to privilege the "score" loss in our example by giving it 2x the importance of the class loss), using the loss_weights argument:
End of explanation
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
Explanation: You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
End of explanation
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
Explanation: Passing data to a multi-input or multi-output model in fit() works in a similar way as specifying a loss function in compile: you can pass lists of NumPy arrays (with 1:1 mapping to the outputs that received a loss function) or dicts mapping output names to NumPy arrays.
End of explanation
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
Explanation: Here's the Dataset use case: similarly to what we did for NumPy arrays, the Dataset should return a tuple of dicts.
End of explanation
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
Explanation: Using callbacks
Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They can be used to implement certain behaviors, such as:
Doing validation at different points during training (beyond the built-in per-epoch validation)
Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
Changing the learning rate of the model when training seems to be plateauing
Doing fine-tuning of the top layers when training seems to be plateauing
Sending email or instant-message notifications when training ends or when a certain performance threshold is exceeded
Etc.
Callbacks can be passed as a list to your call to fit():
End of explanation
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
Explanation: Many built-in callbacks are available
There are many built-in callbacks already available in Keras, such as:
ModelCheckpoint: Periodically save the model.
EarlyStopping: Stop training when training is no longer improving the validation metrics.
TensorBoard: Periodically write model logs that can be visualized in TensorBoard (more details in the section "Visualization").
CSVLogger: Streams loss and metrics data to a CSV file.
etc.
See the callbacks documentation for the complete list.
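For instance, several of these built-in callbacks can simply be listed together and handed to fit(). A minimal sketch is shown below; the CSV file name and log directory are placeholders, not paths used elsewhere in this guide:
```python
callbacks = [
    # Stream per-epoch loss/metric values to a CSV file (placeholder path)
    keras.callbacks.CSVLogger("training_log.csv"),
    # Write logs that TensorBoard can visualize (placeholder directory)
    keras.callbacks.TensorBoard(log_dir="./logs"),
]
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, epochs=2, callbacks=callbacks)
```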
Writing your own callback
You can create a custom callback by extending the base class keras.callbacks.Callback. A callback has access to its associated model through the class property self.model.
Make sure to read the complete guide to writing custom callbacks.
Here's a simple example saving a list of per-batch loss values during training:
End of explanation
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
Explanation: Checkpointing models
When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals.
The easiest way to achieve this is with the ModelCheckpoint callback:
End of explanation
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
Explanation: The ModelCheckpoint callback can be used to implement fault-tolerance: the ability to restart training from the last saved state of the model in case training gets randomly interrupted. Here's a basic example:
End of explanation
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
Explanation: You could also write your own callback for saving and restoring models.
For a complete guide on serialization and saving, see the guide to saving and serializing Models.
Using learning rate schedules
A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay".
The learning decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss).
Passing a schedule to an optimizer
You can easily use a static learning rate decay schedule by passing a schedule object as the learning_rate argument in your optimizer:
End of explanation
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq="epoch",
) # How often to write logs (default: once per epoch)
Explanation: Several built-in schedules are available: ExponentialDecay, PiecewiseConstantDecay, PolynomialDecay, and InverseTimeDecay.
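Any of them is passed to the optimizer the same way as the ExponentialDecay object above. As a quick sketch with PiecewiseConstantDecay (the step boundaries and learning rate values below are purely illustrative):
```python
lr_schedule = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[100000, 110000],  # step counts at which the rate changes (illustrative)
    values=[1.0, 0.5, 0.1],       # learning rate used in each interval (illustrative)
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
```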
Using callbacks to implement a dynamic learning rate schedule
A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects, since the optimizer does not have access to validation metrics.
However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built in as the ReduceLROnPlateau callback.
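A minimal sketch of that built-in callback is shown below; the monitor, factor and patience values are illustrative choices rather than recommendations:
```python
callbacks = [
    keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss",  # metric to watch
        factor=0.5,          # multiply the learning rate by this factor on a plateau
        patience=2,          # epochs with no improvement before reducing
    )
]
model = get_compiled_model()
model.fit(x_train, y_train, epochs=20, batch_size=64,
          validation_split=0.2, callbacks=callbacks)
```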
Visualizing loss and metrics during training
The best way to keep an eye on your model during training is to use TensorBoard -- a browser-based application that you can run locally and that provides you with:
Live plots of the loss and metrics for training and evaluation
(optionally) Visualizations of the histograms of your layer activations
(optionally) 3D visualizations of the embedding spaces learned by your Embedding layers
If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:
tensorboard --logdir=/full_path_to_your_logs
Using the TensorBoard callback
The easiest way to use TensorBoard with a Keras model and the fit method is the TensorBoard callback.
In the simplest case, just specify where you want the callback to write logs, and you're good to go:
End of explanation |
1,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
kafkaReceiveDataPy
This notebook receives data from Kafka on the topic 'test', and stores it in the 'sent_received' table of the 'test_time' keyspace in Cassandra (created by cassandra_init.script in startup_script.sh).
```
CREATE KEYSPACE test_time WITH replication = {'class'
Step1: Load modules and start SparkContext
Note that SparkContext must be started to effectively load the package dependencies. Two cores are used, since one is needed for running the Kafka receiver.
Step2: SaveToCassandra function
Takes a list of tuples (rows) and saves them to Cassandra
Step3: Create streaming task
Receive data from Kafka 'test' topic every five seconds
Get stream content, and add receiving time to each message
Save each RDD in the DStream to Cassandra. Also print on screen
Step4: Start streaming
Step5: Stop streaming
Step6: Get Cassandra table content
Step7: Get Cassandra table content using SQL | Python Code:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--conf spark.ui.port=4040 --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.0.0,com.datastax.spark:spark-cassandra-connector_2.11:2.0.0-M3 pyspark-shell'
import time
Explanation: kafkaReceiveDataPy
This notebook receives data from Kafka on the topic 'test', and stores it in the 'sent_received' table of the 'test_time' keyspace in Cassandra (created by cassandra_init.script in startup_script.sh).
```
CREATE KEYSPACE test_time WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 1};
CREATE TABLE test_time.sent_received(
time_sent TEXT,
time_received TEXT,
PRIMARY KEY (time_sent)
);
```
A message that gives the current time is received every second.
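The producer side is not part of this notebook. A minimal sketch of such a producer, assuming the kafka-python package and a broker listening on localhost:9092, might look like this:
```python
# Hypothetical companion producer: sends the current time to the 'test' topic once per second.
import time
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
while True:
    producer.send('test', time.strftime("%Y-%m-%d %H:%M:%S").encode('utf-8'))
    producer.flush()
    time.sleep(1)
```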
Add dependencies
End of explanation
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext, Row
conf = SparkConf() \
.setAppName("Streaming test") \
.setMaster("local[2]") \
.set("spark.cassandra.connection.host", "127.0.0.1")
sc = SparkContext(conf=conf)
sqlContext=SQLContext(sc)
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
Explanation: Load modules and start SparkContext
Note that SparkContext must be started to effectively load the package dependencies. Two cores are used, since one is needed for running the Kafka receiver.
End of explanation
def saveToCassandra(rows):
if not rows.isEmpty():
sqlContext.createDataFrame(rows).write\
.format("org.apache.spark.sql.cassandra")\
.mode('append')\
.options(table="sent_received", keyspace="test_time")\
.save()
Explanation: SaveToCassandra function
Takes a list of tuples (rows) and saves them to Cassandra
End of explanation
ssc = StreamingContext(sc, 5)
kvs = KafkaUtils.createStream(ssc, "127.0.0.1:2181", "spark-streaming-consumer", {'test': 1})
data = kvs.map(lambda x: x[1])
rows= data.map(lambda x:Row(time_sent=x,time_received=time.strftime("%Y-%m-%d %H:%M:%S")))
rows.foreachRDD(saveToCassandra)
rows.pprint()
Explanation: Create streaming task
Receive data from Kafka 'test' topic every five seconds
Get stream content, and add receiving time to each message
Save each RDD in the DStream to Cassandra. Also print on screen
End of explanation
ssc.start()
Explanation: Start streaming
End of explanation
ssc.stop(stopSparkContext=False,stopGraceFully=True)
Explanation: Stop streaming
End of explanation
data=sqlContext.read\
.format("org.apache.spark.sql.cassandra")\
.options(table="sent_received", keyspace="test_time")\
.load()
data.show()
Explanation: Get Cassandra table content
End of explanation
data.registerTempTable("sent_received");
data.printSchema()
data=sqlContext.sql("select * from sent_received")
data.show()
Explanation: Get Cassandra table content using SQL
End of explanation |
1,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Q-learning is a reinforcement learning paradigm in which we learn a function
Step4: Let's run this on a dummy problem - a 5 state linear grid world with the rewards
 Step5: Does it successfully learn the task?
Step6: Let's look at the Q function.
Step7: Interestingly, it hasn't fully learned the task - it should learn that moving left is always a bad idea, while it's only learned this for the state closest to the left-hand side terminal state. It doesn't need to know such things - it already has a good enough policy, so there's no need to learn about hypotheticals. This is a key difference between Q-learning and dynamic programming methods, which spend a lot of energy learning about situations which may never occur.
OpenAI gym task - pole balancing
Let's try to apply this method to the cart pole balancing problem in the OpenAI gym. For reference, here's a (slightly adapted version of) the starter code at https
Step9: The random policy doesn't work - let's try to first discretize the 4d state space into positive & negative hyperplanes - this will give a 16 D state space. Let's then apply Q-learning to this space.
Step11: Display one example rollout. See here for how to do such a thing | Python Code:
%matplotlib inline
# Standard imports.
import numpy as np
import pylab
import scipy
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Resize plots.
pylab.rcParams['figure.figsize'] = 12, 7
def qlearning(reward_transition_function,
policy_function,
gamma,
alpha,
nstates,
nactions,
niter = 1000,
state = None,
Q = None,
):
Implement a generic Q-learning algorithm.
Q(state, action) is an nstates, nactions matrix.
From Figure 6.12, Sutton & Barto.
Args:
reward_transition_function: a reward function taking arguments (action, state, step),
returning (reward, state_prime, is_terminal).
policy_function: a policy function taking arguments (Q, state), returning policy a.
gamma: a discount rate.
alpha: a learning rate parameter.
nstates: the number of states.
nactions: the number of actions.
niter: the maximum number of iterations to perform.
state: the initial state.
Q: an initial estimate of Q.
Returns:
(Q, actions, states, rewards)
if Q is None:
Q = np.zeros((nstates, nactions))
if state is None:
state = int(np.random.rand() * nstates)
actions = np.zeros(niter)
states = np.zeros(niter)
rewards = np.zeros(niter)
for i in range(niter):
action = policy_function(Q, state)
reward, state_prime, is_terminal = reward_transition_function(action, state, i)
actions[i] = action
rewards[i] = reward
states[i] = state_prime
        Q[state, action] = Q[state, action] + alpha * (reward + gamma * np.max(Q[state_prime, :]) - Q[state, action])
state = state_prime
if is_terminal:
# terminal state
break
return (Q, actions, states, rewards)
def test_qlearning():
Unit test.
reward_transition_function = lambda x, y, z: (0, 0, False)
policy_function = lambda x, y: 0
gamma = 1
nstates = 2
nactions = 3
niter = 10
state = 1
alpha = 0.1
Q, actions, states, rewards = qlearning(reward_transition_function,
policy_function,
gamma,
alpha,
nstates,
nactions,
niter,
state
)
assert Q.shape[0] == nstates
assert Q.shape[1] == nactions
assert Q.sum() == 0
assert actions.size == niter
assert states.size == niter
assert rewards.size == niter
assert np.all(actions == 0)
assert np.all(states == 0)
assert np.all(rewards == 0)
test_qlearning()
Explanation: Q-learning is a reinforcement learning paradigm in which we learn a function:
$$Q^\pi(s, a)$$
Where $Q$ is the quality of an action $a$ given that we are in a (known) state $s$. The quality of an action is the expected value of the reward from continuing the policy $\pi$ after the next step. Consider the relationship between the $Q(s, a)$ function and the value function $V(s)$:
$$Q(s, \pi(s)) = V^\pi(s)$$
If we mess with the left hand side policy $\pi$ for just the very next move, we might be able to find a better policy. Q learning leverages this simple idea: focus on just the next move, then let the chips fall where they may. Iterate.
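Concretely, after taking action $a$ in state $s$ and observing reward $r$ and successor state $s'$, the tabular update performed inside the loop below, with learning rate $\alpha$ and discount $\gamma$, is
$$Q(s, a) \leftarrow Q(s, a) + \alpha \left( r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right)$$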
Let's implement this.
End of explanation
def epsilon_greedy_policy(Q, state, epsilon = .1):
Epsilon-greedy policy.
if np.random.rand() > epsilon:
return np.argmax(Q[state, :])
else:
return int(np.random.rand() * Q.shape[1])
nstates = 5
def left_right_world(action, state, step):
# Implement a 5-state world in which there's a terminal reward -1,
# a state to the right with terminal reward 1, and where you start in the middle.
reward = 0
is_terminal = False
if action:
# Go the right.
next_state = state + 1
if next_state == nstates - 1:
# terminal reward!
reward = 1
is_terminal = True
else:
# Go the left.
next_state = state - 1
if next_state == 0:
# terminal reward!
reward = -1
is_terminal = True
return (reward, next_state, is_terminal)
# Do a number of episodes.
nepisodes = 100
Q = np.zeros((nstates, 2))
episodic_rewards = np.zeros(nepisodes)
for i in range(nepisodes):
Q, actions, states, rewards = qlearning(left_right_world,
epsilon_greedy_policy,
1.0,
0.1,
nstates,
2,
niter = 50,
state = int(nstates / 2),
Q = Q)
episodic_rewards[i] = rewards.sum()
Explanation: Let's run this on a dummy problem - a 5 state linear grid world with the rewards:
[-1 0 0 0 1]
Where we always start at the middle and must choose the left-right policy. We know the optimal strategy is to go right, of course.
End of explanation
plt.plot(episodic_rewards)
plt.xlabel('Episode')
plt.ylabel('Terminal reward')
# The actions should be [1, 1], i.e. go right, go right
actions[0:2]
Explanation: Does it successfully learn the task?
End of explanation
Q
Explanation: Let's look at the Q function.
End of explanation
import gym
env = gym.make('CartPole-v0')
nepisodes = 5
episodic_rewards = np.zeros((nepisodes))
for i_episode in range(nepisodes):
observation = env.reset()
cum_reward = 0
for t in range(100):
env.render()
# print(observation)
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
cum_reward += reward
if done:
print("Episode finished after {} timesteps".format(t+1))
break
episodic_rewards[i_episode] = cum_reward
env.render(close=True)
Explanation: Interestingly, it hasn't fully learned the task - it should learn that moving left is always a bad idea, while it's only learned this for the state closest to the left-hand side terminal state. It doesn't need to know such things - it already has a good enough policy, so there's no need to learn about hypotheticals. This is a key difference between Q-learning and dynamic programming methods, which spend a lot of energy learning about situations which may never occur.
OpenAI gym task - pole balancing
Let's try to apply this method to the cart pole balancing problem in the OpenAI gym. For reference, here's a (slightly adapted version of) the starter code at https://gym.openai.com/docs that uses a random policy:
End of explanation
import functools
def observation_to_state(observation):
Project to 16 dimensional latent space
observation = (observation > 0) * 1
    # Weight the four sign bits as a binary code so that each of the 2**4 = 16
    # sign patterns maps to a unique index in the 16-row Q table.
    return np.sum(observation * 2 ** np.arange(4))
nepisodes = 1000
episodic_rewards = np.zeros((nepisodes))
Q = np.zeros((16, env.action_space.n))
epsilon = .5
frames = []
for i_episode in range(nepisodes):
observation = env.reset()
niter = 0
frames = []
def reward_transition_function(action, _, step):
observation, reward, done, _ = env.step(action)
state_prime = observation_to_state(observation)
if done:
reward = step - 195.0
if i_episode == nepisodes - 1:
frames.append(
env.render(mode = 'rgb_array')
)
return (reward, state_prime, done)
state = observation_to_state(observation)
Q, actions, states, rewards = qlearning(reward_transition_function,
functools.partial(epsilon_greedy_policy, epsilon = epsilon),
1,
.4,
16,
env.action_space.n,
niter = 400,
state = state,
Q = Q
)
# Decrease the amount of exploration when we get about halfway through the task
if (rewards == 1).sum() > 100:
epsilon *= .9
episodic_rewards[i_episode] = (rewards == 1).sum()
plt.plot(episodic_rewards)
Explanation: The random policy doesn't work - let's try to first discretize the 4d state space into positive & negative hyperplanes - this will give a 16 D state space. Let's then apply Q-learning to this space.
End of explanation
%matplotlib inline
from JSAnimation.IPython_display import display_animation
from matplotlib import animation
import matplotlib.pyplot as plt
from IPython.display import display
import gym
def display_frames_as_gif(frames):
Displays a list of frames as a gif, with controls
plt.figure(figsize=(frames[0].shape[1] / 72.0, frames[0].shape[0] / 72.0), dpi = 72)
patch = plt.imshow(frames[0])
plt.axis('off')
def animate(i):
patch.set_data(frames[i])
anim = animation.FuncAnimation(plt.gcf(), animate, frames = len(frames), interval=50)
display(display_animation(anim, default_mode='loop'))
display_frames_as_gif(frames)
Explanation: Display one example rollout. See here for how to do such a thing:
https://github.com/patrickmineault/xcorr-notebooks/blob/master/Render%20OpenAI%20gym%20as%20GIF.ipynb
End of explanation |
1,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CasADi demo
What is CasADi?
A tool for quick & efficient implementation of algorithms for dynamic optimization
Open source, LGPL-licensed, <a href="http
Step1: Note 1
Step2: Functions of SX graphs
Sort graph into algorithm
Step3: Note 2
Step4: Matrices of scalar expressions
Step5: Rule 1
Step6: Rule 1
Step7: Automatic differentiation (AD)
Consider an ode
Step8: Performing forward sweeps gives the columns of J
Step9: Performing adjoint sweeps gives the rows of J
Step10: Often, you can do better than slicing with unit vectors
Note 3
Step11: Construct an integrating block $x_{k+1} = \Phi(f;\Delta t;x_k,u_k)$
Step12: Rule 2
Step13: What if you don't want to expand into scalar operations? ( avoid $O(n^3)$ storage)
Step14: What if you cannot expand into matrix operations? ( numerical algorithm )
Step15: Functions of MX graphs
Step16: This shows how an integrator-call can be embedded in matrix graph.
More possibilities
Step17: X is a symbolic matrix primitive, but with fancier indexing
Step18: Demo
Step19: $ x_{k+1} - \Phi(x_k,u_k) = 0 , \quad \quad k = 0,1,\ldots, (N-1)$
Step20: Block structure in the constraint Jacobian
Step21: Recall
\begin{equation}
\begin{array}{cl}
\underset{X}{\text{minimize}} & F(X,P) \\
\text{subject to}
& \text{lbx} \le X \le \text{ubx} \\
& \text{lbg} \le G(X,P) \le \text{ubg} \\
\end{array}
\end{equation}
Step22: Wrapping up
Showcase | Python Code:
from pylab import *
from casadi import *
from casadi.tools import * # for dotdraw
%matplotlib inline
x = SX.sym("x") # scalar symbolic primitives
y = SX.sym("y")
z = x*sin(x+y) # common mathematical operators
print z
dotdraw(z,direction="BT")
J = jacobian(z,x)
print J
dotdraw(J,direction="BT")
Explanation: CasADi demo
What is CasADi?
A tool for quick & efficient implementation of algorithms for dynamic optimization
Open source, LGPL-licensed, <a href="http://casadi.org">casadi.org</a>
C++ / C++11
Interfaces to Python, Haskell, (Matlab?)
Numerical backends: <a href="https://projects.coin-or.org/Ipopt">IPOPT</a>, <a href="http://computation.llnl.gov/casc/sundials/main.html">Sundials</a>, ...
Developers in the group of Moritz Diehl:
Joel Andersson
Joris Gillis
Greg Horn
Outline of demo
Scalar expression (SX) graphs
Functions of SX graphs
Matrices of scalar expressions
Automatic differentiation (AD)
Integrators
Matrix expression (MX) graphs
Functions of MX graphs
Solving an optimal control problem
Scalar expression (SX) graphs
End of explanation
print x*y/x-y
H = hessian(z,x)
print H
Explanation: Note 1: subexpressions are shared.
Graph $\leftrightarrow$ Tree
Different from Maple, Matlab symbolic, sympy, ...
A (very) little bit of Computer Algebra
End of explanation
f = Function("f",[x,y],[z])
print f
Explanation: Functions of SX graphs
Sort graph into algorithm
End of explanation
print f(1.2,3.4)
print f(1.2,x+y)
f.generate("f.c")
print file("f.c").read()
Explanation: Note 2: re-use of tape variables: live-variables
End of explanation
A = SX.sym("A",3,3)
B = SX.sym("B",3)
print A
print solve(A,B)
print trace(A) # Trace
print mtimes(A,B) # Matrix multiplication
print norm_fro(A) # Frobenius norm
print A[2,:] # Slicing
Explanation: Matrices of scalar expressions
End of explanation
print A.shape, z.shape
I = SX.eye(3)
print I
Ak = kron(I,A)
print Ak
Explanation: Rule 1: Everything is a matrix
End of explanation
Ak.sparsity().spy()
A.sparsity().spy()
z.sparsity().spy()
Explanation: Rule 1: Everything is a sparse matrix
End of explanation
t = SX.sym("t") # time
u = SX.sym("u") # control
p = SX.sym("p")
q = SX.sym("q")
c = SX.sym("c")
x = vertcat(p,q,c) # state
ode = vertcat((1 - q**2)*p - q + u, p, p**2+q**2+u**2)
print ode, ode.shape
J = jacobian(ode,x)
print J
f = Function("f",[t,u,x],[ode])
ffwd = f.forward(1)
fadj = f.reverse(1)
# side-by-side printing
print '{:*^24} || {:*^28} || {:*^28}'.format("f","ffwd","fadj")
def short(f):
import re
return re.sub(r", a\.k\.a\. \"(\w+)\"",r". \1",str(f).replace(", No description available","").replace("Input ","").replace("Output ",""))
for l in zip(short(f).split("\n"),short(ffwd).split("\n"),short(fadj).split("\n")):
print '{:<24} || {:<28} || {:<28}'.format(*l)
Explanation: Automatic differentiation (AD)
Consider an ode:
\begin{equation}
\dot{p} = (1 - q^2)p-q+u
\end{equation}
\begin{equation}
\dot{q} = p
\end{equation}
\begin{equation}
\dot{c} = p^2+q^2+u^2
\end{equation}
End of explanation
print I
for i in range(3):
print ffwd(t,u,x, ode, 0,0,I[:,i])
print J
Explanation: Performing forward sweeps gives the columns of J
End of explanation
for i in range(3):
print fadj(t,u,x, ode, I[:,i])[2]
Explanation: Performing adjoint sweeps gives the rows of J
End of explanation
f = {'x':x,'t':t,'p':u,'ode':ode}
Explanation: Often, you can do better than slicing with unit vectors
Note 3: CasADi does graph coloring for efficient sparse jacobians
Integrators
$\dot{x}=f(x,u,t)$ with $x = [p,q,c]^T$
End of explanation
tf = 10.0
N = 20
dt = tf/N
Phi = integrator("Phi","cvodes",f,{"tf":dt})
x0 = DM([0,1,0])
print Phi(x0=x0)
x = x0
xs = [x]
for i in range(N):
x = Phi(x0=x)["xf"]
xs.append(x)
plot(horzcat(*xs).T)
legend(["p","q","c"])
Explanation: Construct an integrating block $x_{k+1} = \Phi(f;\Delta t;x_k,u_k)$
End of explanation
n = 3
A = SX.sym("A",n,n)
B = SX.sym("B",n,n)
C = mtimes(A,B)
print C
dotdraw(C,direction='BT')
Explanation: Rule 2: Everything is a Function (see http://docs.casadi.org)
Matrix expression (MX) graphs
Note 4: this is what makes CasADi stand out among AD tools
Recall
End of explanation
A = MX.sym("A",n,n)
B = MX.sym("B",n,n)
C = mtimes(A,B)
print C
dotdraw(C,direction='BT')
Explanation: What if you don't want to expand into scalar operations? ( avoid $O(n^3)$ storage)
End of explanation
C = solve(A,B)
print C
dotdraw(C,direction='BT')
X0 = MX.sym("x",3)
XF = Phi(x0=X0)["xf"]
print XF
expr = sin(XF)+X0
dotdraw(expr,direction='BT')
Explanation: What if you cannot expand into matrix operations? ( numerical algorithm )
End of explanation
F = Function("F",[X0],[ expr ])
print F
print F(x0)
J = F.jacobian()
print J(x0)
Explanation: Functions of MX graphs
End of explanation
X = struct_symMX([
(
entry("x", repeat=N+1, struct=struct(["p","q","c"]) ),
entry("u", repeat=N)
)
])
Explanation: This shows how an integrator call can be embedded in a matrix graph.
More possibilities: external compiled library, a call to Matlab/Scipy
Solving an optimal control problem
\begin{equation}
\begin{array}{cl}
\underset{p(.),q(.),u(.)}{\text{minimize}} & \displaystyle \int_{0}^{T}{ p(t)^2 + q(t)^2 + u(t)^2 dt} \\
\text{subject to}
& \dot{p} = (1 - q^2)p-q+u \\
& \dot{q} = p \\
& p(0) = 0, q(0) = 1 \\
&-1 \le u(t) \le 1
\end{array}
\end{equation}
Remember, $\dot{x}=f(x,u,t)$ with $x = [p,q,c]^T$
\begin{equation}
\begin{array}{cl}
\underset{x(.),u(.)}{\text{minimize}} & c(T) \\
\text{subject to}
& \dot{x} = f(x,u) \\
& p(0) = 0, q(0) = 1, c(0)= 0 \\
&-1 \le u(t) \le 1
\end{array}
\end{equation}
Discretization with multiple shooting
\begin{equation}
\begin{array}{cl}
\underset{x_{\bullet},u_{\bullet}}{\text{minimize}} & c_N \\
\text{subject to}
& x_{k+1} - \Phi(x_k,u_k) = 0 , \quad \quad k = 0,1,\ldots, (N-1) \\
& p_0 = 0, q_0 = 1, c_0 = 0 \\
&-1 \le u_k \le 1 , \quad \quad k = 0,1,\ldots, (N-1)
\end{array}
\end{equation}
Cast as NLP
\begin{equation}
\begin{array}{cl}
\underset{X}{\text{minimize}} & F(X,P) \\
\text{subject to}
& \text{lbx} \le X \le \text{ubx} \\
& \text{lbg} \le G(X,P) \le \text{ubg} \\
\end{array}
\end{equation}
End of explanation
print X.shape
print (N+1)*3+N
Explanation: X is a symbolic matrix primitive, but with fancier indexing
End of explanation
Xf = Phi( x0=X["x",0],p=X["u",0] )["xf"]
print Xf
Explanation: Demo: $\Phi(x_0,u_0)$
End of explanation
g = [] # List of constraint expressions
for k in range(N):
Xf = Phi( x0=X["x",k],p=X["u",k] )["xf"]
g.append( X["x",k+1]-Xf )
obj = X["x",N,"c"] # c_N
nlp = dict(x=X, g=vertcat(*g),f=obj)
print nlp
Explanation: $ x_{k+1} - \Phi(x_k,u_k) = 0 , \quad \quad k = 0,1,\ldots, (N-1)$
End of explanation
jacG = jacobian(nlp["g"],nlp["x"])
S = jacG.sparsity()
print S.shape
DM.ones(S)[:20,:20].sparsity().spy()
Explanation: Block structure in the constraint Jacobian
End of explanation
solver = nlpsol("solver","ipopt",nlp)
lbx = X(-inf)
ubx = X(inf)
lbx["u",:] = -1; ubx["u",:] = 1 # -1 <= u(t) <= 1
lbx["x",0] = ubx["x",0] = x0 # Initial condition
sol_out = solver(
lbg = 0, # Equality constraints for shooting constraints
ubg = 0, # 0 <= g <= 0
lbx = lbx,
ubx = ubx)
print sol_out["x"]
sol = X(sol_out["x"])
plot(horzcat(*sol["x",:]).T)
step(range(N),sol["u",:])
Explanation: Recall
\begin{equation}
\begin{array}{cl}
\underset{X}{\text{minimize}} & F(X,P) \\
\text{subject to}
& \text{lbx} \le X \le \text{ubx} \\
& \text{lbg} \le G(X,P) \le \text{ubg} \\
\end{array}
\end{equation}
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('tmjIBpb43j0')
YouTubeVideo('SW6ZJzcMWAk')
Explanation: Wrapping up
Showcase: kite-power optimization by Greg Horn, using CasADi backend
End of explanation |
1,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting topographic arrowmaps of evoked data
Load evoked data and plot arrowmaps along with the topomap for selected time
points. An arrowmap is based upon the Hosaka-Cohen transformation and
represents an estimation of the current flow underneath the MEG sensors.
They are a poor man's MNE.
See [1]_ for details.
References
.. [1] D. Cohen, H. Hosaka
"Part II magnetic field produced by a current dipole",
Journal of electrocardiology, Volume 9, Number 4, pp. 409-417, 1976.
DOI
Step1: Plot magnetometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity
Step2: Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity
 Step3: Since the Vectorview 102 system performs sparse spatial sampling of the magnetic
field, data from the Vectorview (info_from) can be projected to the high-
density CTF 272 system (info_to) for visualization
Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity | Python Code:
# Authors: Sheraz Khan <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.datasets.brainstorm import bst_raw
from mne import read_evokeds
from mne.viz import plot_arrowmap
print(__doc__)
path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'
# load evoked data
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
evoked_mag = evoked.copy().pick_types(meg='mag')
evoked_grad = evoked.copy().pick_types(meg='grad')
Explanation: Plotting topographic arrowmaps of evoked data
Load evoked data and plot arrowmaps along with the topomap for selected time
points. An arrowmap is based upon the Hosaka-Cohen transformation and
represents an estimation of the current flow underneath the MEG sensors.
They are a poor man's MNE.
See [1]_ for details.
References
.. [1] D. Cohen, H. Hosaka
"Part II magnetic field produced by a current dipole",
Journal of electrocardiology, Volume 9, Number 4, pp. 409-417, 1976.
DOI: 10.1016/S0022-0736(76)80041-6
End of explanation
max_time_idx = np.abs(evoked_mag.data).mean(axis=0).argmax()
plot_arrowmap(evoked_mag.data[:, max_time_idx], evoked_mag.info)
# Since planar gradiometers takes gradients along latitude and longitude,
# they need to be projected to the flatten manifold span by magnetometer
# or radial gradiometers before taking the gradients in the 2D Cartesian
# coordinate system for visualization on the 2D topoplot. You can use the
# ``info_from`` and ``info_to`` parameters to interpolate from
# gradiometer data to magnetometer data.
Explanation: Plot magnetometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=evoked_mag.info)
Explanation: Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation
path = bst_raw.data_path()
raw_fname = path + '/MEG/bst_raw/' \
'subj001_somatosensory_20111109_01_AUX-f.ds'
raw_ctf = mne.io.read_raw_ctf(raw_fname)
raw_ctf_info = mne.pick_info(
raw_ctf.info, mne.pick_types(raw_ctf.info, meg=True, ref_meg=False))
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=raw_ctf_info, scale=2e-10)
Explanation: Since the Vectorview 102 system performs sparse spatial sampling of the magnetic
field, data from the Vectorview (info_from) can be projected to the high-
density CTF 272 system (info_to) for visualization
Plot gradiometer data as an arrowmap along with the topoplot at the time
of the maximum sensor space activity:
End of explanation |
1,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="top"></a>
<div style="width
Step1: Case Study Data
There are a number of different sites that you can utilize to access past model output analyses and even forecasts. The most robust collection is housed at the National Center for Environmental Information (NCEI, formerly NCDC) on a THREDDS server. The general website to begin your search is
https
Step2: Let's see what dimensions are in the file
Step3: Pulling Data for Calculation/Plotting
The object that we get from Siphon is netCDF-like, so we can pull data using familiar calls for all of the variables that are desired for calculations and plotting purposes.
NOTE
Step4: <button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>View Solution</button>
<div id="sol1" class="collapse">
<code><pre>
# Extract data and assign units
tmpk = gaussian_filter(data.variables['Temperature_isobaric'][0],
sigma=1.0) * units.K
hght = gaussian_filter(data.variables['Geopotential_height_isobaric'][0],
sigma=1.0) * units.meter
uwnd = gaussian_filter(data.variables['u-component_of_wind_isobaric'][0], sigma=1.0) * units('m/s')
vwnd = gaussian_filter(data.variables['v-component_of_wind_isobaric'][0], sigma=1.0) * units('m/s')
\# Extract coordinate data for plotting
lat = data.variables['lat'][
Step5: Finally, we need to calculate the spacing of the grid in distance units instead of degrees using the MetPy helper function lat_lon_grid_spacing.
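A minimal sketch of that call is below; the lon/lat names follow the coordinate arrays pulled above, and the exact signature should be checked against the MetPy documentation for the helper named above:
<code><pre>
# Sketch: grid spacing in distance units from the lon/lat coordinate arrays
dx, dy = mpcalc.lat_lon_grid_spacing(lon, lat)
</pre></code>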
Step6: Finding Pressure Level Data
A robust way to parse the data for a certain pressure level is to find the index value using the np.where function. Since the NARR pressure data ('levels') is in hPa, then we'll want to search that array for our pressure levels 850, 500, and 300 hPa.
<div class="alert alert-success">
<b>EXERCISE</b>
Step7: <button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>View Solution</button>
<div id="sol2" class="collapse">
<code><pre>
# Specify 850 hPa data
ilev850 = np.where(lev == 850)[0][0]
hght_850 = hght[ilev850]
tmpk_850 = tmpk[ilev850]
uwnd_850 = uwnd[ilev850]
vwnd_850 = vwnd[ilev850]
\# Specify 500 hPa data
ilev500 = np.where(lev == 500)[0][0]
hght_500 = hght[ilev500]
uwnd_500 = uwnd[ilev500]
vwnd_500 = vwnd[ilev500]
\# Specify 300 hPa data
ilev300 = np.where(lev == 300)[0][0]
hght_300 = hght[ilev300]
uwnd_300 = uwnd[ilev300]
vwnd_300 = vwnd[ilev300]
</pre></code>
</div>
Using MetPy to Calculate Atmospheric Dynamic Quantities
MetPy has a large and growing list of functions to calculate many different atmospheric quantities. Here we want to use some classic functions to calculate wind speed, advection, planetary vorticity, relative vorticity, and divergence.
Wind Speed
Step8: <button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>View Solution</button>
<div id="sol3" class="collapse">
<code><pre>
# Temperature Advection
tmpc_adv_850 = mpcalc.advection(tmpk_850, [uwnd_850, vwnd_850],
(dx, dy), dim_order='yx').to('degC/s')
</pre></code>
</div>
Vorticity Calculations
There are a couple of different vorticities that we are interested in for various calculations: planetary vorticity, relative vorticity, and absolute vorticity. Currently MetPy has two of the three as functions within the calc module.
Planetary Vorticity (Coriolis Parameter)
coriolis_parameter(latitude in radians)
Note
Step9: <button data-toggle="collapse" data-target="#sol4" class='btn btn-primary'>View Solution</button>
<div id="sol4" class="collapse">
<code><pre>
# Vorticity and Absolute Vorticity Calculations
\# Planetary Vorticity
f = mpcalc.coriolis_parameter(np.deg2rad(lat)).to('1/s')
\# Relative Vorticity
vor_500 = mpcalc.vorticity(uwnd_500, vwnd_500, dx, dy,
dim_order='yx')
\# Absolute Vorticity
avor_500 = vor_500 + f
</pre></code>
</div>
Vorticity Advection
We use the same MetPy advection function for our vorticity advection as we did for temperature advection; we just have to change the scalar quantity (what is being advected) and use the appropriate vector quantities for the level our scalar is from. So for vorticity advection we'll want our wind components from 500 hPa.
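A sketch of that call, mirroring the temperature-advection example above (the name avor_adv_500 is introduced here just for illustration):
<code><pre>
# Sketch: 500-hPa absolute vorticity advection
avor_adv_500 = mpcalc.advection(avor_500, [uwnd_500, vwnd_500],
                                (dx, dy), dim_order='yx')
</pre></code>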
Step10: Divergence and Stretching Vorticity
If we want to analyze another component of the vorticity tendency equation other than advection, we might want to assess the stretching vorticity term.
-(Abs. Vort.)*(Divergence)
We already have absolute vorticity calculated, so now we need to calculate the divergence at the level of interest, for which MetPy has a function:
divergence(uwnd, vwnd, dx, dy)
This function computes the horizontal divergence.
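As a sketch, the 300-hPa divergence used in the later map could be computed as follows (the call follows the signature given above; the stretching term would then be minus the product of absolute vorticity and divergence evaluated on the same level):
<code><pre>
# Sketch: 300-hPa horizontal divergence
div_300 = mpcalc.divergence(uwnd_300, vwnd_300, dx, dy)
# Stretching term (conceptually): -1 * (absolute vorticity) * (divergence) on the same level
</pre></code>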
Step11: Wind Speed, Geostrophic and Ageostrophic Wind
Wind Speed
Calculating wind speed is not a difficult calculation, but MetPy offers a function to calculate it easily keeping units so that it is easy to convert units for plotting purposes.
wind_speed(uwnd, vwnd)
Geostrophic Wind
The geostrophic wind can be computed from a given height gradient and coriolis parameter
geostrophic_wind(heights, coriolis parameter, dx, dy)
This function will return the two geostrophic wind components in a tuple. On the left hand side you'll be able to put two variables to save them off separately, if desired.
Ageostrophic Wind
Currently, there is not a function in MetPy for calculating the ageostrophic wind; however, it is again a simple arithmetic operation to get it from the total wind (which comes from our data input) and our calculated geostrophic wind from above.
Ageo Wind = Total Wind - Geo Wind
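At 300 hPa, for example, a sketch of that chain looks like the following (the notebook's later cell carries out the same calculation):
<code><pre>
ugeo_300, vgeo_300 = mpcalc.geostrophic_wind(hght_300, f, dx, dy, dim_order='yx')
uageo_300 = uwnd_300 - ugeo_300
vageo_300 = vwnd_300 - vgeo_300
</pre></code>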
Step12: Maps and Projections
Step13: 850-hPa Temperature Advection
Add one contour (Temperature in Celsius) with a dotted linestyle
Add one colorfill (Temperature Advection in C/hr)
<div class="alert alert-success">
<b>EXERCISE</b>
Step14: <button data-toggle="collapse" data-target="#sol5" class='btn btn-primary'>View Solution</button>
<div id="sol5" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1 - Temperature, dotted
cs2 = ax.contour(lon, lat, tmpk_850.to('degC'), range(-50, 50, 2),
colors='grey', linestyles='dotted', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
\# Contour 2
clev850 = np.arange(0, 4000, 30)
cs = ax.contour(lon, lat, hght_850, clev850, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
\# Filled contours - Temperature advection
contours = [-3, -2.2, -2, -1.5, -1, -0.5, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
cf = ax.contourf(lon, lat, tmpc_adv_850*3600, contours,
cmap='bwr', extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50,
extendrect=True, ticks=contours)
\# Vector
ax.barbs(lon, lat, uwnd_850.to('kts').m, vwnd_850.to('kts').m,
regrid_shape=15, transform=dataproj)
\# Titles
plt.title('850-hPa Geopotential Heights, Temperature (C), \
Temp Adv (C/h), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID
Step15: <button data-toggle="collapse" data-target="#sol6" class='btn btn-primary'>View Solution</button>
<div id="sol6" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1
clev500 = np.arange(0, 7000, 60)
cs = ax.contour(lon, lat, hght_500, clev500, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Filled contours
\# Set contour intervals for Absolute Vorticity
clevavor500 = [-4, -3, -2, -1, 0, 7, 10, 13, 16, 19,
22, 25, 28, 31, 34, 37, 40, 43, 46]
\# Set colorfill colors for absolute vorticity
\# purple negative
\# yellow to orange positive
colorsavor500 = ('#660066', '#660099', '#6600CC', '#6600FF',
'#FFFFFF', '#ffE800', '#ffD800', '#ffC800',
'#ffB800', '#ffA800', '#ff9800', '#ff8800',
'#ff7800', '#ff6800', '#ff5800', '#ff5000',
'#ff4000', '#ff3000')
cf = ax.contourf(lon, lat, avor_500 * 10**5, clevavor500,
colors=colorsavor500, transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50)
\# Vector
ax.barbs(lon, lat, uwnd_500.to('kts').m, vwnd_500.to('kts').m,
regrid_shape=15, transform=dataproj)
\# Titles
plt.title('500-hPa Geopotential Heights, Absolute Vorticity \
(1/s), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID
Step16: <button data-toggle="collapse" data-target="#sol7" class='btn btn-primary'>View Solution</button>
<div id="sol7" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1
clev300 = np.arange(0, 11000, 120)
cs2 = ax.contour(lon, lat, div_300 * 10**5, range(-10, 11, 2),
colors='grey', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Contour 2
cs = ax.contour(lon, lat, hght_300, clev300, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Filled Contours
spd300 = np.arange(50, 250, 20)
cf = ax.contourf(lon, lat, wspd_300, spd300, cmap='BuPu',
transform=dataproj, zorder=0)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50)
\# Vector of 300-hPa Ageostrophic Wind Vectors
ax.quiver(lon, lat, uageo_300.m, vageo_300.m, regrid_shape=15,
pivot='mid', transform=dataproj, zorder=10)
\# Titles
plt.title('300-hPa Geopotential Heights, Divergence (1/s),\
Wind Speed (kts), Ageostrophic Wind Vector (m/s)',
loc='left')
plt.title(f'VALID
Step17: Plotting Data for Hand Calculation
Calculating dynamic quantities with a computer is great and can allow for many different educational opportunities, but there are times when we want students to calculate those quantities by hand. So can we plot values of geopotential height, u-component of the wind, and v-component of the wind on a map? Yes! And it's not too hard to do.
Since we are using NARR data, we'll plot every third point to get a roughly 1 degree by 1 degree separation of grid points and thus an average grid spacing of 111 km (not exact, but close enough for back of the envelope calculations).
To do our plotting we'll be using the functionality of MetPy to plot station plot data, but we'll use our gridded data to plot around our points. To do this we'll have to make our 2D data into 1D (which is made easy by the ravel() method associated with our data objects).
First we'll want to set some bounds (so that we only plot what we want) and create a mask to make plotting easier.
Second we'll set up our figure with a projection and then set up our "stations" at the grid points we desire using the MetPy class StationPlot
https://unidata.github.io/MetPy/latest/api/generated/metpy.plots.StationPlot.html#metpy.plots.StationPlot
Third we'll plot our points using matplotlib's scatter() function and use our stationplot object to plot data around our "stations"
Step18: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
from datetime import datetime
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from netCDF4 import Dataset, num2date
import numpy as np
from scipy.ndimage import gaussian_filter
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
from metpy.plots import StationPlot
from metpy.units import units
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/src/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>MetPy Case Study</h1>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
This is a tutorial on building a case study map for Dynamic Meteorology courses with use of Unidata tools, specifically MetPy and Siphon. In this tutorial we will cover accessing, calculating, and plotting model output.
Let's investigate The Storm of the Century, although it would be easy to change which case you wanted (please feel free to do so).
Reanalysis Output: NARR 00 UTC 13 March 1993
Data from Reanalysis on pressure surfaces:
Geopotential Heights
Temperature
u-wind component
v-wind component
Calculations:
Vertical Vorticity
Advection of Temperature and Vorticity
Horizontal Divergence
Wind Speed
End of explanation
# Case Study Date
year = 1993
month = 3
day = 13
hour = 0
dt = datetime(year, month, day, hour)
# Read NARR Data from THREDDS server
base_url = 'https://www.ncei.noaa.gov/thredds/catalog/narr-a-files/'
# Programmatically generate the URL to the day of data we want
cat = TDSCatalog(f'{base_url}{dt:%Y%m}/{dt:%Y%m%d}/catalog.xml')
# Have Siphon find the appropriate dataset
ds = cat.datasets.filter_time_nearest(dt)
# Download data using the NetCDF Subset Service
ncss = ds.subset()
query = ncss.query().lonlat_box(north=60, south=18, east=300, west=225)
query.all_times().variables('Geopotential_height_isobaric', 'Temperature_isobaric',
'u-component_of_wind_isobaric',
'v-component_of_wind_isobaric').add_lonlat().accept('netcdf')
data = ncss.get_data(query)
# Back up in case of bad internet connection.
# Uncomment the following line to read local netCDF file of NARR data
# data = Dataset('../../data/NARR_19930313_0000.nc','r')
Explanation: Case Study Data
There are a number of different sites that you can utilize to access past model output analyses and even forecasts. The most robust collection is housed at the National Center for Environmental Information (NCEI, formerly NCDC) on a THREDDS server. The general website to begin your search is
https://www.ncdc.noaa.gov/data-access
this link contains links to many different data sources (some of which we will come back to later in this tutorial). But for now, let's investigate what model output is available
https://www.ncdc.noaa.gov/data-access/model-data/model-datasets
The gridded model output that are available
Reanalysis
* Climate Forecast System Reanalysis (CFSR)
* CFSR provides a global reanalysis (a best estimate of the observed state of the atmosphere) of past weather from January 1979 through March 2011 at a horizontal resolution of 0.5°.
* North American Regional Reanalysis (NARR)
* NARR is a regional reanalysis of North America containing temperatures, winds, moisture, soil data, and dozens of other parameters at 32km horizontal resolution.
* Reanalysis-1 / Reanalysis-2 (R1/R2)
* Reanalysis-1 / Reanalysis-2 are two global reanalyses of atmospheric data spanning 1948/1979 to present at a 2.5° horizontal resolution.
Numerical Weather Prediction
* Climate Forecast System (CFS)
* CFS provides a global reanalysis, a global reforecast of past weather, and an operational, seasonal forecast of weather out to nine months.
* Global Data Assimilation System (GDAS)
* GDAS is the set of assimilation data, both input and output, in various formats for the Global Forecast System model.
* Global Ensemble Forecast System (GEFS)
* GEFS is a global-coverage weather forecast model made up of 21 separate forecasts, or ensemble members, used to quantify the amount of uncertainty in a forecast. GEFS produces output four times a day with weather forecasts going out to 16 days.
* Global Forecast System (GFS)
* The GFS model is a coupled weather forecast model, composed of four separate models which work together to provide an accurate picture of weather conditions. GFS covers the entire globe down to a horizontal resolution of 28km.
* North American Mesoscale (NAM)
* NAM is a regional weather forecast model covering North America down to a horizontal resolution of 12km. Dozens of weather parameters are available from the NAM grids, from temperature and precipitation to lightning and turbulent kinetic energy.
* Rapid Refresh (RAP)
* RAP is a regional weather forecast model of North America, with separate sub-grids (with different horizontal resolutions) within the overall North America domain. RAP produces forecasts every hour with forecast lengths going out 18 hours. RAP replaced the Rapid Update Cycle (RUC) model on May 1, 2012.
* Navy Operational Global Atmospheric Prediction System (NOGAPS)
* NOGAPS analysis data are available in six-hourly increments on regularly spaced latitude-longitude grids at 1-degree and one-half-degree resolutions. Vertical resolution varies from 18 to 28 pressure levels, 34 sea level depths, the surface, and other various levels.
Ocean Models
* Hybrid Coordinate Ocean Model (HYCOM), Global
* The Navy implementation of HYCOM is the successor to Global NCOM. This site hosts regions covering U.S. coastal waters as well as a global surface model.
* Navy Coastal Ocean Model (NCOM), Global
* Global NCOM was run by the Naval Oceanographic Office (NAVOCEANO) as the Navy’s operational global ocean-prediction system prior to its replacement by the Global HYCOM system in 2013. This site hosts regions covering U.S., European, West Pacific, and Australian coastal waters as well as a global surface model.
* Navy Coastal Ocean Model (NCOM), Regional
* The Regional NCOM is a high-resolution version of NCOM for specific areas. NCEI serves the Americas Seas, U.S. East, and Alaska regions of NCOM.
* Naval Research Laboratory Adaptive Ecosystem Climatology (AEC)
* The Naval Research Laboratory AEC combines an ocean model with Earth observations to provide a synoptic view of the typical (climatic) state of the ocean for every day of the year. This dataset covers the Gulf of Mexico and nearby areas.
* National Centers for Environmental Prediction (NCEP) Real Time Ocean Forecast System (RTOFS)–Atlantic
* RTOFS–Atlantic is a data-assimilating nowcast-forecast system operated by NCEP. This dataset covers the Gulf of Mexico and most of the northern and central Atlantic.
Climate Prediction
* CM2 Global Coupled Climate Models (CM2.X)
* CM2.X consists of two climate models to model the changes in climate over the past century and into the 21st century.
* Coupled Model Intercomparison Project Phase 5 (CMIP5) (link is external)
* The U.N. Intergovernmental Panel on Climate Change (IPCC) coordinates global analysis of climate models under the Climate Model Intercomparison Project (CMIP). CMIP5 is in its fifth iteration. Data are available through the Program for Climate Model Diagnosis and Intercomparison (PCMDI) website.
Derived / Other Model Data
* Service Records Retention System (SRRS)
* SRRS is a store of weather observations, summaries, forecasts, warnings, and advisories generated by the National Weather Service for public use.
* NOMADS Ensemble Probability Tool
* The NOMADS Ensemble Probability Tool allows a user to query the Global Ensemble Forecast System (GEFS) to determine the probability that a set of forecast conditions will occur at a given location using all of the 21 separate GEFS ensemble members.
* National Digital Forecast Database (NDFD)
* NDFD are gridded forecasts created from weather data collected by National Weather Service field offices and processed through the National Centers for Environmental Prediction. NDFD data are available by WMO header or by date range.
* National Digital Guidance Database (NDGD)
* NDGD consists of forecasts, observations, model probabilities, climatological normals, and other digital data that complement the National Digital Forecast Database.
NARR Output
Let's investigate what specific NARR output is available to work with from NCEI.
https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/north-american-regional-reanalysis-narr
We specifically want to look for data that has "TDS" data access, since that is short for a THREDDS server data access point. There are a total of four different GFS datasets that we could potentially use.
Choosing our data source
Let's go ahead and use the NARR Analysis data to investigate the past case we identified (The Storm of the Century).
https://www.ncei.noaa.gov/thredds/catalog/narr-a-files/199303/19930313/catalog.html?dataset=narr-a-files/199303/19930313/narr-a_221_19930313_0000_000.grb
And we will use a python package called Siphon to read this data through the NetCDFSubset (NetCDFServer) link.
https://www.ncei.noaa.gov/thredds/ncss/grid/narr-a-files/199303/19930313/narr-a_221_19930313_0000_000.grb/dataset.html
End of explanation
data.dimensions
Explanation: Let's see what dimensions are in the file:
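If you also want to see which variables came back from the subset request (an optional check), the same netCDF-like object exposes them:
<code><pre>
print(list(data.variables))
</pre></code>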
End of explanation
# Extract data and assign units
tmpk = gaussian_filter(data.variables['Temperature_isobaric'][0], sigma=1.0) * units.K
hght = 0
uwnd = 0
vwnd = 0
# Extract coordinate data for plotting
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
lev = 0
Explanation: Pulling Data for Calculation/Plotting
The object that we get from Siphon is netCDF-like, so we can pull data using familiar calls for all of the variables that are desired for calculations and plotting purposes.
NOTE:
Due to the curvilinear nature of the NARR grid, there is a need to smooth the data that we import for calculation and plotting purposes. For more information about why, please see the following link: http://www.atmos.albany.edu/facstaff/rmctc/narr/
Additionally, we want to attach units to our values for use in MetPy calculations later and it will also allow for easy conversion to other units.
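As a small aside on what attaching units buys you, any quantity can then be converted cleanly:
<code><pre>
t = 300.0 * units.K
print(t.to('degC'))   # ~26.85 degree_Celsius
</pre></code>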
<div class="alert alert-success">
<b>EXERCISE</b>:
Replace the `0`'s in the template below with your code:
<ul>
<li>Use the `gaussian_filter` function to smooth the `Temperature_isobaric`, `Geopotential_height_isobaric`, `u-component_of_wind_isobaric`, and `v-component_of_wind_isobaric` variables from the netCDF object with a `sigma` value of 1.</li>
<li>Assign the units of `kelvin`, `meter`, `m/s`, and `m/s` respectively.</li>
<li>Extract the `lat`, `lon`, and `isobaric1` variables.</li>
</ul>
</div>
End of explanation
time = data.variables['time1']
print(time.units)
vtime = num2date(time[0], units=time.units)
print(vtime)
Explanation: <button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>View Solution</button>
<div id="sol1" class="collapse">
<code><pre>
# Extract data and assign units
tmpk = gaussian_filter(data.variables['Temperature_isobaric'][0],
sigma=1.0) * units.K
hght = gaussian_filter(data.variables['Geopotential_height_isobaric'][0],
sigma=1.0) * units.meter
uwnd = gaussian_filter(data.variables['u-component_of_wind_isobaric'][0], sigma=1.0) * units('m/s')
vwnd = gaussian_filter(data.variables['v-component_of_wind_isobaric'][0], sigma=1.0) * units('m/s')
\# Extract coordinate data for plotting
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
lev = data.variables['isobaric1'][:]
</pre></code>
</div>
Next we need to extract the time variable. It's not in very useful units, but the num2date function can be used to easily create regular datetime objects.
End of explanation
# Calculate dx and dy for calculations
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat)
Explanation: Finally, we need to calculate the spacing of the grid in distance units instead of degrees using the MetPy helper function lat_lon_grid_deltas.
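As an optional sanity check, the returned arrays carry length units that you can inspect or convert:
<code><pre>
print(dx.units)
print(dx[0, 0].to('km'))
</pre></code>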
End of explanation
# Specify 850 hPa data
ilev850 = np.where(lev==850)[0][0]
hght_850 = hght[ilev850]
tmpk_850 = 0
uwnd_850 = 0
vwnd_850 = 0
# Specify 500 hPa data
ilev500 = 0
hght_500 = 0
uwnd_500 = 0
vwnd_500 = 0
# Specify 300 hPa data
ilev300 = 0
hght_300 = 0
uwnd_300 = 0
vwnd_300 = 0
Explanation: Finding Pressure Level Data
A robust way to parse the data for a certain pressure level is to find the index value using the np.where function. Since the NARR pressure data ('levels') is in hPa, we'll want to search that array for our pressure levels of 850, 500, and 300 hPa.
<div class="alert alert-success">
<b>EXERCISE</b>:
Replace the `0`'s in the template below with your code:
<ul>
<li>Find the index of the 850 hPa, 500 hPa, and 300 hPa levels.</li>
<li>Extract the heights, temperature, u, and v winds at those levels.</li>
</ul>
</div>
End of explanation
# Temperature Advection
# tmpc_adv_850 = mpcalc.advection(--Fill in this call--).to('degC/s')
Explanation: <button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>View Solution</button>
<div id="sol2" class="collapse">
<code><pre>
# Specify 850 hPa data
ilev850 = np.where(lev == 850)[0][0]
hght_850 = hght[ilev850]
tmpk_850 = tmpk[ilev850]
uwnd_850 = uwnd[ilev850]
vwnd_850 = vwnd[ilev850]
\# Specify 500 hPa data
ilev500 = np.where(lev == 500)[0][0]
hght_500 = hght[ilev500]
uwnd_500 = uwnd[ilev500]
vwnd_500 = vwnd[ilev500]
\# Specify 300 hPa data
ilev300 = np.where(lev == 300)[0][0]
hght_300 = hght[ilev300]
uwnd_300 = uwnd[ilev300]
vwnd_300 = vwnd[ilev300]
</pre></code>
</div>
Using MetPy to Calculate Atmospheric Dynamic Quantities
MetPy has a large and growing list of functions to calculate many different atmospheric quantities. Here we want to use some classic functions to calculate wind speed, advection, planetary vorticity, relative vorticity, and divergence.
Wind Speed: mpcalc.wind_speed()
Advection: mpcalc.advection()
Planetary Vorticity: mpcalc.coriolis_parameter()
Relative Vorticity: mpcalc.vorticity()
Divergence: mpcalc.divergence()
Note: For the above, MetPy Calculation module is imported in the following manner import metpy.calc as mpcalc.
Temperature Advection
A classic QG forcing term is 850-hPa temperature advection. MetPy has a function for advection
advection(scalar quantity, [advecting vector components], (grid spacing components))
So for temperature advection our scalar quantity would be the temperature, the advecting vector components would be our u and v components of the wind, and the grid spacing would be the dx and dy we computed in an earlier cell.
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Uncomment and fill out the advection calculation below.</li>
</ul>
</div>
End of explanation
# Vorticity and Absolute Vorticity Calculations
# Planetary Vorticity
# f = mpcalc.coriolis_parameter(-- Fill in here --).to('1/s')
# Relative Vorticity
# vor_500 = mpcalc.vorticity(-- Fill in here --)
# Absolute Vorticity
# avor_500 = vor_500 + f
Explanation: <button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>View Solution</button>
<div id="sol3" class="collapse">
<code><pre>
# Temperature Advection
tmpc_adv_850 = mpcalc.advection(tmpk_850, [uwnd_850, vwnd_850],
(dx, dy), dim_order='yx').to('degC/s')
</pre></code>
</div>
Vorticity Calculations
There are a couple of different vorticities that we are interested in for various calculations, planetary vorticity, relative vorticity, and absolute vorticity. Currently MetPy has two of the three as functions within the calc module.
Planetary Vorticity (Coriolis Parameter)
coriolis_parameter(latitude in radians)
Note: You must either convert your array of latitudes to radians (NumPy provides np.deg2rad() for this) or have units attached to your latitudes so that MetPy can convert them for you! Always check your output to make sure that your code is producing what you think it is producing.
Relative Vorticity
When atmospheric scientists talk about relative vorticity, we are really referring to the relative vorticity that is occurring about the vertical axis (the k-hat component). So in MetPy the function is
vorticity(uwind, vwind, dx, dy)
Absolute Vorticity
Currently there is no specific function for Absolute Vorticity, but this is easy for us to calculate from the previous two calculations because we just need to add them together!
ABS Vort = Rel. Vort + Coriolis Parameter
Here having units is great, because we won't be able to add things together that don't have the same units! It's a nice safety check: if you entered something wrong in another part of the calculation, you'll get a units error.
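For example (a small aside), trying to add quantities with mismatched units fails loudly rather than silently giving a wrong number:
<code><pre>
try:
    10 * units('1/s') + 5 * units('m/s')
except Exception as err:
    print(err)
</pre></code>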
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Fill in the function calls below to complete the vorticity calculations.</li>
</ul>
</div>
End of explanation
# Vorticity Advection
f_adv = mpcalc.advection(f, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
relvort_adv = mpcalc.advection(vor_500, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
absvort_adv = mpcalc.advection(avor_500, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx')
Explanation: <button data-toggle="collapse" data-target="#sol4" class='btn btn-primary'>View Solution</button>
<div id="sol4" class="collapse">
<code><pre>
# Vorticity and Absolute Vorticity Calculations
\# Planetary Vorticity
f = mpcalc.coriolis_parameter(np.deg2rad(lat)).to('1/s')
\# Relative Vorticity
vor_500 = mpcalc.vorticity(uwnd_500, vwnd_500, dx, dy,
dim_order='yx')
\# Absolute Vorticity
avor_500 = vor_500 + f
</pre></code>
</div>
Vorticity Advection
We use the same MetPy advection function for vorticity advection that we used for temperature advection; we just have to change the scalar quantity (what is being advected) and supply the appropriate vector quantities for the level our scalar is from. So for vorticity advection we'll want our wind components from 500 hPa.
End of explanation
# Stretching Vorticity
div_500 = mpcalc.divergence(uwnd_500, vwnd_500, dx, dy, dim_order='yx')
stretch_vort = -1 * avor_500 * div_500
Explanation: Divergence and Stretching Vorticity
If we want to analyze another component of the vorticity tendency equation other than advection, we might want to assess the stretching vorticity term.
-(Abs. Vort.)*(Divergence)
We already have absolute vorticity calculated, so now we need to calculate the divergence at that level, for which MetPy has a function
divergence(uwnd, vwnd, dx, dy)
This function computes the horizontal divergence.
End of explanation
# Divergence 300 hPa, Ageostrophic Wind
wspd_300 = mpcalc.wind_speed(uwnd_300, vwnd_300).to('kts')
div_300 = mpcalc.divergence(uwnd_300, vwnd_300, dx, dy, dim_order='yx')
ugeo_300, vgeo_300 = mpcalc.geostrophic_wind(hght_300, f, dx, dy, dim_order='yx')
uageo_300 = uwnd_300 - ugeo_300
vageo_300 = vwnd_300 - vgeo_300
Explanation: Wind Speed, Geostrophic and Ageostrophic Wind
Wind Speed
Calculating wind speed is not a difficult calculation, but MetPy offers a function to calculate it easily keeping units so that it is easy to convert units for plotting purposes.
wind_speed(uwnd, vwnd)
Geostrophic Wind
The geostrophic wind can be computed from a given height gradient and coriolis parameter
geostrophic_wind(heights, coriolis parameter, dx, dy)
This function will return the two geostrophic wind components in a tuple. On the left hand side you'll be able to put two variables to save them off separately, if desired.
Ageostrophic Wind
Currently, there is not a function in MetPy for calculating the ageostrophic wind; however, it is again a simple arithmetic operation to get it from the total wind (which comes from our data input) and our calculated geostrophic wind from above.
Ageo Wind = Total Wind - Geo Wind
End of explanation
# Data projection; NARR Data is Earth Relative
dataproj = ccrs.PlateCarree()
# Plot projection
# The look you want for the view, LambertConformal for mid-latitude view
plotproj = ccrs.LambertConformal(central_longitude=-100., central_latitude=40.,
standard_parallels=[30, 60])
def create_map_background():
fig=plt.figure(figsize=(14, 12))
ax=plt.subplot(111, projection=plotproj)
ax.set_extent([-125, -73, 25, 50],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
return fig, ax
Explanation: Maps and Projections
End of explanation
fig, ax = create_map_background()
# Contour 1 - Temperature, dotted
# Your code here!
# Contour 2
clev850 = np.arange(0, 4000, 30)
cs = ax.contour(lon, lat, hght_850, clev850, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Filled contours - Temperature advection
contours = [-3, -2.2, -2, -1.5, -1, -0.5, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
# Your code here!
# Vector
ax.barbs(lon, lat, uwnd_850.to('kts').m, vwnd_850.to('kts').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('850-hPa Geopotential Heights, Temperature (C), \
Temp Adv (C/h), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
Explanation: 850-hPa Temperature Advection
Add one contour (Temperature in Celsius) with a dotted linestyle
Add one colorfill (Temperature Advection in C/hr)
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Add one contour (Temperature in Celsius) with a dotted linestyle</li>
<li>Add one filled contour (Temperature Advection in C/hr)</li>
</ul>
</div>
End of explanation
fig, ax = create_map_background()
# Contour 1
clev500 = np.arange(0, 7000, 60)
cs = ax.contour(lon, lat, hght_500, clev500, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Filled contours
# Set contour intervals for Absolute Vorticity
clevavor500 = [-4, -3, -2, -1, 0, 7, 10, 13, 16, 19,
22, 25, 28, 31, 34, 37, 40, 43, 46]
# Set colorfill colors for absolute vorticity
# purple negative
# yellow to orange positive
colorsavor500 = ('#660066', '#660099', '#6600CC', '#6600FF',
'#FFFFFF', '#ffE800', '#ffD800', '#ffC800',
'#ffB800', '#ffA800', '#ff9800', '#ff8800',
'#ff7800', '#ff6800', '#ff5800', '#ff5000',
'#ff4000', '#ff3000')
# YOUR CODE HERE!
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50)
# Vector
ax.barbs(lon, lat, uwnd_500.to('kts').m, vwnd_500.to('kts').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('500-hPa Geopotential Heights, Absolute Vorticity \
(1/s), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
Explanation: <button data-toggle="collapse" data-target="#sol5" class='btn btn-primary'>View Solution</button>
<div id="sol5" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1 - Temperature, dotted
cs2 = ax.contour(lon, lat, tmpk_850.to('degC'), range(-50, 50, 2),
colors='grey', linestyles='dotted', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
\# Contour 2
clev850 = np.arange(0, 4000, 30)
cs = ax.contour(lon, lat, hght_850, clev850, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
\# Filled contours - Temperature advection
contours = [-3, -2.2, -2, -1.5, -1, -0.5, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
cf = ax.contourf(lon, lat, tmpc_adv_850*3600, contours,
cmap='bwr', extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50,
extendrect=True, ticks=contours)
\# Vector
ax.barbs(lon, lat, uwnd_850.to('kts').m, vwnd_850.to('kts').m,
regrid_shape=15, transform=dataproj)
\# Titles
plt.title('850-hPa Geopotential Heights, Temperature (C), \
Temp Adv (C/h), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
</pre></code>
</div>
500-hPa Absolute Vorticity
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Add code for plotting vorticity as filled contours with given levels and colors.</li>
</ul>
</div>
End of explanation
fig, ax = create_map_background()
# Contour 1
clev300 = np.arange(0, 11000, 120)
cs2 = ax.contour(lon, lat, div_300 * 10**5, range(-10, 11, 2),
colors='grey', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour 2
cs = ax.contour(lon, lat, hght_300, clev300, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Filled Contours
spd300 = np.arange(50, 250, 20)
cf = ax.contourf(lon, lat, wspd_300, spd300, cmap='BuPu',
transform=dataproj, zorder=0)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50)
# Vector of 300-hPa Ageostrophic Wind Vectors
# Your code goes here!
# Titles
plt.title('300-hPa Geopotential Heights, Divergence (1/s),\
Wind Speed (kts), Ageostrophic Wind Vector (m/s)',
loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
Explanation: <button data-toggle="collapse" data-target="#sol6" class='btn btn-primary'>View Solution</button>
<div id="sol6" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1
clev500 = np.arange(0, 7000, 60)
cs = ax.contour(lon, lat, hght_500, clev500, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Filled contours
\# Set contour intervals for Absolute Vorticity
clevavor500 = [-4, -3, -2, -1, 0, 7, 10, 13, 16, 19,
22, 25, 28, 31, 34, 37, 40, 43, 46]
\# Set colorfill colors for absolute vorticity
\# purple negative
\# yellow to orange positive
colorsavor500 = ('#660066', '#660099', '#6600CC', '#6600FF',
'#FFFFFF', '#ffE800', '#ffD800', '#ffC800',
'#ffB800', '#ffA800', '#ff9800', '#ff8800',
'#ff7800', '#ff6800', '#ff5800', '#ff5000',
'#ff4000', '#ff3000')
cf = ax.contourf(lon, lat, avor_500 * 10**5, clevavor500,
colors=colorsavor500, transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0, aspect=50)
\# Vector
ax.barbs(lon, lat, uwnd_500.to('kts').m, vwnd_500.to('kts').m,
regrid_shape=15, transform=dataproj)
\# Titles
plt.title('500-hPa Geopotential Heights, Absolute Vorticity \
(1/s), and Wind Barbs (kts)', loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
</pre></code>
</div>
300-hPa Wind Speed, Divergence, and Ageostrophic Wind
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Add code to plot 300-hPa Ageostrophic Wind vectors using matplotlib's quiver function.</li>
</ul>
</div>
End of explanation
fig=plt.figure(1,figsize=(21.,16.))
# Upper-Left Panel
ax=plt.subplot(221,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES,linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,f*10**4,np.arange(0,3,.05),colors='grey',
linewidths=1.0,linestyles='dashed',transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%.2f', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,f_adv*10**10,np.arange(-10,11,0.5),
cmap='PuOr_r',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Planetary Vorticity Advection ($*10^{10}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
# Upper-Right Panel
ax=plt.subplot(222,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,vor_500*10**5,np.arange(-40,41,4),colors='grey',
linewidths=1.0,transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,relvort_adv*10**8,np.arange(-5,5.5,0.5),
cmap='BrBG',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Relative Vorticity Advection ($*10^{8}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
# Lower-Left Panel
ax=plt.subplot(223,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,avor_500*10**5,np.arange(-5,41,4),colors='grey',
linewidths=1.0,transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,absvort_adv*10**8,np.arange(-5,5.5,0.5),
cmap='RdBu',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Absolute Vorticity Advection ($*10^{8}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
# Lower-Right Panel
ax=plt.subplot(224,projection=plotproj)
ax.set_extent([-125.,-73,25.,50.],ccrs.PlateCarree())
ax.coastlines('50m', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lon,lat,hght_500,clev500,colors='k',
linewidths=1.0,linestyles='solid',transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lon,lat,gaussian_filter(avor_500*10**5,sigma=1.0),np.arange(-5,41,4),colors='grey',
linewidths=1.0,transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d', rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lon,lat,gaussian_filter(stretch_vort*10**9,sigma=1.0),np.arange(-15,16,1),
cmap='PRGn',extend='both',transform=dataproj)
plt.colorbar(cf, orientation='horizontal',pad=0.0,aspect=50,extendrect=True)
# Vector
ax.barbs(lon,lat,uwnd_500.to('kts').m,vwnd_500.to('kts').m,regrid_shape=15,transform=dataproj)
# Titles
plt.title(r'500-hPa Geopotential Heights, Stretching Vorticity ($*10^{9}$ 1/s^2)',loc='left')
plt.title('VALID: %s' %(vtime),loc='right')
plt.tight_layout()
plt.show()
Explanation: <button data-toggle="collapse" data-target="#sol7" class='btn btn-primary'>View Solution</button>
<div id="sol7" class="collapse">
<code><pre>
fig, ax = create_map_background()
\# Contour 1
clev300 = np.arange(0, 11000, 120)
cs2 = ax.contour(lon, lat, div_300 * 10**5, range(-10, 11, 2),
colors='grey', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Contour 2
cs = ax.contour(lon, lat, hght_300, clev300, colors='k',
linewidths=1.0, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=4,
fmt='%i', rightside_up=True, use_clabeltext=True)
\# Filled Contours
spd300 = np.arange(50, 250, 20)
cf = ax.contourf(lon, lat, wspd_300, spd300, cmap='BuPu',
transform=dataproj, zorder=0)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50)
\# Vector of 300-hPa Ageostrophic Wind Vectors
ax.quiver(lon, lat, uageo_300.m, vageo_300.m, regrid_shape=15,
pivot='mid', transform=dataproj, zorder=10)
\# Titles
plt.title('300-hPa Geopotential Heights, Divergence (1/s),\
Wind Speed (kts), Ageostrophic Wind Vector (m/s)',
loc='left')
plt.title(f'VALID: {vtime}', loc='right')
plt.tight_layout()
plt.show()
</pre></code>
</div>
Vorticity Tendency Terms
Here is an example of a four-panel plot for a couple of terms in the Vorticity Tendency equation
Upper-left Panel: Planetary Vorticity Advection
Upper-right Panel: Relative Vorticity Advection
Lower-left Panel: Absolute Vorticity Advection
Lower-right Panel: Stretching Vorticity
End of explanation
# Set lat/lon bounds for region to plot data
LLlon = -104
LLlat = 33
URlon = -94
URlat = 38.1
# Set up mask so that you only plot what you want
skip_points = (slice(None, None, 3), slice(None, None, 3))
mask_lon = ((lon[skip_points].ravel() > LLlon + 0.05) & (lon[skip_points].ravel() < URlon + 0.01))
mask_lat = ((lat[skip_points].ravel() < URlat - 0.01) & (lat[skip_points].ravel() > LLlat - 0.01))
mask = mask_lon & mask_lat
Explanation: Plotting Data for Hand Calculation
Calculating dynamic quantities with a computer is great and can allow for many different educational opportunities, but there are times when we want students to calculate those quantities by hand. So can we plot values of geopotential height, u-component of the wind, and v-component of the wind on a map? Yes! And it's not too hard to do.
Since we are using NARR data, we'll plot every third point to get a roughly 1 degree by 1 degree separation of grid points and thus an average grid spacing of 111 km (not exact, but close enough for back of the envelope calculations).
To do our plotting we'll be using the functionality of MetPy to plot station plot data, but we'll use our gridded data to plot around our points. To do this we'll have to make our 2D data into 1D (which is made easy by the ravel() method associated with our data objects).
First we'll want to set some bounds (so that we only plot what we want) and create a mask to make plotting easier.
Second we'll set up our figure with a projection and then set up our "stations" at the grid points we desire using the MetPy class StationPlot
https://unidata.github.io/MetPy/latest/api/generated/metpy.plots.StationPlot.html#metpy.plots.StationPlot
Third we'll plot our points using matplotlib's scatter() function and use our stationplot object to plot data around our "stations"
End of explanation
# Set up plot basics and use StationPlot class from MetPy to help with plotting
fig = plt.figure(figsize=(14, 8))
ax = plt.subplot(111,projection=ccrs.LambertConformal(central_latitude=50,central_longitude=-107))
ax.set_extent([LLlon,URlon,LLlat,URlat],ccrs.PlateCarree())
ax.coastlines('50m', edgecolor='grey', linewidth=0.75)
ax.add_feature(cfeature.STATES, edgecolor='grey', linewidth=0.5)
# Set up station plotting using only every third element from arrays for plotting
stationplot = StationPlot(ax, lon[skip_points].ravel()[mask],
lat[skip_points].ravel()[mask],
transform=ccrs.PlateCarree(), fontsize=12)
# Plot markers then data around marker for calculation purposes
# Your code goes here!
# Title
plt.title('Geopotential (m; top), U-wind (m/s; Lower Left), V-wind (m/s; Lower Right)')
plt.tight_layout()
plt.show()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot markers and data around the markers.</li>
</ul>
</div>
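One possible way to fill in the missing plotting code is sketched below; this is not necessarily the notebook's official solution, and it assumes the 500-hPa fields with heights on top, u-wind lower left, and v-wind lower right to match the title:
<code><pre>
# Markers at each retained grid point
ax.scatter(lon[skip_points].ravel()[mask], lat[skip_points].ravel()[mask],
           marker='o', color='black', transform=ccrs.PlateCarree())
# Values plotted around each marker
stationplot.plot_parameter('N', hght_500[skip_points].ravel()[mask].m, formatter='.0f')
stationplot.plot_parameter('SW', uwnd_500[skip_points].ravel()[mask].m, formatter='.1f')
stationplot.plot_parameter('SE', vwnd_500[skip_points].ravel()[mask].m, formatter='.1f')
</pre></code>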
End of explanation |
1,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading QM outputs - EDA and COVP calculations
Step1: Normally, one would want a very generalized way of reading in output files (like an argparse input argument with nargs='+' that gets looped over in a big out, but this is more to demonstrate the parsing of this specific kind of file, so we use
Step2: Normally, we might also do this, where we read the contents of the entire file in as a string. This might be a bad idea for these files, since they can grow to several megabytes.
Step3: It's more efficient to loop over the file directly, which avoids having to read the whole thing into memory. This does mean that you can't open and close it right away; you add another level of indentation.
Step4: Actually, it might be instructive to do a timing comparison between the two approaches.
Step5: It looks like there's a very slight time penalty the second way, and it might be generally true that memory-efficient algorithms usually require more CPU time. The second way also looks a little bit cleaner, and it's easier to understand what's going on.
Let's change the string we're looking for to one that's more relevant to the EDA/COVP analysis.
Step6: That's fine, but we also want some of the lines that come after the header.
Step7: Here we've used two tricks
Step8: Now we're printing the correct rows. How should we store these values? It's probably best to put them in a NumPy array, but since that array needs to be allocated beforehand, we need to know the shape (which is the number of fragments. How do we get that?
Step9: Now, combine the two and place a NumPy array allocation in the middle.
The last tricky bit will be assigning the text/string values to array elements. We're going to use the slicing syntax for both the NumPy array and the string we're splitting.
Step12: It's probably a good idea to turn these into functions, so for an arbitrary calculation, they can be run.
Step13: Now let's use it
Step15: We can write something almost identical for the decompsition of the charge transfer term, which measures the number of millielectrons that move between fragments
Step16: The easier we make it to reuse our code for new calculations, the faster we get to analysis and thinking about our data.
Since we're "delocalizing" from row to column, we should be able to get the total number of millielectrons donated by each fragment as the sum over all columns for each row. To get the total number of millielectrons accepted by a fragment, we can take the sum over all rows for a given column.
For this particular calculation, fragment 1 is a combined anion/cation ionic liquid pair, and fragment 2 is CO$_2$. Knowing this, we probably expect more charge to shift from the ionic liquid onto the CO$_2$, though it's hard to say that conclusively since the anion can just delocalize onto the cation (the whole fragment is of course charge neutral). So, it shouldn't be too surprising if the numbers aren't very different.
Step17: There's a net donation of charge density from the ionic liquid onto the CO$_2$, as expected.
What about charge accepted?
Step18: The values are almost exactly the opposite of the charge donation values. Why aren't they exactly the same?
Parsing the COVP section
There's an additional section of output that can be requested when performing calculations with only two fragments; complementary occupied-virtual pairs (COVPs) can be formed which allows for a direct assignment between a donor orbital on one fragment with an acceptor on the other. The amount of charge transferred between COVPs in both directions is calculated in terms of energy and millielectrons.
```
Complementary occupied-virtual pairs *
Delta E, kJ/mol; Delta Q, me- *
No BSSE correction *
From fragment 1 to fragment 2
# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)
1 -3.1119( 69.1%) -3.1119( 69.1%) 1.805( 74.7%) 1.805( 74.7%)
2 -0.9232( 20.5%) -0.9232( 20.5%) 0.415( 17.2%) 0.415( 17.2%)
3 -0.2344( 5.2%) -0.2344( 5.2%) 0.119( 4.9%) 0.119( 4.9%)
4 -0.0771( 1.7%) -0.0771( 1.7%) 0.034( 1.4%) 0.034( 1.4%)
5 -0.0536( 1.2%) -0.0536( 1.2%) 0.016( 0.7%) 0.016( 0.7%)
6 -0.0324( 0.7%) -0.0324( 0.7%) 0.010( 0.4%) 0.010( 0.4%)
7 -0.0245( 0.5%) -0.0245( 0.5%) 0.009( 0.4%) 0.009( 0.4%)
8 -0.0197( 0.4%) -0.0197( 0.4%) 0.005( 0.2%) 0.005( 0.2%)
9 -0.0111( 0.2%) -0.0111( 0.2%) 0.003( 0.1%) 0.003( 0.1%)
10 -0.0104( 0.2%) -0.0104( 0.2%) 0.002( 0.1%) 0.002( 0.1%)
11 -0.0023( 0.1%) -0.0023( 0.1%) 0.001( 0.0%) 0.001( 0.0%)
12 -0.0011( 0.0%) -0.0011( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
13 -0.0011( 0.0%) -0.0011( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
14 -0.0009( 0.0%) -0.0009( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
15 -0.0005( 0.0%) -0.0005( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
16 -0.0004( 0.0%) -0.0004( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
17 -0.0003( 0.0%) -0.0003( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
18 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
19 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
20 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
21 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
22 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
23 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
24 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
25 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
26 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
27 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
28 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
29 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
30 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
31 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
32 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
33 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
34 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
Tot -4.5052(100.0%) -4.5052(100.0%) 2.418(100.0%) 2.418(100.0%)
From fragment 2 to fragment 1
# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)
1 -2.2084( 72.4%) -2.2084( 72.4%) 1.532( 80.7%) 1.532( 80.7%)
2 -0.3802( 12.5%) -0.3802( 12.5%) 0.182( 9.6%) 0.182( 9.6%)
3 -0.2128( 7.0%) -0.2128( 7.0%) 0.082( 4.3%) 0.082( 4.3%)
4 -0.1511( 5.0%) -0.1511( 5.0%) 0.070( 3.7%) 0.070( 3.7%)
5 -0.0526( 1.7%) -0.0526( 1.7%) 0.020( 1.1%) 0.020( 1.1%)
6 -0.0337( 1.1%) -0.0337( 1.1%) 0.010( 0.5%) 0.010( 0.5%)
7 -0.0053( 0.2%) -0.0053( 0.2%) 0.001( 0.0%) 0.001( 0.0%)
8 -0.0027( 0.1%) -0.0027( 0.1%) 0.000( 0.0%) 0.000( 0.0%)
9 -0.0011( 0.0%) -0.0011( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
10 -0.0003( 0.0%) -0.0003( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
11 -0.0002( 0.0%) -0.0002( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
Tot -3.0482(100.0%) -3.0482(100.0%) 1.899(100.0%) 1.899(100.0%)
```
The most interesting values are the totals from each fragment to the other. Both the energy and number of millielectrons would be good to have. There's two columns for each, one each for alpha and beta spin; since we're using a spin-restricted wavefunction, they're identical, and we only care about one spin.
It's been determined that the "target" lines containing the numbers we want are
Tot -4.5052(100.0%) -4.5052(100.0%) 2.418(100.0%) 2.418(100.0%)
Tot -3.0482(100.0%) -3.0482(100.0%) 1.899(100.0%) 1.899(100.0%)
but really just
(-4.5052, 2.418)
(-3.0482, 1.899)
so what text can we search for? # Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta) is a good choice; it isn't unique within the entire block, but it only appears inside this block, and it clearly starts each section. We can also search for Tot.
Step19: Now use the trick where we stick a while loop inside the if statement and call the outputfile iterator until we hit Tot
Step20: All that each line requires is a bit of manipulation
Step21: This isn't a good idea for more complicated cases (for example, it won't work if Tot is on two consecutive lines), but it works more often than not.
The lines that we just print to the screen can now be manipulated and assigned to unique variables
Step22: Notice that the list(map(float, line.split())) trick can't be used, because we are just doing a type conversion for each element, but also a slicing operation. We could also do the slicing operation with a map and an anonymous function, but it doesn't look as nice
Step24: Maybe it looks fine; if you've never used an anonymous function before it can be a bit odd. I just tend to write the former with the explicit list comprehension.
Now turn it into a function | Python Code:
from __future__ import print_function
from __future__ import division
import numpy as np
Explanation: Reading QM outputs - EDA and COVP calculations
End of explanation
outputfilepath = "../qm_files/drop_0001_1qm_2mm_eda_covp.out"
Explanation: Normally, one would want a very generalized way of reading in output files (like an argparse input argument with nargs='+' that gets looped over in a big outer loop), but this is more to demonstrate the parsing of this specific kind of file, so we use
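For reference, a hypothetical sketch of that more general interface (not part of this notebook) could look like:
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('outputfilepaths', nargs='+')
args = parser.parse_args()

for outputfilepath in args.outputfilepaths:
    # parse each output file in turn
    ...
```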
End of explanation
# with open(outputfilepath) as outputfile:
# raw_contents = outputfile.read()
Explanation: Normally, we might also do this, where we read the contents of the entire file in as a string. This might be a bad idea for these files, since they can grow to several megabytes.
End of explanation
with open(outputfilepath) as outputfile:
for line in outputfile:
if "Total energy in the final basis" in line:
print(line, end='')
Explanation: It's more efficient to loop over the file directly, which avoids having to read the whole thing into memory. This does mean that you can't open and close it right away; you add another level of indentation.
End of explanation
searchstr = "Total energy in the final basis"
%%timeit -n 1000 -r 10
counter = 0
with open(outputfilepath) as outputfile:
raw_contents = outputfile.read()
for line in iter(raw_contents.splitlines()):
if searchstr in line:
counter += 1
%%timeit -n 1000 -r 10
counter = 0
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
counter += 1
Explanation: Actually, it might be instructive to do a timing comparison between the two approaches.
End of explanation
searchstr = "Energy decomposition of the delocalization term, kJ/mol"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
print(line, end='')
Explanation: It looks like there's a very slight time penalty the second way, and it might be generally true that memory-efficient algorithms usually require more CPU time. The second way also looks a little bit cleaner, and it's easier to understand what's going on.
Let's change the string we're looking for to one that's more relevant to the EDA/COVP analysis.
End of explanation
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
# print 10 lines instead
for _ in range(10):
print(line, end='')
line = next(outputfile)
Explanation: That's fine, but we also want some of the lines that come after the header.
End of explanation
searchstr = "DEL from fragment(row) to fragment(col)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
# This is now the line with the dashes.
line = next(outputfile)
# This is now the line with the column indices.
line = next(outputfile)
# Skip again to get the first line we want to parse.
line = next(outputfile)
# This ensures the parsing will terminate once the block is over.
while list(set(line.strip())) != ['-']:
print(line, end='')
line = next(outputfile)
Explanation: Here we've used two tricks:
Just because we can define variables in loops (like when range() or zip() or enumerate() are used) doesn't mean we need to use them. Sometimes you'll see _ used as the loop variable when it doesn't matter what it's called, but you still need to assign a variable for a function call or something else to work properly.
Any file that's open where you have access to the handle (called outputfile in the above example), or anything that can be wrapped with an iter() to make it iterable, can have the next() function called on it to return the next item. In the case of files, you iterate over the lines one by one (separated by newlines). That's why I have the statement for line in outputfile:, where outputfile is the iterator and line is the variable that contains whatever the latest item is from the outputfile iterator.
To learn more about iterators, there's the official documentation, and I found this Stack Overflow post: http://stackoverflow.com/questions/9884132/what-exactly-are-pythons-iterator-iterable-and-iteration-protocols
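A tiny illustration of the same idea outside of file handling:
```
lines = iter(['first', 'second', 'third'])
print(next(lines))  # first
print(next(lines))  # second
```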
Usually, we don't specify a set number of extra lines to iterate, because that number isn't fixed. Instead, we parse until we hit some other line that's a good stopping point. Here is the full block we're interested in, plus the start of the other one for some context:
```
-------------------------------------------------------------------
* Energy decomposition of the delocalization term, kJ/mol *
-------------------------------------------------------------------
DEL from fragment(row) to fragment(col)
1 2
1 0.00000 -9.01048
2 -6.09647 -0.00000
Charge transfer analysis *
R.Z.Khaliullin, A.T. Bell, M.Head-Gordon *
J. Chem. Phys., 2008, 128, 184112 *
-------------------------------------------------------------------
```
The "variable" part of parsing here is the number of rows and columns between the two lines of dashes that come after DEL from.... That's the line we should really be search for, since it's unique in the output file, and it's closer to the lines we want to extract.
Here's the idea.
Search for the line.
Make sure we skip the line with the dashes.
Make sure we skip the line with the column indices. Important note: We're going to assume that the number of columns won't overflow! This will only work for 5 or fewer fragments.
...
End of explanation
searchstr_num_fragments = "SCF on fragment 1 out of"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr_num_fragments in line:
nfragments = int(line.split()[-1])
print(nfragments)
Explanation: Now we're printing the correct rows. How should we store these values? It's probably best to put them in a NumPy array, but since that array needs to be allocated beforehand, we need to know the shape (which is the number of fragments). How do we get that?
End of explanation
searchstr_num_fragments = "SCF on fragment 1 out of"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr_num_fragments in line:
nfragments = int(line.split()[-1])
# create an empty array (where we don't initialize the elements to 0, 1, or anything else)
fragment_del_energies = np.empty(shape=(nfragments, nfragments))
searchstr = "DEL from fragment(row) to fragment(col)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
line = next(outputfile)
line = next(outputfile)
line = next(outputfile)
# We need to keep track of our row index with a counter, because we can't
# use enumerate with a while loop.
# We need to keep track of our row index in the first place because we're
# indexing into a NumPy array.
row_counter = 0
while list(set(line.strip())) != ['-']:
# 'map' float() onto every element of the list
# map() returns a generator, so turn it back into a list
sline = list(map(float, line.split()[1:]))
# set all columns in a given row to
fragment_del_energies[row_counter, :] = sline
line = next(outputfile)
row_counter += 1
print(fragment_del_energies)
Explanation: Now, combine the two and place a NumPy array allocation in the middle.
The last tricky bit will be assigning the text/string values to array elements. We're going to use the slicing syntax for both the NumPy array and the string we're splitting.
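Here is a toy version of that assignment, using one row of the table shown earlier just to make the two slices concrete:
```
import numpy as np

energies = np.empty((2, 2))
line = "    1        0.00000    -9.01048"
row_counter = 0
# line.split() -> ['1', '0.00000', '-9.01048']; the [1:] slice drops the row label
energies[row_counter, :] = [float(x) for x in line.split()[1:]]
print(energies[row_counter])
```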
End of explanation
def get_num_fragments(outputfilepath):
    """Given a path to an output file, figure out how many fragments are part of it."""
searchstr_num_fragments = "SCF on fragment 1 out of"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr_num_fragments in line:
nfragments = int(line.split()[-1])
return nfragments
def get_eda_fragment_delocalization_energies(outputfilepath, nfragments):
    """Given a path to an output file and the number of fragments it contains, return the
    delocalization energies between fragments.
    """
fragment_del_energies = np.empty(shape=(nfragments, nfragments))
searchstr = "DEL from fragment(row) to fragment(col)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
line = next(outputfile)
line = next(outputfile)
line = next(outputfile)
row_counter = 0
while list(set(line.strip())) != ['-']:
sline = list(map(float, line.split()[1:]))
fragment_del_energies[row_counter, :] = sline
line = next(outputfile)
row_counter += 1
return fragment_del_energies
Explanation: It's probably a good idea to turn these into functions, so for an arbitrary calculation, they can be run.
End of explanation
nfragments = get_num_fragments(outputfilepath)
fragment_del_energies = get_eda_fragment_delocalization_energies(outputfilepath, nfragments)
print(fragment_del_energies)
Explanation: Now let's use it:
End of explanation
def get_eda_fragment_delocalization_millielectrons(outputfilepath, nfragments):
    """Given a path to an output file and the number of fragments it contains,
    return the number of millielectrons that delocalize between fragments.
    """
fragment_del_millielectrons = np.empty(shape=(nfragments, nfragments))
searchstr = "Delocalization from fragment(row) to fragment(col)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
line = next(outputfile)
line = next(outputfile)
line = next(outputfile)
row_counter = 0
while list(set(line.strip())) != ['-']:
sline = list(map(float, line.split()[1:]))
fragment_del_millielectrons[row_counter, :] = sline
line = next(outputfile)
row_counter += 1
return fragment_del_millielectrons
fragment_del_millielectrons = get_eda_fragment_delocalization_millielectrons(outputfilepath, nfragments)
print(fragment_del_millielectrons)
Explanation: We can write something almost identical for the decomposition of the charge transfer term, which measures the number of millielectrons that move between fragments:
End of explanation
me_donated_by_il = np.sum(fragment_del_millielectrons[0, :])
me_donated_by_co2 = np.sum(fragment_del_millielectrons[1, :])
print(me_donated_by_il, me_donated_by_co2)
Explanation: The easier we make it to reuse our code for new calculations, the faster we get to analysis and thinking about our data.
Since we're "delocalizing" from row to column, we should be able to get the total number of millielectrons donated by each fragment as the sum over all columns for each row. To get the total number of millielectrons accepted by a fragment, we can take the sum over all rows for a given column.
For this particular calculation, fragment 1 is a combined anion/cation ionic liquid pair, and fragment 2 is CO$_2$. Knowing this, we probably expect more charge to shift from the ionic liquid onto the CO$_2$, though it's hard to say that conclusively since the anion can just delocalize onto the cation (the whole fragment is of course charge neutral). So, it shouldn't be too surprising if the numbers aren't very different.
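The same sums can be written once for all fragments with the axis argument of np.sum, reusing the array computed above:
```
# Row sums (axis=1): millielectrons donated by each fragment.
# Column sums (axis=0): millielectrons accepted by each fragment.
donated = np.sum(fragment_del_millielectrons, axis=1)
accepted = np.sum(fragment_del_millielectrons, axis=0)
print(donated, accepted)
```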
End of explanation
me_accepted_by_il = np.sum(fragment_del_millielectrons[:, 0])
me_accepted_by_co2 = np.sum(fragment_del_millielectrons[:, 1])
print(me_accepted_by_il, me_accepted_by_co2)
Explanation: There's a net donation of charge density from the ionic liquid onto the CO$_2$, as expected.
What about charge accepted?
End of explanation
searchstr = "# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
print(line, end='')
Explanation: The values are almost exactly the opposite of the charge donation values. Why aren't they exactly the same?
Parsing the COVP section
There's an additional section of output that can be requested when performing calculations with only two fragments; complementary occupied-virtual pairs (COVPs) can be formed which allows for a direct assignment between a donor orbital on one fragment with an acceptor on the other. The amount of charge transferred between COVPs in both directions is calculated in terms of energy and millielectrons.
```
Complementary occupied-virtual pairs *
Delta E, kJ/mol; Delta Q, me- *
No BSSE correction *
From fragment 1 to fragment 2
# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)
1 -3.1119( 69.1%) -3.1119( 69.1%) 1.805( 74.7%) 1.805( 74.7%)
2 -0.9232( 20.5%) -0.9232( 20.5%) 0.415( 17.2%) 0.415( 17.2%)
3 -0.2344( 5.2%) -0.2344( 5.2%) 0.119( 4.9%) 0.119( 4.9%)
4 -0.0771( 1.7%) -0.0771( 1.7%) 0.034( 1.4%) 0.034( 1.4%)
5 -0.0536( 1.2%) -0.0536( 1.2%) 0.016( 0.7%) 0.016( 0.7%)
6 -0.0324( 0.7%) -0.0324( 0.7%) 0.010( 0.4%) 0.010( 0.4%)
7 -0.0245( 0.5%) -0.0245( 0.5%) 0.009( 0.4%) 0.009( 0.4%)
8 -0.0197( 0.4%) -0.0197( 0.4%) 0.005( 0.2%) 0.005( 0.2%)
9 -0.0111( 0.2%) -0.0111( 0.2%) 0.003( 0.1%) 0.003( 0.1%)
10 -0.0104( 0.2%) -0.0104( 0.2%) 0.002( 0.1%) 0.002( 0.1%)
11 -0.0023( 0.1%) -0.0023( 0.1%) 0.001( 0.0%) 0.001( 0.0%)
12 -0.0011( 0.0%) -0.0011( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
13 -0.0011( 0.0%) -0.0011( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
14 -0.0009( 0.0%) -0.0009( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
15 -0.0005( 0.0%) -0.0005( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
16 -0.0004( 0.0%) -0.0004( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
17 -0.0003( 0.0%) -0.0003( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
18 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
19 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
20 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
21 -0.0001( 0.0%) -0.0001( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
22 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
23 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
24 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
25 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
26 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
27 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
28 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
29 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
30 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
31 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
32 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
33 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
34 -0.0000( 0.0%) -0.0000( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
Tot -4.5052(100.0%) -4.5052(100.0%) 2.418(100.0%) 2.418(100.0%)
From fragment 2 to fragment 1
# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)
1 -2.2084( 72.4%) -2.2084( 72.4%) 1.532( 80.7%) 1.532( 80.7%)
2 -0.3802( 12.5%) -0.3802( 12.5%) 0.182( 9.6%) 0.182( 9.6%)
3 -0.2128( 7.0%) -0.2128( 7.0%) 0.082( 4.3%) 0.082( 4.3%)
4 -0.1511( 5.0%) -0.1511( 5.0%) 0.070( 3.7%) 0.070( 3.7%)
5 -0.0526( 1.7%) -0.0526( 1.7%) 0.020( 1.1%) 0.020( 1.1%)
6 -0.0337( 1.1%) -0.0337( 1.1%) 0.010( 0.5%) 0.010( 0.5%)
7 -0.0053( 0.2%) -0.0053( 0.2%) 0.001( 0.0%) 0.001( 0.0%)
8 -0.0027( 0.1%) -0.0027( 0.1%) 0.000( 0.0%) 0.000( 0.0%)
9 -0.0011( 0.0%) -0.0011( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
10 -0.0003( 0.0%) -0.0003( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
11 -0.0002( 0.0%) -0.0002( 0.0%) 0.000( 0.0%) 0.000( 0.0%)
Tot -3.0482(100.0%) -3.0482(100.0%) 1.899(100.0%) 1.899(100.0%)
```
The most interesting values are the totals from each fragment to the other. Both the energy and number of millielectrons would be good to have. There are two columns for each, one each for alpha and beta spin; since we're using a spin-restricted wavefunction, they're identical, and we only care about one spin.
It's been determined that the "target" lines containing the numbers we want are
Tot -4.5052(100.0%) -4.5052(100.0%) 2.418(100.0%) 2.418(100.0%)
Tot -3.0482(100.0%) -3.0482(100.0%) 1.899(100.0%) 1.899(100.0%)
but really just
(-4.5052, 2.418)
(-3.0482, 1.899)
so what text can we search for? # Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta) is a good choice; it isn't unique within the entire block, but it only appears inside this block, and it clearly starts each section. We can also search for Tot.
End of explanation
searchstr = "# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
# Do an exact character match on the string.
while line[:4] != " Tot":
line = next(outputfile)
print(line, end='')
Explanation: Now use the trick where we stick a while loop inside the if statement and call the outputfile iterator until we hit Tot:
End of explanation
searchstr = "# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
while line[:4] != " Tot":
line = next(outputfile)
print(line, end='')
line = next(outputfile)
while line[:4] != " Tot":
line = next(outputfile)
print(line, end='')
Explanation: All that each line requires is a bit of manipulation: split, take the [1::2] entries (quiz: what does this do?), get rid of the percentage stuff, map the values to floats, and return them as tuples. There's a problem though: how can we uniquely return both tuples? We could append every match to a list and return the list, but I'd rather be more explicit here since we're only dealing with two lines.
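To answer the quiz concretely, here is the slice applied to one of the "Tot" lines shown above:
```
line = " Tot   -4.5052(100.0%)  -4.5052(100.0%)   2.418(100.0%)   2.418(100.0%)"
print(line.split()[1::2])  # every second item starting at index 1: one alpha energy, one alpha charge
print([float(x[:-8]) for x in line.split()[1::2]])  # strip the trailing '(100.0%)' and convert: [-4.5052, 2.418]
```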
End of explanation
searchstr = "# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
while line[:4] != " Tot":
line = next(outputfile)
f_1_to_2 = tuple([float(x[:-8]) for x in line.split()[1::2]])
print(f_1_to_2)
line = next(outputfile)
while line[:4] != " Tot":
line = next(outputfile)
f_2_to_1 = tuple([float(x[:-8]) for x in line.split()[1::2]])
print(f_2_to_1)
Explanation: This isn't a good idea for more complicated cases (for example, it won't work if Tot is on two consecutive lines), but it works more often than not.
The lines that we just print to the screen can now be manipulated and assigned to unique variables:
End of explanation
searchstr = "# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
while line[:4] != " Tot":
line = next(outputfile)
f_1_to_2 = tuple(map(lambda x: float(x[:-8]), line.split()[1::2]))
print(f_1_to_2)
line = next(outputfile)
while line[:4] != " Tot":
line = next(outputfile)
f_2_to_1 = tuple(map(lambda x: float(x[:-8]), line.split()[1::2]))
print(f_2_to_1)
Explanation: Notice that the list(map(float, line.split())) trick can't be used, because we aren't just doing a type conversion for each element, but also a slicing operation on each element. We could also do the slicing operation with a map and an anonymous function, but it doesn't look as nice:
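Just to confirm the two spellings agree, here is a quick equivalence check on a pair of values in the same format as the Tot line:
```
values = ['-4.5052(100.0%)', '2.418(100.0%)']
as_comprehension = [float(x[:-8]) for x in values]
as_map_lambda = list(map(lambda x: float(x[:-8]), values))
print(as_comprehension == as_map_lambda)  # True
```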
End of explanation
def get_eda_covp_totals(outputfilepath):
    """Given a path to an output file, return the totals for each fragment from the COVP analysis.
    The first element of the tuple is the energy contribution, the second element is the
    number of millielectrons transferred.
    """
searchstr = "# Delta E(Alpha) Delta E(Beta) Delta Q(Alpha) Delta Q(Beta)"
with open(outputfilepath) as outputfile:
for line in outputfile:
if searchstr in line:
while line[:4] != " Tot":
line = next(outputfile)
f_1_to_2 = tuple(map(lambda x: float(x[:-8]), line.split()[1::2]))
line = next(outputfile)
while line[:4] != " Tot":
line = next(outputfile)
f_2_to_1 = tuple(map(lambda x: float(x[:-8]), line.split()[1::2]))
return f_1_to_2, f_2_to_1
f_1_to_2, f_2_to_1 = get_eda_covp_totals(outputfilepath)
print(f_1_to_2)
print(f_2_to_1)
Explanation: Maybe it looks fine; if you've never used an anonymous function before it can be a bit odd. I just tend to write the former with the explicit list comprehension.
Now turn it into a function:
End of explanation |
1,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
UpSampling1D
[convolutional.UpSampling1D.0] size 2 upsampling on 3x5 input
Step1: [convolutional.UpSampling1D.1] size 3 upsampling on 4x4 input
Step2: export for Keras.js tests | Python Code:
data_in_shape = (3, 5)
L = UpSampling1D(size=2)
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(230)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.UpSampling1D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: UpSampling1D
[convolutional.UpSampling1D.0] size 2 upsampling on 3x5 input
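For intuition about what this fixture exercises (the sketch below is only illustrative and is not part of the exported test data): size-2 upsampling repeats each timestep twice along the time axis, which is easy to mimic with plain NumPy.
```
import numpy as np
x = np.arange(15).reshape(3, 5)        # 3 timesteps, 5 features
print(np.repeat(x, 2, axis=0).shape)   # (6, 5), matching the output of UpSampling1D(size=2)
```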
End of explanation
data_in_shape = (4, 4)
L = UpSampling1D(size=3)
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(231)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.UpSampling1D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.UpSampling1D.1] size 3 upsampling on 4x4 input
End of explanation
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
1,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrices with Numpy
Matrices with Numpy
Numpy, as one of the most important libraries for programming that involves mathematics and numbers, makes it easy to perform matrix algebra operations. While declaring an array as a = [[1,0],[0,1]] gives an ordinary 2D array, with Numpy, a = np.array([[1,0],[0,1]]) gives an object a on which matrix algebra operations such as addition, subtraction, multiplication, transpose, etc. can be performed.
In this chapter, Numpy will not be covered in its entirety; only matrices and a bit of Linear Algebra with Numpy will be discussed.
Installation
Installation can be done by installing from
source
pip
conda
aptitude(?)
Installation
To install with pip, simply open a terminal (Windows
Step1: If you only need one particular module from numpy rather than all of it, the import can be done with a from ... import ... statement as follows.
Array
Arrays in Numpy are different from arrays in Python's built-in module. In the built-in module, array.array only has limited functionality. In Numpy, the array is called ndarray (formerly) or array (as an alias).
We can create an array with np.array(). Several special matrices can also be created directly.
Step2: Mistakes when creating arrays often come from the brackets. Remember that array is a function, so it takes regular parentheses, while the array contents require square brackets. In practice, when the array has more than one dimension, using parentheses again inside, after the outermost brackets, is not a problem.
Step3: Vectors
Mathematically, a vector is a special form of a matrix. However, in Numpy a vector can be declared in different ways. Two fairly common ways are with arange and with linspace. The difference between the two lies in how they are declared. While arange takes start, stop, and step as input, linspace takes start, stop, and the number of entries. The idea behind linspace is to create a vector whose entries are evenly spaced.
Vectors are created with
Step4: Shape Manipulation
Before manipulating the shape, we can check the 'size' of a matrix with np.shape, the number of dimensions with np.ndim, and the number of entries with np.size.
Step5: Since shape, ndim, size and the like are functions in numpy, they can also be called by appending them to the object, as in the following example.
Step6: Reshape
Next, to change the shape of a matrix (reshape) we can rearrange it with the np.reshape command. Just like shape, ndim, and size, we can call reshape in front as a function or behind as an attribute.
Step7: Resize vs Reshape
While reshape and transpose are non-destructive (they do not change the original object), changing a matrix's shape by modifying the object itself can be done with resize.
Step8: Iterating over a Matrix
With a matrix, we can iterate over its elements, for example over the first axis (the rows) or even over every element.
Step9: Indexing and Slicing
Taking a single element from a matrix is similar to taking one from a sequence
Step10: Slicing
Step11: Slicing
Step12: Slicing
Step13: Slicing
Step14: Slicing
Step15: Matrix Operations
Basic Operations
For matrix operations with numpy, one thing to keep in mind is that there are two kinds of operations: element-wise operations and matrix operations. Element-wise operations use the usual operator symbols, just as with integer or float data. Matrix multiplication uses the @ operator for Python versions above 3.5; earlier versions have to use the np.matmul function with two inputs.
Step16: Basic Operations
Step17: Matrix Multiplication
Step18: Matrix Multiplication
Step19: Matrix Multiplication with a 'vector' | Python Code:
import numpy as np
Explanation: Matrices with Numpy
Matrices with Numpy
Numpy, as one of the most important libraries for programming that involves mathematics and numbers, makes it easy to perform matrix algebra operations. While declaring an array as a = [[1,0],[0,1]] gives an ordinary 2D array, with Numpy, a = np.array([[1,0],[0,1]]) gives an object a on which matrix algebra operations such as addition, subtraction, multiplication, transpose, etc. can be performed.
In this chapter, Numpy will not be covered in its entirety; only matrices and a bit of Linear Algebra with Numpy will be discussed.
Installation
Installation can be done by installing from
source
pip
conda
aptitude(?)
Installation
To install with pip, simply open a terminal (Windows: command prompt) and type the following command.
If you are using miniconda or anaconda, you can install it with the following command.
If you happen to be using Ubuntu 20.04 LTS, you can install it through aptitude.
Basics
To use Numpy we need to import it. The import is usually done with the np abbreviation, as below.
End of explanation
import numpy as np
a = np.array([[1,2,1],[1,0,1]])
b = np.arange(7)
c = np.arange(3,10)
print("a = ")
print(a)
print("b = ")
print(b)
print(c)
print()
# Special matrix
I = np.ones(3)
O = np.zeros(4)
I2 = np.ones((2,4))
O2 = np.zeros((3,3))
print("spesial matrix")
print("satu =",I)
print("nol = ",O)
print("matriks isi 1, dua dimensi")
print(I2)
print("nol dua dimensi =")
print(O2)
Explanation: If you only need one particular module from numpy rather than all of it, the import can be done with a from ... import ... statement as follows.
Array
Arrays in Numpy are different from arrays in Python's built-in module. In the built-in module, array.array only has limited functionality. In Numpy, the array is called ndarray (formerly) or array (as an alias).
We can create an array with np.array(). Several special matrices can also be created directly.
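A tiny illustration of both points, importing just a couple of names and then building an ordinary array plus one of the special ones:
```
from numpy import array, ones

m = array([[1, 2], [3, 4]])
print(m)
print(ones((2, 2)))
```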
End of explanation
# Wrong
x = np.array(1,2,3)
# Correct
x = np.array([1,2,3])
y = np.array([[1,0,0],[0,1,0]])
z = np.array([(1,0,0),(0,1,0)])
y-z
Explanation: Mistakes when creating arrays often come from the brackets. Remember that array is a function, so it takes regular parentheses, while the array contents require square brackets. In practice, when the array has more than one dimension, using parentheses again inside, after the outermost brackets, is not a problem.
End of explanation
# ordinary declaration, writing the entries one by one
vektor = np.array([1,5])
print(vektor)
# declaration with arange: start, stop, step
vektor1 = np.arange(start=1, stop=10, step=1)
vektor2 = np.arange(0,9,1)
print(vektor1, vektor2)
# deklarasi dengan linspace: start, stop, banyak titik
vektor3 = np.linspace(1,10,4)
print(vektor3)
Explanation: Vectors
Mathematically, a vector is a special form of a matrix. However, in Numpy a vector can be declared in different ways. Two fairly common ways are with arange and with linspace. The difference between the two lies in how they are declared. While arange takes start, stop, and step as input, linspace takes start, stop, and the number of entries. The idea behind linspace is to create a vector whose entries are evenly spaced.
Vectors are created with:
declaring the contents directly
arange
linspace
End of explanation
np.shape(a)
np.ndim(a)
np.size(a)
Explanation: Shape Manipulation
Before manipulating the shape, we can check the 'size' of a matrix with np.shape, the number of dimensions with np.ndim, and the number of entries with np.size.
End of explanation
a.shape
a.ndim
a.size
Explanation: Since shape, ndim, size and the like are functions in numpy, they can also be called by appending them to the object, as in the following example.
End of explanation
b = np.reshape(y,(1,6)) # reshape to shape (1,6)
c = z.reshape(6,1) # reshape to shape (6,1)
d = c.T # returns the transpose of the matrix
e = np.transpose(d)
print(b)
print(c)
b.shape, c.shape, d.shape, e.shape
Explanation: Reshape
Next, to change the shape of a matrix (reshape) we can rearrange it with the np.reshape command. Just like shape, ndim, and size, we can call reshape in front as a function or behind as an attribute.
End of explanation
print(a)
a.reshape((1,6))
print("setelah reshape")
print(a)
print(a)
a.resize((1,6))
print("setelah resize")
print(a)
Explanation: Resize vs Reshape
While reshape and transpose are non-destructive (they do not change the original object), changing a matrix's shape by modifying the object itself can be done with resize.
End of explanation
a = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
a
for row in a:
print("baris:",row)
for elemen in a.flat:
print(elemen)
Explanation: Iterating over a Matrix
With a matrix, we can iterate over its elements, for example over the first axis (the rows) or even over every element.
End of explanation
# Indexing
A_1D = np.array([0,1,1,2,3,5,8,13,21])
B_2D = np.array([A_1D,A_1D+2])
C_3D = np.array([B_2D,B_2D*2,B_2D*3,B_2D*4])
C_3D
print(A_1D[4],B_2D[0,0],C_3D[0,0,0])
Explanation: Indexing and Slicing
Taking a single element from a matrix is similar to taking one from a sequence: we just need to know its coordinates. If the matrix is one-dimensional, the coordinate needs only one value; if the matrix is 2D, the coordinates need two inputs, and so on. Meanwhile, taking several elements from an array is often called slicing.
End of explanation
# Slicing example
B_2D
# a slice can be taken with a single coordinate
B_2D[1]
B_2D[1,:]
B_2D[:,1]
Explanation: Slicing
End of explanation
# slicing a 3D matrix
C_3D
Explanation: Slicing
End of explanation
C_3D[1]
C_3D[1,:,:]
C_3D[:,1,:]
C_3D[:,:,1]
Explanation: Slicing
End of explanation
# slicing more than one column/row
A = np.linspace(0,9,10)
B, C = A + 10, A + 20
X = np.array([A,B,C])
X
Explanation: Slicing
End of explanation
X
X[0:2,3], X[0:3,3], X[1:3,3]
X[1:3,1:5]
Explanation: Slicing
End of explanation
A = np.array([1,2,3,4,5,6])
B = np.array([[1,1,1],[1,0,1],[0,1,1]])
C = np.array([[1,2,3],[4,5,6],[7,8,9]])
D = np.array([[1,1],[1,0]])
A
Explanation: Matrix Operations
Basic Operations
For matrix operations with numpy, one thing to keep in mind is that there are two kinds of operations: element-wise operations and matrix operations. Element-wise operations use the usual operator symbols, just as with integer or float data. Matrix multiplication uses the @ operator for Python versions above 3.5; earlier versions have to use the np.matmul function with two inputs.
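A small side-by-side sketch of the two kinds of products on made-up 2x2 matrices (P and Q here are only for illustration):
```
import numpy as np
P = np.array([[1, 2], [3, 4]])
Q = np.array([[0, 1], [1, 0]])
print(P * Q)            # element-wise product
print(P @ Q)            # matrix product (Python 3.5+)
print(np.matmul(P, Q))  # same result as @, for older Python versions
```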
End of explanation
A
2*A
A+3*A + 3*A**2+A**3
A**2
C**2
A < 5
np.sin(A)
Explanation: Basic Operations
End of explanation
B, C
B @ C
Explanation: Matrix Multiplication
End of explanation
D
B @ D
print(B.shape,D.shape)
Explanation: Matrix Multiplication
End of explanation
x1 = np.ones(3)
x2 = np.ones((3,1))
x3 = np.ones((1,3))
# Multiplying a 3x3 matrix by a 3x1 vector
B @ x1
B @ x2
B @ x3
Explanation: Matrix Multiplication with a 'vector'
End of explanation |
1,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KNN
$$
a(x, X^l) = \arg \max_{y \in Y} \sum_{i = 1}^{l}[y_i = y] ~ w(i, x)
$$
Step1: Wine | Python Code:
np.random.seed(13)
n = 100
df = pd.DataFrame(
np.vstack([
np.random.normal(loc=0, scale=1, size=(n, 2)),
np.random.normal(loc=3, scale=2, size=(n, 2))
]), columns=['x1', 'x2'])
df['target'] = np.hstack([np.ones(n), np.zeros(n)]).T
plt.scatter(df.x1, df.x2, c=df.target, s=100, edgecolor='black', linewidth='1');
from sklearn.neighbors import KNeighborsClassifier as KNN
features = df.drop('target', axis=1)
answer = df['target']
def get_grid(data, border=1., step=.01):
x_min, x_max = data[:, 0].min() - border, data[:, 0].max() + border
y_min, y_max = data[:, 1].min() - border, data[:, 1].max() + border
return np.meshgrid(np.arange(x_min, x_max, step),
np.arange(y_min, y_max, step))
xx, yy = get_grid(features.values, step=0.025)
def show_knn(k=4, proba=True, weights='uniform'):
clf = KNN(n_neighbors=k, weights=weights)
clf.fit(features, answer)
if proba:
predicted = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1].reshape(xx.shape)
else:
predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, predicted)
plt.scatter(df.x1, df.x2, c=answer, s=100, edgecolor='black', linewidth='1')
plt.xlabel('x1')
plt.ylabel('x2')
plt.axis([xx.min(), xx.max(), yy.min(), yy.max()]);
interact(show_knn, k=(1, len(df)), weights=['uniform', 'distance'], __manual=True);
Explanation: KNN
$$
a(x, X^l) = \arg \max_{y \in Y} \sum_{i = 1}^{l}[y_i = y] ~ w(i, x)
$$
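A minimal sketch of that decision rule, with an inverse-distance choice of w(i, x) picked purely for illustration:
```
import numpy as np

def knn_predict(x, X, y, k=3):
    distances = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / (distances[nearest] + 1e-9)      # w(i, x): closer neighbours vote more
    votes = {}
    for idx, w in zip(nearest, weights):
        votes[y[idx]] = votes.get(y[idx], 0.0) + w   # accumulate [y_i == y] * w(i, x)
    return max(votes, key=votes.get)                 # arg max over classes y
```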
End of explanation
data = pd.read_csv('wine/winequality-red.csv', sep=';')
print("Shape:", data.shape)
data.head(5)
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error as score
X = data.drop('quality', axis = 1)
y = data['quality']
clf = KNeighborsRegressor(n_neighbors=1)
clf = clf.fit(X, y)
score(clf.predict(X), y)
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=sum(list(map(ord, 'shad'))))
clf = clf.fit(X_train, y_train)
score(clf.predict(X_test), y_test)
def get_scores(X_train, X_test, y_train, y_test, max_k=100, clf_class=KNeighborsRegressor):
for k in range(1, max_k):
yield score(y_test, clf_class(n_neighbors=k).fit(X_train, y_train).predict(X_test))
scores = list(get_scores(X_train, X_test, y_train, y_test))
best_k = min(range(len(scores)), key=scores.__getitem__)
start_point = best_k, scores[best_k]
plt.annotate("k = {}\nScore = {:.4}".format(best_k, scores[best_k]),
xy=start_point,
xytext=(50, -10), textcoords='offset points',
size=20,
bbox=dict(boxstyle="round", fc="1"),
va="center", ha="left",
arrowprops=dict(facecolor='red', width=4,))
plt.plot(scores, linewidth=3.0);
for idx in range(10):
parts = train_test_split(X, y, test_size=0.3, random_state=idx)
current_scores = list(get_scores(*parts))
plt.plot(current_scores, linewidth=3.0);
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import StratifiedKFold
params = {'n_neighbors': list(range(1, 100))}
grid_searcher = GridSearchCV(KNeighborsRegressor(),
params,
cv=StratifiedKFold(y, n_folds=5, random_state=sum(list(map(ord, 'knn')))),
scoring="mean_squared_error",
n_jobs=-1,)
grid_searcher.fit(X, y);
means, stds = list(map(np.array, zip(*[(
np.mean(i.cv_validation_scores),
np.std(i.cv_validation_scores))
for i in grid_searcher.grid_scores_])))
means *= -1
plot(range(len(means)), means)
best_k = grid_searcher.best_params_['n_neighbors']
start_point = best_k, -grid_searcher.best_score_
plt.annotate("k = {}\nScore = {:.4}".format(best_k, -grid_searcher.best_score_),
xy=start_point,
xytext=(10, 70), textcoords='offset points',
size=20,
bbox=dict(boxstyle="round", fc="1"),
va="center", ha="left",
arrowprops=dict(facecolor='red', width=4,))
plt.fill_between(range(len(means)), means + 2 * stds, means - 2 * stds, alpha = 0.2, facecolor='blue');
X.head(5)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
clf = make_pipeline(StandardScaler(), KNeighborsRegressor())
params = {'kneighborsregressor__n_neighbors': list(range(1, 100))}
grid_searcher = GridSearchCV(clf,
params,
cv=StratifiedKFold(y, n_folds=5, random_state=sum(list(map(ord, 'knn')))),
scoring="mean_squared_error",
n_jobs=-1,)
grid_searcher.fit(X, y);
scaled_means = -np.array([np.mean(i.cv_validation_scores) for i in grid_searcher.grid_scores_])
plot(range(len(means)), means)
plot(range(len(scaled_means)), scaled_means)
best_point = grid_searcher.best_params_['kneighborsregressor__n_neighbors'], -grid_searcher.best_score_
plt.annotate("k = {}\nScore = {:.4}".format(*best_point),
xy=best_point,
xytext=(20, 60), textcoords='offset points',
size=20,
bbox=dict(boxstyle="round", fc="1"),
va="center", ha="left",
arrowprops=dict(facecolor='red', width=4,))
plt.legend(['Initial data', 'Scaled data'], loc='upper right');
Explanation: Wine
End of explanation |
1,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Registration Errors, Terminology and Interpretation <a href="https
Step1: FLE, FRE, TRE empirical experimentation
In the following cell you will use a user interface to experiment with the various concepts associated with FLE, FRE, and TRE.
Interacting with the GUI
Change the mode and interact directly with the figure
Step2: Sensitivity to outliers
The least-squares solutions to the paired-point registration task used by the SimpleITK LandmarkBasedTransformInitializer method are optimal under the assumption of isotropic homogeneous zero mean Gaussian noise. They will readily fail in the presence of outliers (breakdown point is 0, a single outlier can cause arbitrarily large errors).
The GUI above allows you to observe this qualitatively. In the following code cells we illustrate this quantitatively.
Step3: FRE is not a surrogate for TRE
In the next code cell we illustrate that FRE and TRE are not correlated. We first add the same fixed bias to
all of the moving fiducials. This results in a large TRE, but the FRE remains zero. We then add a fixed bias to half of the moving fiducials and the negative of that bias to the other half. This results in a small TRE, but the FRE is large.
Step4: Fiducial Configuration
Even when our model of the world is correct, no outliers and FLE is isotropic and homogeneous, the fiducial configuration
has a significant effect on the TRE. Ideally you want the targets to be at the centroid of your fiducial configuration.
This is illustrated in the code cell below. Translate, rotate and add noise to the fiducials, then register. The targets that are near the fiducials should have a better alignment than those far from the fiducials.
Now, reset the setup. Where would you add two fiducials to improve the overall TRE? Experiment with various fiducial configurations.
Step5: FRE-TRE, and Occam's razor
When we perform registration, our goal is to minimize the Target Registration Error. In practice it needs to be below a problem specific threshold for the registration to be useful.
The target point(s) can be a single point or a region in space, and we want to minimize our registration error for this target. We go about this task by minimizing another quantity, in paired-point registration this is the FRE, in the case of intensity based registration we minimize an appropriate similarity metric. In both cases we expect that TRE is minimized indirectly.
This can easily lead us astray, down the path of overfitting. In our 2D case, instead of using a rigid transformation with three degrees of freedom we may be tempted to use an affine transformation with six degrees of freedom. By introducing these additional degrees of freedom we will likely improve the FRE, but what about TRE?
In the cell below you can qualitatively evaluate the effects of overfitting. Start by adding noise with no rotation or translation and then register. Switch to an affine transformation model and see how registration effects the fiducials and targets. You can then repeat this qualitative evaluation incorporating translation/rotation and noise.
In this notebook we are working in an ideal setting, we know the appropriate transformation model is rigid. Unfortunately, this is often not the case. So which transformation model should you use? When presented with multiple competing hypotheses we select one using the principle of parsimony, often referred to as Occam's razor. Our choice is to select the simplest model that can explain our observations.
In the case of registration, the transformation model with the least degrees of freedom that reduces the TRE below the problem specific threshold. | Python Code:
import SimpleITK as sitk
import numpy as np
import copy
%matplotlib notebook
from gui import PairedPointDataManipulation, display_errors
import matplotlib.pyplot as plt
from registration_utilities import registration_errors
Explanation: Registration Errors, Terminology and Interpretation <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F68_Registration_Errors.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
Registration is defined as the estimation of a geometric transformation aligning objects such that the distance between corresponding points on these objects is minimized, bringing the objects into alignment.
When working with point-based registration algorithms we have three types of errors associated with our points (originally defined in [1]):
Fiducial Localization Error (FLE): The error in determining the location of a point which is used to estimate the transformation. The most widely used FLE model is that of a zero mean Gaussian with independent, identically distributed errors. The figure below illustrates the various possible fiducial localization errors:
<img src="fle.svg" style="width:600px"/><br><br>
Fiducial Registration Error (FRE): The error of the fiducial markers following registration, $\|T(\mathbf{p_f}) - \mathbf{p_m}\|$ where $T$ is the estimated transformation and the points $\mathbf{p_f},\;\mathbf{p_m}$ were used to estimate $T$.
Target Registration Error (TRE): The error of the target fiducial markers following registration,$\|T(\mathbf{p_f}) - \mathbf{p_m}\|$ where $T$ is the estimated transformation and the points $\mathbf{p_f},\;\mathbf{p_m}$ were not used to estimate $T$.
Things to remember:
1. TRE is the only quantity of interest, but in most cases we can only estimate its distribution.
2. FRE should never be used as a surrogate for TRE as the TRE for a specific registration is uncorrelated with its FRE [2].
3. TRE is spatially varying.
4. A good TRE is dependent on using a good fiducial configuration [3].
5. The least squares solution to paired-point registration is sensitive to outliers.
[1] "Registration of Head Volume Images Using Implantable Fiducial Markers", C. R. Maurer Jr. et al., IEEE Trans Med Imaging, 16(4):447-462, 1997.
[2] "Fiducial registration error and target registration error are uncorrelated", J. Michael Fitzpatrick, SPIE Medical Imaging: Visualization, Image-Guided Procedures, and Modeling, 7261:1–12, 2009.
[3] "Fiducial point placement and the accuracy of point-based, rigid body registration", J. B. West et al., Neurosurgery, 48(4):810-816, 2001.
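As a small sketch of how the FRE/TRE definitions above turn into numbers (this mirrors the idea behind the imported registration_errors helper, which additionally reports min, max and the per-point errors):
```
def point_set_errors(tx, fixed_points, moving_points):
    # ||T(p_f) - p_m|| for each fixed/moving pair, then summary statistics
    errors = [np.linalg.norm(np.array(tx.TransformPoint(p_fix)) - np.array(p_mov))
              for p_fix, p_mov in zip(fixed_points, moving_points)]
    return np.mean(errors), np.std(errors)
```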
End of explanation
manipulation_interface = PairedPointDataManipulation(sitk.Euler2DTransform())
Explanation: FLE, FRE, TRE empirical experimentation
In the following cell you will use a user interface to experiment with the various concepts associated with FLE, FRE, and TRE.
Interacting with the GUI
Change the mode and interact directly with the figure:
Edit: Add pairs of fiducial markers with a left click and pairs of target markers with a right click.
Markers can only be added prior to any manipulation (translation/rotation/noise/bias...)
Translate: right-mouse button down + drag, anywhere in the figure.
Rotate: right-mouse button down + drag, anywhere in the figure, will rotate the fiducials around their centroid
(marked by a blue dot).
Buttons:
Clear Fiducials/Targets: Removes all fiducial/target marker pairs.
Reset: Moves all marker pairs to the original locations.
Noise: Add noise to the fiducial markers.
Bias (FRE<TRE): Add bias to all fiducial markers so that FRE<TRE (move all markers in the same direction).
Bias (FRE>TRE): Add bias to all fiducial markers so that FRE>TRE (move half of the markers in one direction and
the other half in the opposite direction).
Register: Align the two point sets using the paired fiducials.
Marker glyphs:
* Light red plus: moving fiducials
* Dark red x: fixed fiducials
* Light green circle: moving targets
* Dark green square: fixed fiducials
End of explanation
ideal_fixed_fiducials = [
[23.768817532447077, 60.082971482049849],
[29.736559467930949, 68.740980140058511],
[37.639785274382561, 68.524529923608299],
[41.994623984059984, 59.000720399798773],
]
ideal_fixed_targets = [
[32.317204629221266, 60.732322131400501],
[29.413978822769653, 56.403317802396167],
]
ideal_moving_fiducials = [
[76.77857043206542, 30.557710579173616],
[86.1401622129338, 25.76859196933914],
[86.95501792478755, 17.904506579872375],
[78.07960498849866, 12.346214284259808],
]
ideal_moving_targets = [
[78.53588814928511, 22.166738486331596],
[73.86559697098288, 24.481339720595585],
]
# Registration with perfect data (no noise or outliers)
fixed_fiducials = copy.deepcopy(ideal_fixed_fiducials)
fixed_targets = copy.deepcopy(ideal_fixed_targets)
moving_fiducials = copy.deepcopy(ideal_moving_fiducials)
moving_targets = copy.deepcopy(ideal_moving_targets)
# Flatten the point lists, SimpleITK expects a single list/tuple with coordinates (x1,y1,...xn,yn)
fixed_fiducials_flat = [c for p in fixed_fiducials for c in p]
moving_fiducials_flat = [c for p in moving_fiducials for c in p]
transform = sitk.LandmarkBasedTransformInitializer(
sitk.Euler2DTransform(), fixed_fiducials_flat, moving_fiducials_flat
)
FRE_information = registration_errors(transform, fixed_fiducials, moving_fiducials)
TRE_information = registration_errors(transform, fixed_targets, moving_targets)
FLE_values = [0.0] * len(moving_fiducials)
FLE_information = (
np.mean(FLE_values),
np.std(FLE_values),
np.min(FLE_values),
np.max(FLE_values),
FLE_values,
)
display_errors(
fixed_fiducials,
fixed_targets,
FLE_information,
FRE_information,
TRE_information,
title="Ideal Input",
)
# Change fourth fiducial to an outlier and register
outlier_fiducial = [88.07960498849866, 22.34621428425981]
FLE_values[3] = np.sqrt(
(outlier_fiducial[0] - moving_fiducials[3][0]) ** 2
+ (outlier_fiducial[1] - moving_fiducials[3][1]) ** 2
)
moving_fiducials[3][0] = 88.07960498849866
moving_fiducials[3][1] = 22.34621428425981
moving_fiducials_flat = [c for p in moving_fiducials for c in p]
transform = sitk.LandmarkBasedTransformInitializer(
sitk.Euler2DTransform(), fixed_fiducials_flat, moving_fiducials_flat
)
FRE_information = registration_errors(transform, fixed_fiducials, moving_fiducials)
TRE_information = registration_errors(transform, fixed_targets, moving_targets)
FLE_information = (
np.mean(FLE_values),
np.std(FLE_values),
np.min(FLE_values),
np.max(FLE_values),
FLE_values,
)
display_errors(
fixed_fiducials,
fixed_targets,
FLE_information,
FRE_information,
TRE_information,
title="Single Outlier",
)
Explanation: Sensitivity to outliers
The least-squares solutions to the paired-point registration task used by the SimpleITK LandmarkBasedTransformInitializer method are optimal under the assumption of isotropic homogeneous zero mean Gaussian noise. They will readily fail in the presence of outliers (breakdown point is 0, a single outlier can cause arbitrarily large errors).
The GUI above allows you to observe this qualitatively. In the following code cells we illustrate this quantitatively.
End of explanation
# Registration with same bias added to all points
fixed_fiducials = copy.deepcopy(ideal_fixed_fiducials)
fixed_targets = copy.deepcopy(ideal_fixed_targets)
moving_fiducials = copy.deepcopy(ideal_moving_fiducials)
bias_vector = [4.5, 4.5]
bias_fle = np.sqrt(bias_vector[0] ** 2 + bias_vector[1] ** 2)
for fiducial in moving_fiducials:
fiducial[0] += bias_vector[0]
fiducial[1] += bias_vector[1]
FLE_values = [bias_fle] * len(moving_fiducials)
moving_targets = copy.deepcopy(ideal_moving_targets)
# Flatten the point lists, SimpleITK expects a single list/tuple with coordinates (x1,y1,...xn,yn)
fixed_fiducials_flat = [c for p in fixed_fiducials for c in p]
moving_fiducials_flat = [c for p in moving_fiducials for c in p]
transform = sitk.LandmarkBasedTransformInitializer(
sitk.Euler2DTransform(), fixed_fiducials_flat, moving_fiducials_flat
)
FRE_information = registration_errors(transform, fixed_fiducials, moving_fiducials)
TRE_information = registration_errors(transform, fixed_targets, moving_targets)
FLE_information = (
np.mean(FLE_values),
np.std(FLE_values),
np.min(FLE_values),
np.max(FLE_values),
FLE_values,
)
display_errors(
fixed_fiducials,
fixed_targets,
FLE_information,
FRE_information,
TRE_information,
title="FRE<TRE",
)
# Registration with bias in one direction for half the fiducials and in the opposite direction for the other half
moving_fiducials = copy.deepcopy(ideal_moving_fiducials)
pol = 1
for fiducial in moving_fiducials:
fiducial[0] += bias_vector[0] * pol
fiducial[1] += bias_vector[1] * pol
pol *= -1.0
FLE_values = [bias_fle] * len(moving_fiducials)
moving_targets = copy.deepcopy(ideal_moving_targets)
# Flatten the point lists, SimpleITK expects a single list/tuple with coordinates (x1,y1,...xn,yn)
fixed_fiducials_flat = [c for p in fixed_fiducials for c in p]
moving_fiducials_flat = [c for p in moving_fiducials for c in p]
transform = sitk.LandmarkBasedTransformInitializer(
sitk.Euler2DTransform(), fixed_fiducials_flat, moving_fiducials_flat
)
FRE_information = registration_errors(transform, fixed_fiducials, moving_fiducials)
TRE_information = registration_errors(transform, fixed_targets, moving_targets)
FLE_information = (
np.mean(FLE_values),
np.std(FLE_values),
np.min(FLE_values),
np.max(FLE_values),
FLE_values,
)
display_errors(
fixed_fiducials,
fixed_targets,
FLE_information,
FRE_information,
TRE_information,
title="FRE>TRE",
)
Explanation: FRE is not a surrogate for TRE
In the next code cell we illustrate that FRE and TRE are not correlated. We first add the same fixed bias to
all of the moving fiducials. This results in a large TRE, but the FRE remains zero. We then add a fixed bias to half of the moving fiducials and the negative of that bias to the other half. This results in a small TRE, but the FRE is large.
End of explanation
fiducials = [
[31.026882048576109, 65.696247315510021],
[34.252688500189009, 70.674602293864993],
[41.349462693737394, 71.756853376116084],
[47.801075596963202, 68.510100129362826],
[52.47849495180192, 63.315294934557635],
]
targets = [
[38.123656242124497, 64.397546016808718],
[43.768817532447073, 63.748195367458059],
[26.833333661479333, 8.7698403891030861],
[33.768817532447073, 8.120489739752438],
]
manipulation_interface = PairedPointDataManipulation(sitk.Euler2DTransform())
manipulation_interface.set_fiducials(fiducials)
manipulation_interface.set_targets(targets)
Explanation: Fiducial Configuration
Even when our model of the world is correct, no outliers and FLE is isotropic and homogeneous, the fiducial configuration
has a significant effect on the TRE. Ideally you want the targets to be at the centroid of your fiducial configuration.
This is illustrated in the code cell below. Translate, rotate and add noise to the fiducials, then register. The targets that are near the fiducials should have a better alignment than those far from the fiducials.
Now, reset the setup. Where would you add two fiducials to improve the overall TRE? Experiment with various fiducial configurations.
End of explanation
fiducials = [
[31.026882048576109, 65.696247315510021],
[41.349462693737394, 71.756853376116084],
[52.47849495180192, 63.315294934557635],
]
targets = [
[38.123656242124497, 64.397546016808718],
[43.768817532447073, 63.748195367458059],
]
manipulation_interface = PairedPointDataManipulation(sitk.Euler2DTransform())
# manipulation_interface = PairedPointDataManipulation(sitk.AffineTransform(2))
manipulation_interface.set_fiducials(fiducials)
manipulation_interface.set_targets(targets)
Explanation: FRE-TRE, and Occam's razor
When we perform registration, our goal is to minimize the Target Registration Error. In practice it needs to be below a problem specific threshold for the registration to be useful.
The target point(s) can be a single point or a region in space, and we want to minimize our registration error for this target. We go about this task by minimizing another quantity, in paired-point registration this is the FRE, in the case of intensity based registration we minimize an appropriate similarity metric. In both cases we expect that TRE is minimized indirectly.
This can easily lead us astray, down the path of overfitting. In our 2D case, instead of using a rigid transformation with three degrees of freedom we may be tempted to use an affine transformation with six degrees of freedom. By introducing these additional degrees of freedom we will likely improve the FRE, but what about TRE?
In the cell below you can qualitatively evaluate the effects of overfitting. Start by adding noise with no rotation or translation and then register. Switch to an affine transformation model and see how registration effects the fiducials and targets. You can then repeat this qualitative evaluation incorporating translation/rotation and noise.
In this notebook we are working in an ideal setting, we know the appropriate transformation model is rigid. Unfortunately, this is often not the case. So which transformation model should you use? When presented with multiple competing hypotheses we select one using the principle of parsimony, often referred to as Occam's razor. Our choice is to select the simplest model that can explain our observations.
In the case of registration, the transformation model with the least degrees of freedom that reduces the TRE below the problem specific threshold.
End of explanation |
1,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derrived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning.
The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # number of games, total # of medals. Use this dataset to answer the questions below.
Step1: Question 0 (Example)
What is the first country in df?
This function should return a Series.
Step2: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
Step3: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
Step4: Question 3
Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count?
$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$
Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
Step5: Question 4
Write a function that creates a Series called "Points" which is a weighted value where each gold medal (Gold.2) counts for 3 points, silver medals (Silver.2) for 2 points, and bronze medals (Bronze.2) for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
Step6: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint
Step7: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use CENSUS2010POP.
This function should return a list of string values.
Step8: Question 7
Which county has had the largest absolute change in population within the period 2010-2015? (Hint
Step9: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index). | Python Code:
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split('\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
df.head()
Explanation: You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2 - Pandas Introduction
All questions are weighted the same in this assignment.
Part 1
The following code loads the olympics dataset (olympics.csv), which was derrived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning.
The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # number of games, total # of medals. Use this dataset to answer the questions below.
End of explanation
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
Explanation: Question 0 (Example)
What is the first country in df?
This function should return a Series.
End of explanation
def answer_one():
return df[df['Gold'] == max(df['Gold'])].iloc[0].name
answer_one()
Explanation: Question 1
Which country has won the most gold medals in summer games?
This function should return a single string value.
End of explanation
def answer_two():
return df.loc[(df['Gold'] - df['Gold.1']).idxmax()].name
answer_two()
Explanation: Question 2
Which country had the biggest difference between their summer and winter gold medal counts?
This function should return a single string value.
End of explanation
def answer_three():
df_1 = df[(df['Gold']>=1) & (df['Gold.1']>=1)]
return df_1.loc[(abs(df_1['Gold'].astype('f') - df_1['Gold.1'].astype('f'))/df_1['Gold.2'].astype('f')).idxmax()].name
abs(-1)
answer_three()
Explanation: Question 3
Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count?
$$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$
Only include countries that have won at least 1 gold in both summer and winter.
This function should return a single string value.
End of explanation
def answer_four():
points = df['Gold.2']*3 + df['Silver.2']*2 + df['Bronze.2']*1
points.rename('Points', inplace=True)
return points
answer_four()
Explanation: Question 4
Write a function that creates a Series called "Points" which is a weighted value where each gold medal (Gold.2) counts for 3 points, silver medals (Silver.2) for 2 points, and bronze medals (Bronze.2) for 1 point. The function should return only the column (a Series object) which you created.
This function should return a Series named Points of length 146
End of explanation
census_df = pd.read_csv('census.csv')
census_df.head()
def answer_five():
census_df_50 = census_df[census_df['SUMLEV'] == 50]
#census_df_50 = census_df_50.reset_index()
#census_df_50 = census_df_50.set_index(['STNAME'])
census_df_50 = census_df_50.groupby(['STNAME']).sum()
return census_df_50.loc[census_df_50['COUNTY'].idxmax()].name
answer_five()
Explanation: Part 2
For the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.
The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.
Question 5
Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)
This function should return a single string value.
End of explanation
def answer_six():
census_df_50 = census_df[census_df['SUMLEV'] == 50]
census_df_50 = census_df_50.groupby(['STNAME'])['CENSUS2010POP'].nlargest(3)
census_df_50 = census_df_50.reset_index()
census_df_50 = census_df_50.groupby(['STNAME']).sum()
census_df_50= census_df_50.sort(['CENSUS2010POP'], ascending=False)[:3]
return list(census_df_50.index)
answer_six()
Explanation: Question 6
Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use CENSUS2010POP.
This function should return a list of string values.
End of explanation
def answer_seven():
    census_df_50 = census_df[census_df['SUMLEV'] == 50]
    col_list = ['POPESTIMATE2010', 'POPESTIMATE2011', 'POPESTIMATE2012',
                'POPESTIMATE2013', 'POPESTIMATE2014', 'POPESTIMATE2015']
    # keep the rows separate: grouping by CTYNAME would merge same-named counties from different states
    pop_change = census_df_50[col_list].max(axis=1) - census_df_50[col_list].min(axis=1)
    return census_df_50.loc[pop_change.idxmax(), 'CTYNAME']
answer_seven()
Explanation: Question 7
Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)
e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.
This function should return a single string value.
End of explanation
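The row-wise max-minus-min trick mirrors the |130 - 80| = 50 example above; here it is on those same made-up numbers:
# minimal sketch using the made-up populations from the example above
toy = pd.DataFrame([[100, 120, 80, 105, 100, 130]],
                   columns=['POPESTIMATE2010', 'POPESTIMATE2011', 'POPESTIMATE2012',
                            'POPESTIMATE2013', 'POPESTIMATE2014', 'POPESTIMATE2015'])
toy.max(axis=1) - toy.min(axis=1)  # 130 - 80 = 50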
def answer_eight():
census_df_50 = census_df[census_df['SUMLEV'] == 50]
census_df_50 = census_df_50[(census_df_50['REGION'] == 1) |(census_df_50['REGION'] == 2 )]
census_df_50 = census_df_50[census_df_50['CTYNAME'].str.startswith('Washington')]
census_df_50 = census_df_50[census_df_50['POPESTIMATE2015'] > census_df_50['POPESTIMATE2014'] ]
return census_df_50.filter(items=['STNAME', 'CTYNAME'])
answer_eight()
Explanation: Question 8
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).
End of explanation |
1,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train a ready to use TensorFlow model with a simple pipeline
Step1: BATCH_SIZE might be increased for modern GPUs with lots of memory (4GB and higher).
Step2: Create a dataset
MNIST is a dataset of handwritten digits frequently used as a baseline for machine learning tasks.
Downloading MNIST database might take a few minutes to complete.
Step3: There are also predefined CIFAR10 and CIFAR100 datasets.
Define a pipeline config
Config allows to create flexible pipelines which take parameters.
For instance, if you put a model type into config, you can run a pipeline against different models.
See a list of available models to choose the one which fits you best.
Step4: Create a template pipeline
A template pipeline is not linked to any dataset. It's just an abstract sequence of actions, so it cannot be executed, but it serves as a convenient building block.
Step5: Train the model
Apply a dataset and a config to a template pipeline to create a runnable pipeline
Step6: Run the pipeline (it might take from a few minutes to a few hours depending on your hardware)
Step7: Note that the progress bar often increments by 2 at a time - that's prefetch in action.
It does not help much here, though, since almost all of the time is spent in model training, which runs under a thread lock one batch after another without any parallelism (otherwise the model would not learn anything, as concurrent batches would overwrite one another's weight updates).
Step8: Test the model
It is much faster than training, but if you don't have a GPU it will still take some patience.
Step9: Let's get the accumulated metrics information
Step10: Or a shorter version
Step11: Save the model
After learning the model, you may need to save it. It's easy to do this. | Python Code:
import os
import sys
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import matplotlib.pyplot as plt
# the following line is not required if BatchFlow is installed as a python package.
sys.path.append("../..")
from batchflow import Pipeline, B, C, D, F, V
from batchflow.opensets import MNIST, CIFAR10, CIFAR100
from batchflow.models.tf import ResNet18
Explanation: Train a ready to use TensorFlow model with a simple pipeline
End of explanation
BATCH_SIZE = 64
Explanation: BATCH_SIZE might be increased for modern GPUs with lots of memory (4GB and higher).
End of explanation
dataset = MNIST(bar=True)
Explanation: Create a dataset
MNIST is a dataset of handwritten digits frequently used as a baseline for machine learning tasks.
Downloading MNIST database might take a few minutes to complete.
End of explanation
config = dict(model=ResNet18)
Explanation: There are also predefined CIFAR10 and CIFAR100 datasets.
Define a pipeline config
Config allows to create flexible pipelines which take parameters.
For instance, if you put a model type into config, you can run a pipeline against different models.
See a list of available models to choose the one which fits you best.
End of explanation
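Because the model class is just a config entry, swapping architectures means changing a single value; the sketch below assumes ResNet34 is also exported by batchflow.models.tf (any other model from the list linked above would work the same way).
# sketch only: plug a different model class into the same config key
# (assumes ResNet34 is available in the installed batchflow version)
from batchflow.models.tf import ResNet34
alternative_config = dict(model=ResNet34)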
train_template = (Pipeline()
.init_variable('loss_history', [])
.init_model('dynamic', C('model'), 'conv_nn',
config={'inputs/images/shape': B.image_shape,
'inputs/labels/classes': D.num_classes,
'initial_block/inputs': 'images'})
.to_array()
.train_model('conv_nn', fetches='loss', images=B.images, labels=B.labels,
save_to=V('loss_history', mode='a'))
)
Explanation: Create a template pipeline
A template pipeline is not linked to any dataset. It's just an abstract sequence of actions, so it cannot be executed, but it serves as a convenient building block.
End of explanation
train_pipeline = (train_template << dataset.train) << config
Explanation: Train the model
Apply a dataset and a config to a template pipeline to create a runnable pipeline:
End of explanation
train_pipeline.run(BATCH_SIZE, shuffle=True, n_epochs=1, drop_last=True, bar=True, prefetch=1)
Explanation: Run the pipeline (it might take from a few minutes to a few hours depending on your hardware)
End of explanation
plt.figure(figsize=(15, 5))
plt.plot(train_pipeline.v('loss_history'))
plt.xlabel("Iterations"), plt.ylabel("Loss")
plt.show()
Explanation: Note that the progress bar often increments by 2 at a time - that's prefetch in action.
It does not help much here, though, since almost all of the time is spent in model training, which runs under a thread lock one batch after another without any parallelism (otherwise the model would not learn anything, as concurrent batches would overwrite one another's weight updates).
End of explanation
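If the raw loss curve is too noisy to read, a simple moving average (plain NumPy, nothing batchflow-specific) makes the trend clearer; a minimal sketch:
# minimal sketch: smooth the stored loss history with a moving average
loss = np.array(train_pipeline.v('loss_history'))
window = 25  # arbitrary smoothing window, chosen only for illustration
smoothed = np.convolve(loss, np.ones(window) / window, mode='valid')
plt.figure(figsize=(15, 5))
plt.plot(smoothed)
plt.xlabel("Iterations")
plt.ylabel("Smoothed loss")
plt.show()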
test_pipeline = (dataset.test.p
.import_model('conv_nn', train_pipeline)
.init_variable('predictions')
.init_variable('metrics')
.to_array()
.predict_model('conv_nn', fetches='predictions', images=B.images, save_to=V('predictions'))
.gather_metrics('class', targets=B.labels, predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a'))
.run(BATCH_SIZE, shuffle=True, n_epochs=1, drop_last=False, bar=True)
)
Explanation: Test the model
It is much faster than training, but if you don't have a GPU it will still take some patience.
End of explanation
metrics = test_pipeline.get_variable('metrics')
Explanation: Let's get the accumulated metrics information
End of explanation
metrics.evaluate('accuracy')
metrics.evaluate(['false_positive_rate', 'false_negative_rate'], multiclass=None)
Explanation: Or a shorter version: metrics = test_pipeline.v('metrics')
Now we can easily calculate any metrics we need
End of explanation
train_pipeline.save_model_now('conv_nn', path='path/to/save')
Explanation: Save the model
After training the model, you may want to save it. It's easy to do this.
End of explanation |
1,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translating between Currencies
Step1: Translating between currencies requires a number of different choices
do you want to consider the relative value of two currencies based on Market Exchange Rates or Purchasing Power Parity?
do you subscribe to the GDP Deflator or Consumer Price Index schools of inflation calculations?
Which to use can be very context or use case dependent. salamanca offers all options with data supplied by the World Bank. Below are a few examples.
Basics
Let's start by getting to know the trusty Translator and its primary weapon exchange()
Step2: Every translation is based on countries and years. By default, the Translator assumes you want the USD value of a currency in a year based on market exchange rates using GDP deflators.
So, for example, translating 20 Euros (the currency of Austria) in 2010 would net you 26.5 US Dollars.
Step3: You can further translate 20 2010 Euros into 2015 US Dollars as
Step4: Additional Options
You can specify options such as using CPI rather than GDP Deflators
Step5: Similarly, you can use Purchasing Power Parity rather than Market Exchange Rates | Python Code:
from salamanca.currency import Translator
Explanation: Translating between Currencies
End of explanation
xltr = Translator()
Explanation: Translating between currencies requires a number of different choices
do you want to consider the relative value of two currencies based on Market Exchange Rates or Purchasing Power Parity?
do you subscribe to the GDP Deflator or Consumer Price Index schools of inflation calculations?
Which to use can be very context or use case dependent. salamanca offers all options with data supplied by the World Bank. Below are a few examples.
Basics
Let's start by getting to know the trusty Translator and its primary weapon exchange()
End of explanation
xltr.exchange(20, iso='AUT', yr=2010)
xltr.exchange(20, fromiso='AUT', toiso='USA', yr=2010) # equivalent to the above defaults
Explanation: Every translation is based on countries and years. By default, the Translator assumes you want the USD value of a currency in a year based on market exchange rates using GDP deflators.
So, for example, translating 20 Euros (the currency of Austria) in 2010 would net you 26.5 US Dollars.
End of explanation
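The direction matters: swapping fromiso and toiso answers the opposite question (here, the Euro value of 20 2010 US Dollars), using only arguments already shown above.
# sketch: the reverse conversion, 2010 US Dollars into 2010 Euros (Austria)
xltr.exchange(20, fromiso='USA', toiso='AUT', yr=2010)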
xltr.exchange(20, fromiso='AUT', toiso='USA',
fromyr=2010, toyr=2015)
Explanation: You can further translate 20 2010 Euros into 2015 US Dollars as
End of explanation
xltr.exchange(20, fromiso='AUT', toiso='USA',
fromyr=2010, toyr=2015,
inflation_method='cpi')
Explanation: Additional Options
You can specify options such as using CPI rather than GDP Deflators
End of explanation
xltr.exchange(20, fromiso='AUT', toiso='USA',
fromyr=2010, toyr=2015,
units='PPP')
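# sketch: the two options can also be combined in a single call,
# e.g. PPP units together with CPI-based inflation (each shown separately in this notebook)
xltr.exchange(20, fromiso='AUT', toiso='USA',
              fromyr=2010, toyr=2015,
              units='PPP', inflation_method='cpi')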
Explanation: Similarly, you can use Purchasing Power Parity rather than Market Exchange Rates
End of explanation |
1,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'icon-esm-lr', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MPI-M
Source ID: ICON-ESM-LR
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
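For example, an author entry would look like the call below; the name and address are placeholders, not real document authors.
# hypothetical placeholder values only - replace with the real author details
# DOC.set_author("Jane Doe", "jane.doe@example.org")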
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treament of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dyanmics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintainence respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the notrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
1,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stochastic Matrices
These guys are matrices that predict a probability distribution iteratively. You present a "current" distribution, then you multiply by the "transition matrix" and it tells you the "next" distribution.
Step1: x0 = c0vecs[ | Python Code:
#
# define a transition matrix
#
T = np.array([[0.1, 0.2, 0.3],
[0.5,0.3,0.6],
[0.4,0.5,0.1]])
T
#
# Start out with a random prob distribution vector
#
x0 = np.array([np.random.rand(3)]).T
x0 = x0/x0.sum()
x0
x1 = T.dot(x0)
x1
x2 = T.dot(x1)
x2
x3 = T.dot(x2)
x3
xn = x3
for i in range(100):
xn = T.dot(xn)
xn
vals, vecs = np.linalg.eig(T)
vals
vecs[:,0]/vecs[:,0].sum()
vecs.sum(axis=0)
vecs/vecs.sum(axis=0)[0]
x0 = np.array([[1/3,1/3,1/3]]).T
x0
vecs[:,0]
vecs[:,1]
vecs[:,2]
Explanation: Stochastic Matrices
These guys are matrices that predict a probability distribution iteratively. You present a "current" distribution, then you multiply by the "transition matrix" and it tells you the "next" distribution.
End of explanation
-0.3618**100
T.dot(vecs[:,0])
T.dot(vecs[:,1]), vecs[:,1],
vecs[:,1]
-.1118/.8090, 0.0690983/ -0.5, 0.0427051/-0.30901699
a = np.array([1,2,3])
b = np.array([2,4,6])
a/b
T.dot(x0)/x0, x0
Explanation: x0 = c0vecs[:,0] + c1vecs[:,1] + c2*vecs[:,2]
Tx0 = T(c0vecs[:,0] + c1vecs[:,1] + c2*vecs[:,2])
= c0*1*v0 + c1*(-0.138)*v1 + c2*(-0.3618)*v2
TTx0 = c011v0 + c1(-0.138)2v1 + c2(-0.3618)2v2
TTTx0 = c013v0 + c1((-0.138)3v1 + c2(-0.3618)3v2
(T100)x0 = c0*1100v0 + c1((-0.138)100v1 + c2(-0.3618)100*v2
End of explanation |
1,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting the house prices data set for king county
Loading graphlab
Step1: Load some house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Exploring the data for housing sales
The house price is correlated with the number of square feet of living space.
Step3: Create a simple regression model of sqft_living to price
Split data into training and testing, for spliting dataset of the at a particuler point we are using seed. So
We set some what seed=123 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step4: Build the regression model using only sqft_living as a feature and
-called model as sqft_model and use feature sqft_living
Step5: Evaluate the simple model
Step6: RMSE of about \$255,170!
Let's show what our predictions look like
Matplotlib is a Python plotting library that is also useful for plotting. import it for ploting
Step7: plot a graph between the price and sqrt_living
and a graph between sqrt_living and predict price by the model
Step8: Above
Step9: Explore other features in the data
To build a more elaborate model, we will explore using more features.
Step10: Pull the bar at the bottom to view more of the data.
98039 is the most expensive zip code.
Build a regression model with more features
Step11: Comparing the results of the simple model with adding more features
Step12: The RMSE goes down from \$255,170 to \$179,508 with more features.
Apply learned models to predict prices of 3 houses
The first house we will use is considered an "average" house in Seattle.
Step13: <img src="http
Step14: In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
Prediction for a second, fancier house
We will now examine the predictions for a fancier house.
Step15: <img src="https
Step16: In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
Last house, super fancy
Our last house is a very large one owned by a famous Seattleite.
Step17: <img src="https
Step18: The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)
Now let's build a model with some advance feature
Step19: here you can see the there is no difference between the my_features_model and advanced_features_model
Now let predict the price of the house with our new model | Python Code:
import graphlab
Explanation: Predicting the house prices data set for king county
Loading graphlab
End of explanation
sales = graphlab.SFrame('home_data.gl/')
sales.head(5)
Explanation: Load some house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="sqft_living", y="price")
Explanation: Exploring the data for housing sales
The house price is correlated with the number of square feet of living space.
End of explanation
train_data,test_data = sales.random_split(.8,seed=123)
Explanation: Create a simple regression model of sqft_living to price
Split data into training and testing, for spliting dataset of the at a particuler point we are using seed. So
We set some what seed=123 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'],validation_set=None)
Explanation: Build the regression model using only sqft_living as a feature and
-called model as sqft_model and use feature sqft_living
End of explanation
print test_data['price'].mean()
print sqft_model.evaluate(test_data)
Explanation: Evaluate the simple model
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: RMSE of about \$255,170!
Let's show what our predictions look like
Matplotlib is a Python plotting library that is also useful for plotting. import it for ploting
End of explanation
plt.plot(test_data['sqft_living'],test_data['price'],'.',
test_data['sqft_living'],sqft_model.predict(test_data),'-')
Explanation: plot a graph between the price and sqrt_living
and a graph between sqrt_living and predict price by the model
End of explanation
sqft_model.get('coefficients')
Explanation: Above: blue dots are original data, green line is the prediction from the simple regression.
Below: we can view the learned regression coefficients.
End of explanation
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
sales[my_features].show()
sales.show(view='BoxWhisker Plot', x='zipcode', y='price')
Explanation: Explore other features in the data
To build a more elaborate model, we will explore using more features.
End of explanation
my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features,validation_set=None)
print my_features
Explanation: Pull the bar at the bottom to view more of the data.
98039 is the most expensive zip code.
Build a regression model with more features
End of explanation
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
Explanation: Comparing the results of the simple model with adding more features
End of explanation
house1 = sales[sales['id']=='5309101200']
house1
Explanation: The RMSE goes down from \$255,170 to \$179,508 with more features.
Apply learned models to predict prices of 3 houses
The first house we will use is considered an "average" house in Seattle.
End of explanation
print house1['price']
print sqft_model.predict(house1)
print my_features_model.predict(house1)
Explanation: <img src="http://info.kingcounty.gov/Assessor/eRealProperty/MediaHandler.aspx?Media=2916871">
End of explanation
house2 = sales[sales['id']=='1925069082']
house2
Explanation: In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
Prediction for a second, fancier house
We will now examine the predictions for a fancier house.
End of explanation
print sqft_model.predict(house2)
print my_features_model.predict(house2)
Explanation: <img src="https://ssl.cdn-redfin.com/photo/1/bigphoto/302/734302_0.jpg">
End of explanation
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
Explanation: In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
Last house, super fancy
Our last house is a very large one owned by a famous Seattleite.
End of explanation
print my_features_model.predict(graphlab.SFrame(bill_gates))
Explanation: <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Bill_gates%27_house.jpg/2560px-Bill_gates%27_house.jpg">
End of explanation
advanced_features = ['bedrooms', 'bathrooms', 'sqft_living',
'sqft_lot', 'floors', 'zipcode',
'condition','grade', 'waterfront',
'view','sqft_above','sqft_basement',
'yr_built','yr_renovated', 'lat', 'long',
'sqft_living15','sqft_lot15'
]
advanced_features_model = graphlab.linear_regression.create(train_data,target='price',features=advanced_features,validation_set=None)
print advanced_features
print advanced_features_model.evaluate(test_data)
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
Explanation: The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)
Now let's build a model with some advance feature
End of explanation
print my_features_model.predict(house2)
print advanced_features_model.predict(house2)
Explanation: here you can see the there is no difference between the my_features_model and advanced_features_model
Now let predict the price of the house with our new model
End of explanation |
1,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simplified Selenium Functions
For example, if you are using Selenium in your automated functional tests,
instead of coding directly in Selenium like this
Step1: You can alternatively use Marigoso functions to help you code a little bit better like this | Python Code:
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('https://python.org.')
download = browser.find_element_by_link_text('Downloads')
download.click()
download = browser.find_element_by_id('downloads')
ul = download.find_element_by_tag_name('ul')
lis = ul.find_elements_by_tag_name('li')
Explanation: Simplified Selenium Functions
For example, if you are using Selenium in your automated functional tests,
instead of coding directly in Selenium like this:
End of explanation
from marigoso import Test
browser = Test().launch_browser('Firefox')
browser.get_url('https://python.org')
browser.press('Downloads')
download = browser.get_element('id=downloads')
ul = download.get_child('tag=ul')
lis = ul.get_children('tag=li')
Explanation: You can alternatively use Marigoso functions to help you code a little bit better like this:
End of explanation |
1,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Access a Database with Python - Iris Dataset
The Iris dataset is a popular dataset especially in the Machine Learning community, it is a set of features of 50 Iris flowers and their classification into 3 species.
It is often used to introduce classification Machine Learning algorithms.
First let's download the dataset in SQLite format from Kaggle
Step1: Access the Database with the sqlite3 Package
We can use the sqlite3 package from the Python standard library to connect to the sqlite database
Step2: A sqlite3.Cursor object is our interface to the database, mostly throught the execute method that allows to run any SQL query on our database.
First of all we can get a list of all the tables saved into the database, this is done by reading the column name from the sqlite_master metadata table with
Step3: a shortcut to directly execute the query and gather the results is the fetchall method
Step4: Notice
Step5: It is evident that the interface provided by sqlite3 is low-level, for data exploration purposes we would like to directly import data into a more user friendly library like pandas.
Import data from a database to pandas
Step6: pandas.read_sql_query takes a SQL query and a connection object and imports the data into a DataFrame, also keeping the same data types of the database columns. pandas provides a lot of the same functionality of SQL with a more user-friendly interface.
However, sqlite3 is extremely useful for downselecting data before importing them in pandas.
For example you might have 1 TB of data in a table stored in a database on a server machine. You are interested in working on a subset of the data based on some criterion, unfortunately it would be impossible to first load data into pandas and then filter them, therefore we should tell the database to perform the filtering and just load into pandas the downsized dataset. | Python Code:
import os
data_iris_folder_content = os.listdir("./iris-species")
error_message = "Error: sqlite file not available, check instructions above to download it"
assert "database.sqlite" in data_iris_folder_content, error_message
Explanation: Access a Database with Python - Iris Dataset
The Iris dataset is a popular dataset especially in the Machine Learning community, it is a set of features of 50 Iris flowers and their classification into 3 species.
It is often used to introduce classification Machine Learning algorithms.
First let's download the dataset in SQLite format from Kaggle:
https://www.kaggle.com/uciml/iris/
Download database.sqlite and save it in the data/iris folder.
<p><img src="https://upload.wikimedia.org/wikipedia/commons/4/49/Iris_germanica_%28Purple_bearded_Iris%29%2C_Wakehurst_Place%2C_UK_-_Diliff.jpg" alt="Iris germanica (Purple bearded Iris), Wakehurst Place, UK - Diliff.jpg" height="145" width="114"></p>
<p><br> From <a href="https://commons.wikimedia.org/wiki/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg#/media/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg">Wikimedia</a>, by <a href="//commons.wikimedia.org/wiki/User:Diliff" title="User:Diliff">Diliff</a> - <span class="int-own-work" lang="en">Own work</span>, <a href="http://creativecommons.org/licenses/by-sa/3.0" title="Creative Commons Attribution-Share Alike 3.0">CC BY-SA 3.0</a>, <a href="https://commons.wikimedia.org/w/index.php?curid=33037509">Link</a></p>
First let's check that the sqlite database is available and display an error message if the file is not available (assert checks if the expression is True, otherwise throws AssertionError with the error message string provided):
End of explanation
import sqlite3
conn = sqlite3.connect('./iris-species/database.sqlite')
cursor = conn.cursor()
type(cursor)
Explanation: Access the Database with the sqlite3 Package
We can use the sqlite3 package from the Python standard library to connect to the sqlite database:
End of explanation
for row in cursor.execute("SELECT name FROM sqlite_master"):
print(row)
Explanation: A sqlite3.Cursor object is our interface to the database, mostly throught the execute method that allows to run any SQL query on our database.
First of all we can get a list of all the tables saved into the database, this is done by reading the column name from the sqlite_master metadata table with:
SELECT name FROM sqlite_master
The output of the execute method is an iterator that can be used in a for loop to print the value of each row.
End of explanation
cursor.execute("SELECT name FROM sqlite_master").fetchall()
Explanation: a shortcut to directly execute the query and gather the results is the fetchall method:
End of explanation
sample_data = cursor.execute("SELECT * FROM Iris LIMIT 20").fetchall()
print(type(sample_data))
sample_data
[row[0] for row in cursor.description]
Explanation: Notice: this way of finding the available tables in a database is specific to sqlite, other databases like MySQL or PostgreSQL have different syntax.
Then we can execute standard SQL query on the database, SQL is a language designed to interact with data stored in a relational database. It has a standard specification, therefore the commands below work on any database.
If you need to connect to another database, you would use another package instead of sqlite3, for example:
MySQL Connector for MySQL
Psycopg for PostgreSQL
pymssql for Microsoft MS SQL
then you would connect to the database using specific host, port and authentication credentials but then you could execute the same exact SQL statements.
Let's take a look for example at the first 3 rows in the Iris table:
End of explanation
import pandas as pd
iris_data = pd.read_sql_query("SELECT * FROM Iris", conn)
iris_data.head()
iris_data.dtypes
Explanation: It is evident that the interface provided by sqlite3 is low-level, for data exploration purposes we would like to directly import data into a more user friendly library like pandas.
Import data from a database to pandas
End of explanation
iris_setosa_data = pd.read_sql_query("SELECT * FROM Iris WHERE Species == 'Iris-setosa'", conn)
iris_setosa_data
print(iris_setosa_data.shape)
print(iris_data.shape)
Explanation: pandas.read_sql_query takes a SQL query and a connection object and imports the data into a DataFrame, also keeping the same data types of the database columns. pandas provides a lot of the same functionality of SQL with a more user-friendly interface.
However, sqlite3 is extremely useful for downselecting data before importing them in pandas.
For example you might have 1 TB of data in a table stored in a database on a server machine. You are interested in working on a subset of the data based on some criterion, unfortunately it would be impossible to first load data into pandas and then filter them, therefore we should tell the database to perform the filtering and just load into pandas the downsized dataset.
End of explanation |
1,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Rock, Paper & Scissors with TensorFlow Hub - TFLite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Select the Hub/TF2 module to use
Hub modules for TF 1.x won't work here, please use one of the selections provided.
Step3: Data preprocessing
Use TensorFlow Datasets to load the rock, paper and scissors dataset.
This tfds package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see loading image data
Step4: The tfds.load method downloads and caches the data, and returns a tf.data.Dataset object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.
Since "rock_paper_scissors" doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
Step5: Format the Data
Use the tf.image module to format the images for the task.
Resize the images to a fixes input size, and rescale the input channels
Step6: Now shuffle and batch the data
Step7: Inspect a batch
Step8: Defining the model
All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.
For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
Step9: Training the model
Step10: Export the model
Step11: Export the SavedModel
Step12: Convert with TFLiteConverter
Step13: Test the TFLite model using the Python Interpreter
Step14: Download the model
NOTE
Step15: Prepare the test images for download (Optional)
This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Explanation: Rock, Paper & Scissors with TensorFlow Hub - TFLite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c06_exercise_rock_paper_scissors_solution.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c06_exercise_rock_paper_scissors_solution.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Setup
End of explanation
module_selection = ("mobilenet_v2", 224, 1280) #@param ["(\"mobilenet_v2\", 224, 1280)", "(\"inception_v3\", 299, 2048)"] {type:"raw", allow-input: true}
handle_base, pixels, FV_SIZE = module_selection
MODULE_HANDLE = "https://tfhub.dev/google/tf2-preview/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {} and output dimension {}".format(
MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))
Explanation: Select the Hub/TF2 module to use
Hub modules for TF 1.x won't work here, please use one of the selections provided.
End of explanation
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
Explanation: Data preprocessing
Use TensorFlow Datasets to load the rock, paper and scissors dataset.
This tfds package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see loading image data
End of explanation
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
splits, info = tfds.load('rock_paper_scissors', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
Explanation: The tfds.load method downloads and caches the data, and returns a tf.data.Dataset object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.
Since "rock_paper_scissors" doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.
End of explanation
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
Explanation: Format the Data
Use the tf.image module to format the images for the task.
Resize the images to a fixes input size, and rescale the input channels
End of explanation
BATCH_SIZE = 32 #@param {type:"integer"}
train_batches = train_examples.shuffle(num_examples // 4).batch(BATCH_SIZE).map(format_image).prefetch(1)
validation_batches = validation_examples.batch(BATCH_SIZE).map(format_image).prefetch(1)
test_batches = test_examples.batch(1).map(format_image)
Explanation: Now shuffle and batch the data
End of explanation
for image_batch, label_batch in train_batches.take(1):
pass
image_batch.shape
Explanation: Inspect a batch
End of explanation
do_fine_tuning = False #@param {type:"boolean"}
print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
hub.KerasLayer(MODULE_HANDLE,
input_shape=IMAGE_SIZE + (3, ),
output_shape=[FV_SIZE],
trainable=do_fine_tuning),
tf.keras.layers.Dense(num_classes)
])
model.summary()
Explanation: Defining the model
All it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.
For speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.
End of explanation
if do_fine_tuning:
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
else:
model.compile(
optimizer='adam',
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 5
hist = model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches)
Explanation: Training the model
End of explanation
RPS_SAVED_MODEL = "rps_saved_model"
Explanation: Export the model
End of explanation
tf.saved_model.save(model, RPS_SAVED_MODEL)
%%bash -s $RPS_SAVED_MODEL
saved_model_cli show --dir $1 --tag_set serve --signature_def serving_default
loaded = tf.saved_model.load(RPS_SAVED_MODEL)
print(list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)
Explanation: Export the SavedModel
End of explanation
converter = tf.lite.TFLiteConverter.from_saved_model(RPS_SAVED_MODEL)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
f.write(tflite_model)
Explanation: Convert with TFLiteConverter
End of explanation
# Load TFLite model and allocate tensors.
tflite_model_file = 'converted_model.tflite'
with open(tflite_model_file, 'rb') as fid:
tflite_model = fid.read()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
from tqdm import tqdm
# Gather results for the randomly sampled test images
predictions = []
test_labels, test_imgs = [], []
for img, label in tqdm(test_batches.take(10)):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label.numpy()[0])
test_imgs.append(img)
#@title Utility functions for plotting
# Utilities for plotting
class_names = ['rock', 'paper', 'scissors']
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
print(type(predicted_label), type(true_label))
if predicted_label == true_label:
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
#@title Visualize the outputs { run: "auto" }
index = 0 #@param {type:"slider", min:0, max:9, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_imgs)
plt.show()
Explanation: Test the TFLite model using the Python Interpreter
End of explanation
with open('labels.txt', 'w') as f:
f.write('\n'.join(class_names))
try:
from google.colab import files
files.download('converted_model.tflite')
files.download('labels.txt')
except:
pass
Explanation: Download the model
NOTE: You might have to run to the cell below twice
End of explanation
!mkdir -p test_images
from PIL import Image
for index, (image, label) in enumerate(test_batches.take(50)):
image = tf.cast(image * 255.0, tf.uint8)
image = tf.squeeze(image).numpy()
pil_image = Image.fromarray(image)
pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))
!ls test_images
!zip -qq rps_test_images.zip -r test_images/
try:
files.download('rps_test_images.zip')
except:
pass
Explanation: Prepare the test images for download (Optional)
This part involves downloading additional test images for the Mobile Apps only in case you need to try out more samples
End of explanation |
1,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression
This notebook covers multi-variate "linear regression". We'll be going over how to use the scikit-learn regression model, as well as how to train the regressor using the fit() method, and how to predict new labels using the predict() method. We'll be analyzing a data set consisting of house prices in Boston.
If you're interested in the deeper mathematics of linear regression methods, check out the Wikipedia page and also check out Andrew Ng's wonderful lectures for free on YouTube.
Step1: We'll start by looking at an example of a dataset from scikit-learn. First we'll import our usual data analysis imports, then sklearn's built-in boston dataset.
You should always try to do a quick visualization of the data you have. Let's go ahead and make a histogram of the prices.
Step2: Univariate Regression
We will start by setting up the X and Y arrays for numpy to take in.
An important note for the X array
Step3: Let's import the linear regression library from the sklearn module.
The sklearn.linear_model.LinearRegression class is an estimator. Estimators predict a value based on the observed data. In scikit-learn, all estimators implement the fit() and predict() methods. The former method is used to learn the parameters of a model, and the latter method is used to predict the value of a response variable for an explanatory variable using the learned parameters. It is easy to experiment with different models using scikit-learn because all estimators implement the fit and predict methods.
Step4: Next, we create a LinearRegression object; afterwards, type lreg. and then press tab to see the list of methods available on this object.
Step5: The functions we will be using are
Step6: Let's go ahead and check the intercept and number of coefficients.
Step7: With our X and Y, we now have the solution to the linear regression.
$$y=mx+b$$
where b = Intercept, and m is the Coefficient of Estimate for the feature "Rooms"
Step8: Multivariate Regression
Let's add more features to our prediction model.
Step9: Finally, we're ready to pass the X and Y to the linear regression object.
Step10: Let's go ahead and check the intercept and number of coefficients.
Step11: Great! So we have basically made an equation for a line, but instead of just one coefficient m and an intercept b, we now have 13 coefficients. To get an idea of what this looks like, check out the documentation for this equation
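Written out generically (with $x_1, \dots, x_{13}$ standing for the 13 features and $m_1, \dots, m_{13}$ for their coefficients — notation chosen here just to mirror the single-feature line above):
$$\hat{y} = b + m_1 x_1 + m_2 x_2 + \cdots + m_{13} x_{13}$$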
Step12: Just like we initially plotted out, it seems the highest correlation between a feature and a house price was the number of rooms.
Now let's move on to Predicting prices!
Training and Validation
In a dataset, a training set is used to build up a model, while a validation set is used to validate the model built. Data points in the training set are excluded from the validation set. The correct way to pick out samples from your dataset to be part of either the training or validation (also called test) set is randomly.
Fortunately, scikit-learn has a built-in function specifically for this called train_test_split.
The parameters passed are your X and Y, then optionally a test_size parameter, representing the proportion of the dataset to include in the test split, as well as a train_size parameter. The default split is
Step13: Let's go ahead and see what the output of the train_test_split was
Step14: Great! Now that we have our training and testing sets, we can continue on to predicting prices based on the multiple variables.
Prediction!
Now that we have our training and testing sets, let's go ahead and try to use them to predict house prices. We'll use our training set for the prediction and then use our testing set for validation.
Step15: Now run a prediction on both the X training set and the testing set.
Step16: Let's see if we can find the error in our fitted line. A common error measure is called "root mean squared error" (RMSE). RMSE is similar to the standard deviation: it is calculated by summing the squared differences between the predictions and the true values, dividing by the number of elements, and taking the square root.
The root mean square error (RMSE) corresponds approximately to the standard deviation, i.e., a prediction won't vary by more than 2 times the RMSE 95% of the time. Note
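In symbols, with $y_i$ the observed prices, $\hat{y}_i$ the predicted prices, and $n$ the number of samples (notation chosen here for illustration):
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$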
Step17: It looks like our mean square error between our training and testing was pretty close. But how do we actually visualize this?
Visualizing Residuals
In regression analysis, the difference between the observed value of the dependent variable (y) and the predicted value (ŷ) is called the residual (e). Each data point has one residual, so that
$$e = y - \hat{y}$$
Step18: Great! Looks like there aren't any major patterns to be concerned about; it may be interesting to check out the line pattern at the top of the graph, but overall the majority of the residuals seem to be randomly allocated above and below the horizontal. We could also use seaborn to create these plots
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
Explanation: Regression
This notebook covers multi-variate "linear regression". We'll be going over how to use the scikit-learn regression model, as well as how to train the regressor using the fit() method, and how to predict new labels using the predict() method. We'll be analyzing a data set consisting of house prices in Boston.
If you're interested in the deeper mathematics of linear regression methods, check out the Wikipedia page and also check out Andrew Ng's wonderful lectures for free on YouTube.
End of explanation
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.DESCR)
plt.hist(boston.target, bins=50)
plt.xlabel("Prices in $1000s")
plt.ylabel("Number of Houses")
# the 5th column in "boston" dataset is "RM" (# rooms)
plt.scatter(boston.data[:,5], boston.target)
plt.ylabel("Prices in $1000s")
plt.xlabel("# rooms")
boston_df = DataFrame(boston.data)
boston_df.columns = boston.feature_names
boston_df.head(5)
boston_df = DataFrame(boston.data)
boston_df.columns = boston.feature_names
boston_df['Price'] = boston.target
boston_df.head(5)
sns.lmplot(x='RM', y='Price', data=boston_df)
Explanation: We'll start by looking at an example of a dataset from scikit-learn. First we'll import our usual data analysis libraries, then sklearn's built-in Boston dataset.
You should always try to do a quick visualization of the data you have. Let's go ahead and make a histogram of the prices.
End of explanation
# Set up X as median room values
X = boston_df.RM
# Use vstack to make X two-dimensional
X = np.vstack(boston_df.RM)
# Set up Y as the target price of the houses.
Y = boston_df.Price
type(X)
type(Y)
Explanation: Univariate Regression
We will start by setting up the X and Y arrays for numpy to take in.
An important note for the X array: Numpy expects a two-dimensional array, the first dimension is the different example values, and the second dimension is the attribute number. In this case we have our value as the mean number of rooms per house, and this is a single attribute so the second dimension of the array is just 1. So we'll need to create a (506,1) shape array. There are a few ways to do this, but an easy way to do this is by using numpy's built-in vertical stack tool, vstack.
End of explanation
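# Added sketch (not in the original notebook): an equivalent, arguably more
# idiomatic way to get the required (506, 1) shape is numpy's reshape.
X_alt = boston_df.RM.values.reshape(-1, 1)
print(X_alt.shape)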
import sklearn
from sklearn.linear_model import LinearRegression
Explanation: Let's import the linear regression library from the sklearn module.
The sklearn.linear_model.LinearRegression class is an estimator. Estimators predict a value based on the observed data. In scikit-learn, all estimators implement the fit() and predict() methods. The former method is used to learn the parameters of a model, and the latter method is used to predict the value of a response variable for an explanatory variable using the learned parameters. It is easy to experiment with different models using scikit-learn because all estimators implement the fit and predict methods.
End of explanation
# Create a LinearRegression Object
lreg = LinearRegression()
Explanation: Next, we create a LinearRegression object; afterwards, type lreg. and press tab to see the list of methods available on this object.
End of explanation
# Implement Linear Regression
lreg.fit(X,Y)
Explanation: The functions we will be using are:
lreg.fit() which fits a linear model
lreg.predict() which is used to predict Y using the linear model with estimated coefficients
lreg.score() which returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, learn more about it here
End of explanation
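# Quick illustration (added sketch): lreg.score() returns the R^2 of the
# fitted model; here we evaluate it on the same data we trained on.
print('R^2 on the training data: %.3f' % lreg.score(X, Y))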
print(' The estimated intercept coefficient is %.2f ' %lreg.intercept_)
print(' The number of coefficients used was %d ' % len(lreg.coef_))
type(lreg.coef_)
# Set a DataFrame from the Features
coeff_df = DataFrame(["Intercept", "Rooms"])
coeff_df.columns = ['Feature']
# Set a new column lining up the coefficients from the linear regression
coeff_df["Coefficient Estimate"] = pd.Series(np.append(lreg.intercept_, lreg.coef_))
# Show
coeff_df
Explanation: Let's go ahead check the intercept and number of coefficients.
End of explanation
rooms = pd.Series([4], name="rooms")
X_test = pd.DataFrame(rooms)
X_test
lreg.predict(X_test)
Explanation: With our X and Y, we now have the solution to the linear regression.
$$y=mx+b$$
where b = Intercept, and m is the Coefficient of Estimate for the feature "Rooms"
End of explanation
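# Sanity check (added sketch): computing b + m*x by hand should match
# lreg.predict() for the 4-room value used above.
manual_prediction = lreg.intercept_ + lreg.coef_[0] * 4
print(manual_prediction)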
# Data Columns
X_multi = boston_df.drop('Price', axis=1)
# Targets
Y_target = boston_df.Price
Explanation: Multivariate Regression
Let's add more features to our prediction model.
End of explanation
# Implement Linear Regression
lreg.fit(X_multi,Y_target)
Explanation: Finally, we're ready to pass the X and Y using the linear regression object.
End of explanation
print(' The estimated intercept coefficient is %.2f ' %lreg.intercept_)
print(' The number of coefficients used was %d ' % len(lreg.coef_))
Explanation: Let's go ahead check the intercept and number of coefficients.
End of explanation
# Set a DataFrame from the Features
coeff_df = DataFrame(boston_df.columns)
coeff_df.columns = ['Features']
# Set a new column lining up the coefficients from the linear regression
coeff_df["Coefficient Estimate"] = pd.Series(lreg.coef_)
# Show
coeff_df
Explanation: Great! So we have basically made an equation for a line, but instead of just one coefficient m and an intercept b, we now have 13 coefficients. To get an idea of what this looks like, check out the documentation for this equation:
$$ y(w,x) = w_0 + w_1 x_1 + ... + w_p x_p $$
Where $$w = (w_1, ...w_p)$$ as the coefficients and $$ w_0 $$ as the intercept
What we'll do next is set up a DataFrame showing all the Features and their estimated coefficients obtained from the linear regression.
End of explanation
# Grab the output and set as X and Y test and train data sets!
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X_multi, Y_target)
Explanation: Just like we initially plotted out, it seems the highest correlation between a feature and a house price was the number of rooms.
Now let's move on to Predicting prices!
Training and Validation
In a dataset, a training set is used to build up a model, while a validation set is used to validate the model built. Data points in the training set are excluded from the validation set. The correct way to pick out samples from your dataset for either the training or the validation (also called test) set is randomly.
Fortunately, scikit learn has a built in function specifically for this called train_test_split.
The parameters passed are your X and Y, then optionally a test_size parameter, representing the proportion of the dataset to include in the test split, as well as a train_size parameter. The default split is: 75% for the training set and 25% for the testing set. You can learn more about these parameters here
End of explanation
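# Illustrative only (added sketch): the same split with the optional
# parameters mentioned above made explicit (25% test set, fixed random_state
# for reproducibility). The code below keeps using the split made above.
from sklearn.model_selection import train_test_split
X_tr_demo, X_te_demo, Y_tr_demo, Y_te_demo = train_test_split(
    X_multi, Y_target, test_size=0.25, random_state=42)
print(X_tr_demo.shape, X_te_demo.shape)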
# Print shapes of the training and testing data sets
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
X_train.head(5)
Explanation: Let's go ahead and see what the output of the train_test_split was:
End of explanation
# Create our regression object
lreg = LinearRegression()
# Once again do a linear regression, except only on the training sets this time
lreg.fit(X_train,Y_train)
Explanation: Great! Now that we have our training and testing sets, we can continue on to predicting prices based on the multiple variables.
Prediction!
Now that we have our training and testing sets, let's go ahead and try to use them to predict house prices. We'll use our training set for the prediction and then use our testing set for validation.
End of explanation
# Predictions on training and testing sets
pred_train = lreg.predict(X_train)
pred_test = lreg.predict(X_test)
Explanation: Now run a prediction on both the X training set and the testing set.
End of explanation
print("Fit a model X_train, and calculate MSE with Y_train: %.2f" \
% np.mean((Y_train - pred_train) ** 2))
print("Fit a model X_train, and calculate MSE with X_test and Y_test: %.2f" \
% np.mean((Y_test - pred_test) ** 2))
Explanation: Let's see if we can find the error in our fitted line. A common error measure is called "root mean squared error" (RMSE). RMSE is similar to the standard deviation. It is calculated by taking the square root of the mean of the squared errors, where each squared error is the square of the difference between the prediction and the true value.
The root mean square error (RMSE) corresponds approximately to the standard deviation. i.e., a prediction won't vary more than 2 times the RMSE 95% of the time. Note: Review the Normal Distribution Appendix lecture if this doesn't make sense to you or check out this link.
Now we will get the mean square error
End of explanation
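# Added sketch: the RMSE discussed above is simply the square root of the MSE
# computed in the cell above.
print("Train RMSE: %.2f" % np.sqrt(np.mean((Y_train - pred_train) ** 2)))
print("Test RMSE: %.2f" % np.sqrt(np.mean((Y_test - pred_test) ** 2)))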
# Scatter plot the training data
train = plt.scatter(pred_train,(Y_train-pred_train),c='b',alpha=0.5)
# Scatter plot the testing data
test = plt.scatter(pred_test,(Y_test-pred_test),c='r',alpha=0.5)
# Plot a horizontal axis line at 0
plt.hlines(y=0,xmin=-10,xmax=50)
#Labels
plt.legend((train,test),('Training','Test'),loc='lower left')
plt.title('Residual Plots')
Explanation: It looks like our mean square error between our training and testing was pretty close. But how do we actually visualize this?
Visualizing Residuals
In regression analysis, the difference between the observed value of the dependent variable (y) and the predicted value (ŷ) is called the residual (e). Each data point has one residual, so that:
$$Residual = Observed\:value - Predicted\:value $$
You can think of these residuals in the same way as the D value we discussed earlier, in this case however, there were multiple data points considered.
A residual plot is a graph that shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate.
Residual plots are a good way to visualize the errors in your data. If you have done a good job, the residuals should be randomly scattered around the zero line. If there is some structure or pattern, your model is not capturing something. There could be an interaction between 2 variables that you're not considering, or maybe you are measuring time-dependent data. If this is the case, go back to your model and check your data set closely.
So now let's go ahead and create the residual plot. For more info on the residual plots check out this great link.
End of explanation
# Residual plot of all the dataset using seaborn
sns.residplot(x='RM', y='Price', data=boston_df)
Explanation: Great! Looks like there aren't any major patterns to be concerned about, it may be interesting to check out the line pattern at the top of the graph, but overall the majority of the residuals seem to be randomly allocated above and below the horizontal. We could also use seaborn to create these plots:
End of explanation |
1,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Team
Step1: Import non-standard libraries (install as needed)
Step2: Optional directory creation
Step3: Is the ESRI Shapefile driver available?
Step4: Define a function which will create a shapefile from the points input and export it as kml if the option is set to True.
Step5: Define the file and layer name as well as the points to be mapped.
Step6: Define a function to create a nice map with the points using folium library.
Step7: Call the function specifying the list of points, the output map name and its zoom level. If save is not False, the map is saved as an html file. | Python Code:
from numpy import mean
import os
from os import makedirs,chdir
from os.path import exists
Explanation: Team: Satoshi Nakamoto <br>
Names: Alex Levering & Hèctor Muro <br>
Lesson 10 Exercise solution
Import standard libraries
End of explanation
from osgeo import ogr,osr
import folium
import simplekml
Explanation: Import non-standard libraries (install as needed)
End of explanation
if not exists('./data'):
makedirs('./data')
#chdir("./data")
Explanation: Optional directory creation
End of explanation
driverName = "ESRI Shapefile"
drv = ogr.GetDriverByName( driverName )
if drv is None:
    print("%s driver not available.\n" % driverName)
else:
    print("%s driver IS available.\n" % driverName)
Explanation: Is the ESRI Shapefile driver available?
End of explanation
def shpFromPoints(filename, layername, points, save_kml = True):
spatialReference = osr.SpatialReference()
spatialReference.ImportFromProj4('+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs')
ds = drv.CreateDataSource(filename)
layer=ds.CreateLayer(layername, spatialReference, ogr.wkbPoint)
layerDefinition = layer.GetLayerDefn()
point = ogr.Geometry(ogr.wkbPoint)
feature = ogr.Feature(layerDefinition)
kml = simplekml.Kml()
for i, value in enumerate(points):
point.SetPoint(0,value[0], value[1])
feature.SetGeometry(point)
layer.CreateFeature(feature)
kml.newpoint(name=str(i), coords = [(value[0],value[1])])
ds.Destroy()
if save_kml == True:
kml.save("my_points.kml")
Explanation: Define a function which will create a shapefile from the points input and export it as kml if the option is set to True.
End of explanation
filename = "wageningenpoints.shp"
layername = "wagpoints"
pts = [(51.987398, 5.665777),
(51.978434, 5.663133)]
shpFromPoints(filename, layername, pts)
Explanation: Define the file and layer name as well as the points to be mapped.
End of explanation
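# Optional check (added sketch, not part of the original exercise): re-open
# the shapefile we just wrote with ogr and confirm it contains one feature
# per input point.
check_ds = ogr.Open(filename)
check_layer = check_ds.GetLayer()
print("Features written: %d" % check_layer.GetFeatureCount())
check_ds = None  # close the datasource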
def mapFromPoints(pts, outname, zoom_level, save = True):
mean_long = mean([pt[0] for pt in pts])
mean_lat = mean([pt[1] for pt in pts])
point_map = folium.Map(location=[mean_long, mean_lat], zoom_start = zoom_level)
for pt in pts:
folium.Marker([pt[0], pt[1]],\
popup = folium.Popup(folium.element.IFrame(
html='''
<b>Latitude:</b> {lat}<br>
<b>Longitude:</b> {lon}<br>
'''.format(lat = pt[0], lon = pt[1]),\
width=150, height=100),\
max_width=150)).add_to(point_map)
if save == True:
point_map.save("{}.html".format(outname))
return point_map
Explanation: Define a function to create a nice map with the points using folium library.
End of explanation
mapFromPoints(pts, "SatoshiNakamotoMap", zoom_level = 6)
Explanation: Call the function specifying the list of points, the output map name and its zoom level. If save is not False, the map is saved as an html file.
End of explanation |
1,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This demo shows the method proposed in "Zhou, Bolei, et al. "Learning Deep Features for Discriminative Localization." arXiv preprint arXiv
Step1: Set the image you want to test and the classification network you want to use. Notice "conv_layer" should be the last conv layer before the average pooling layer.
Step2: Load the label name of each class.
Step3: Build network symbol and load network parameters.
Step4: Read the weight of the fc layer in softmax classification layer. Bias can be neglected since it does not really affect the result.
Load the image you want to test and convert it from BGR to RGB (OpenCV uses BGR by default).
Step5: Feed the image data to our network and get the outputs.
We select the top 5 classes for visualization by default.
Step6: Localize the discriminative regions by analysing the class's response in the network's last conv feature map. | Python Code:
# -*- coding: UTF-8 -*-
import matplotlib.pyplot as plt
%matplotlib inline
from IPython import display
import os
ROOT_DIR = '.'
import sys
sys.path.insert(0, os.path.join(ROOT_DIR, 'lib'))
import cv2
import numpy as np
import mxnet as mx
import matplotlib.pyplot as plt
Explanation: This demo shows the method proposed in "Zhou, Bolei, et al. "Learning Deep Features for Discriminative Localization." arXiv preprint arXiv:1512.04150 (2015)".
The proposed method can automatically localize the discriminative regions in an image using global average pooling
(GAP) in CNNs.
You can download the pretrained Inception-V3 network from here. Other networks with similar structure(use global average pooling after the last conv feature map) should also work.
End of explanation
im_file = os.path.join(ROOT_DIR, 'sample_pics/barbell.jpg')
synset_file = os.path.join(ROOT_DIR, 'models/inception-v3/synset.txt')
net_json = os.path.join(ROOT_DIR, 'models/inception-v3/Inception-7-symbol.json')
conv_layer = 'ch_concat_mixed_10_chconcat_output'
prob_layer = 'softmax_output'
arg_fc = 'fc1'
params = os.path.join(ROOT_DIR, 'models/inception-v3/Inception-7-0001.params')
mean = (128, 128, 128)
raw_scale = 1.0
input_scale = 1.0/128
width = 299
height = 299
resize_size = 340
top_n = 5
ctx = mx.cpu(1)
Explanation: Set the image you want to test and the classification network you want to use. Notice "conv_layer" should be the last conv layer before the average pooling layer.
End of explanation
synset = [l.strip() for l in open(synset_file).readlines()]
Explanation: Load the label name of each class.
End of explanation
symbol = mx.sym.load(net_json)
internals = symbol.get_internals()
symbol = mx.sym.Group([internals[prob_layer], internals[conv_layer]])
save_dict = mx.nd.load(params)
arg_params = {}
aux_params = {}
for k, v in save_dict.items():
l2_tp, name = k.split(':', 1)
if l2_tp == 'arg':
arg_params[name] = v
if l2_tp == 'aux':
aux_params[name] = v
mod = mx.model.FeedForward(symbol,
arg_params=arg_params,
aux_params=aux_params,
ctx=ctx,
allow_extra_params=False,
numpy_batch_size=1)
Explanation: Build network symbol and load network parameters.
End of explanation
weight_fc = arg_params[arg_fc+'_weight'].asnumpy()
# bias_fc = arg_params[arg_fc+'_bias'].asnumpy()
im = cv2.imread(im_file)
rgb = cv2.cvtColor(cv2.resize(im, (width, height)), cv2.COLOR_BGR2RGB)
Explanation: Read the weight of the fc layer in softmax classification layer. Bias can be neglected since it does not really affect the result.
Load the image you want to test and convert it from BGR to RGB (OpenCV uses BGR by default).
End of explanation
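# Quick sanity check (added sketch): weight_fc should be (num_classes, C),
# where C matches the channel count of the last conv feature map, and rgb
# should be a (height, width, 3) image.
print(weight_fc.shape, len(synset))
print(rgb.shape)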
def im2blob(im, width, height, mean=None, input_scale=1.0, raw_scale=1.0, swap_channel=True):
blob = cv2.resize(im, (height, width)).astype(np.float32)
blob = blob.reshape((1, height, width, 3))
# from nhwc to nchw
blob = np.swapaxes(blob, 2, 3)
blob = np.swapaxes(blob, 1, 2)
if swap_channel:
blob[:, [0, 2], :, :] = blob[:, [2, 0], :, :]
if raw_scale != 1.0:
blob *= raw_scale
if isinstance(mean, np.ndarray):
blob -= mean
elif isinstance(mean, tuple) or isinstance(mean, list):
blob[:, 0, :, :] -= mean[0]
blob[:, 1, :, :] -= mean[1]
blob[:, 2, :, :] -= mean[2]
elif mean is None:
pass
else:
        raise TypeError('mean should be either a tuple or a np.ndarray')
if input_scale != 1.0:
blob *= input_scale
return blob
blob = im2blob(im, width, height, mean=mean, swap_channel=True, raw_scale=raw_scale, input_scale=input_scale)
outputs = mod.predict(blob)
score = outputs[0][0]
conv_fm = outputs[1][0]
score_sort = -np.sort(-score)[:top_n]
inds_sort = np.argsort(-score)[:top_n]
Explanation: Feed the image data to our network and get the outputs.
We select the top 5 classes for visualization by default.
End of explanation
def get_cam(conv_feat_map, weight_fc):
assert len(weight_fc.shape) == 2
if len(conv_feat_map.shape) == 3:
C, H, W = conv_feat_map.shape
assert weight_fc.shape[1] == C
detection_map = weight_fc.dot(conv_feat_map.reshape(C, H*W))
detection_map = detection_map.reshape(-1, H, W)
elif len(conv_feat_map.shape) == 4:
N, C, H, W = conv_feat_map.shape
assert weight_fc.shape[1] == C
M = weight_fc.shape[0]
detection_map = np.zeros((N, M, H, W))
        for i in range(N):
tmp_detection_map = weight_fc.dot(conv_feat_map[i].reshape(C, H*W))
detection_map[i, :, :, :] = tmp_detection_map.reshape(-1, H, W)
return detection_map
plt.figure(figsize=(18, 6))
plt.subplot(1, 1+top_n, 1)
plt.imshow(rgb)
cam = get_cam(conv_fm, weight_fc[inds_sort, :])
for k in range(top_n):
detection_map = np.squeeze(cam.astype(np.float32)[k, :, :])
heat_map = cv2.resize(detection_map, (width, height))
max_response = detection_map.mean()
heat_map /= heat_map.max()
im_show = rgb.astype(np.float32)/255*0.3 + plt.cm.jet(heat_map/heat_map.max())[:, :, :3]*0.7
plt.subplot(1, 1+top_n, k+2)
plt.imshow(im_show)
    print('Top %d: %s(%.6f), max_response=%.4f' % (k+1, synset[inds_sort[k]], score_sort[k], max_response))
plt.show()
Explanation: Localize the discriminative regions by analysing the class's response in the network's last conv feature map.
End of explanation |
1,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
E-Commerce data with Neural network
In this note, I am going to use a neural network to analyze an e-commerce dataset. The data is from Udemy
Step1: The 2nd and 3rd columns are numeric and need to be normalized. The 1st, 4th and 5th columns are categorical variables. The 5th column, time_of_day, will need to be transformed into 4 one-hot encoded variables. The last column, user_action, is the label. The code below will transform the raw data into the format used for training.
Step2: Forward Step, Cost Function
The forward step will involve the softmax and tanh functions. For the mathematical details, see Itetsu Blog
Step3: Below we train a neural network model with 1 hidden layer using the tanh activation and an output layer activated by the softmax function.
Step4: Gradient Descent with Backpropagation
For the mathematical details, see Itetsu Blog | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt # Plotting library
from sklearn.utils import shuffle
# Allow matplotlib to plot inside this notebook
%matplotlib inline
# Set the seed of the numpy random number generator so that the result is reproducable
np.random.seed(seed=1)
# check the data first
df = pd.read_csv('../data/ecommerce_data.csv')
df.head()
# 4 unique values for time_of_day
df.time_of_day.unique()
Explanation: E-Commerce data with Neural network
In this note, I am going to use a neural network to analyze an e-commerce dataset. The data is from Udemy: Deep Learning with Python lecture. The label has multiple classes. The model will have 1 hidden layer with 5 hidden units, using the tanh function for activation. The output layer will be activated by softmax.
Process the Data
Import required library and the data. Print out first few rows to confirm the data structure.
End of explanation
def get_data():
df = pd.read_csv('../data/ecommerce_data.csv')
    data = df.values
X = data[:, :-1] # last column is label
Y = data[:, -1]
# Normalization for 2nd and 3rd columns
X[:, 1] = (X[:, 1] - X[: ,1].mean())/X[:, 1].std()
X[:, 2] = (X[:, 2] - X[: ,2].mean())/X[:, 2].std()
# handle time_of_day
R, C = X.shape
# we will have 4 more columns for each value in time_of_day (4 unique values)
X2 = np.zeros((R, C+3)) # initialized as zero
Z = np.zeros((R, 4))
Z[np.arange(R), X[:, C-1].astype(np.int32)] = 1
# copy data from X except time_of_day
X2[:, 0:(C-1)] = X[:, 0:(C-1)]
# add 4 dummy variables for time_of_day
X2[:, (C-1):(C+3)] = Z
return X2, Y
# Produce multi-class indicator for Y
def y2indicator(y, K):
N = len(y)
ind = np.zeros((N, K))
for i in range(N):
ind[i, y[i]] = 1
return ind
Explanation: The 2nd and 3rd columns are numeric and need to be normalized. The 1st, 4th and 5th columns are categorical variables. The 5th column, time_of_day, will need to be transformed into 4 one-hot encoded variables. The last column, user_action, is the label. The code below will transform the raw data into the format used for training.
End of explanation
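# Quick check (added sketch): after the transformation we expect 4 extra
# one-hot columns in place of the single time_of_day column.
X_check, Y_check = get_data()
print(X_check.shape, Y_check.shape)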
def softmax(a):
expA = np.exp(a)
return expA / expA.sum(axis=1, keepdims=True)
def forward(X, W1, b1, W2, b2):
Z = np.tanh(X.dot(W1) + b1)
    return softmax(Z.dot(W2) + b2), Z # also return the hidden layer activations, needed to calculate derivatives
def predict(P_Y_given_X):
return np.argmax(P_Y_given_X, axis=1)
def classification_rate(Y, P):
return np.mean(Y == P)
def cross_entropy(T, pY):
return -np.mean(T*np.log(pY))
Explanation: Forward Step, Cost Function
The forward step will involve the softmax and tanh functions. For the mathematical details, see Itetsu Blog: Neural-Network Cost-Function.
We can first define the functions used to produce predictions, as below.
End of explanation
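# Optional variant (added sketch): subtracting the row-wise max before
# exponentiating gives a numerically safer softmax; it is mathematically
# equivalent to the version defined above and is not used by the code below.
def softmax_stable(a):
    a_shifted = a - a.max(axis=1, keepdims=True)
    expA = np.exp(a_shifted)
    return expA / expA.sum(axis=1, keepdims=True)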
# create train data
X, Y = get_data()
X, Y = shuffle(X, Y)
Y = Y.astype(np.int32)
M = 5 # n of hidden units
D = X.shape[1] # n of inputs
K = len(set(Y)) # n of class/ output nodes
# training data
Xtrain = X[:-100]
Ytrain = Y[:-100]
Ytrain_ind = y2indicator(Ytrain, K)
# test/validation data
Xtest = X[-100:]
Ytest = Y[-100:]
Ytest_ind = y2indicator(Ytest, K)
# initialize weight
W1 = np.random.randn(D, M)
b1 = np.zeros(M)
W2 = np.random.randn(M, K)
b2 = np.zeros(K)
Explanation: Below we train a neural network model with 1 hidden layer using the tanh activation and an output layer activated by the softmax function.
End of explanation
# start training
train_costs = []
test_costs = []
learning_rate = 0.001
for i in range(10000):
pYtrain, Ztrain = forward(Xtrain, W1, b1, W2, b2)
pYtest, Ztest = forward(Xtest, W1, b1, W2, b2)
ctrain = cross_entropy(Ytrain_ind, pYtrain)
ctest = cross_entropy(Ytest_ind, pYtest)
train_costs.append(ctrain)
test_costs.append(ctest)
W2 -= learning_rate*Ztrain.T.dot(pYtrain - Ytrain_ind)
b2 -= learning_rate*(pYtrain - Ytrain_ind).sum(axis=0)
dZ = (pYtrain - Ytrain_ind).dot(W2.T) * (1- Ztrain*Ztrain)
W1 -= learning_rate*Xtrain.T.dot(dZ)
b1 -= learning_rate*dZ.sum(axis=0)
if i % 1000 == 0:
print(i, ctrain, ctest)
print("Final train classification_rate:", classification_rate(Ytrain, predict(pYtrain)))
print("Final test classification_rate:", classification_rate(Ytest, predict(pYtest)))
legend1, = plt.plot(train_costs, label='train cost')
legend2, = plt.plot(test_costs, label='test cost')
plt.legend([legend1, legend2])
plt.show()
Explanation: Gradient Descent with Backpropagation
For the mathematical details, see Itetsu Blog: Neural-Network Backward-propagation.
End of explanation |
1,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The MNIST dataset
The MNIST database of handwritten digits, available at Yann Lecun web site, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST.
The digits have been size-normalized and centered in a fixed-size image.
It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting.
MNIST numbers
| | |
|---------------------|-----|
|training images
Step1: Below, as an example, we take the first ten samples from the training set and plot them
Step2: Next cell is just for styling | Python Code:
# import the mnist class
from mnist import MNIST
# init with the 'data' dir
mndata = MNIST('./data')
# Load data
mndata.load_training()
mndata.load_testing()
# The number of pixels per side of all images
img_side = 28
# Each input is a raw vector.
# The number of units of the network
# corresponds to the number of input elements
n_mnist_pixels = img_side*img_side
Explanation: The MNIST dataset
The MNIST database of handwritten digits, available at Yann Lecun web site, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST.
The digits have been size-normalized and centered in a fixed-size image.
It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting.
MNIST numbers
| | |
|---------------------|-----|
|training images: |60000|
|test images: |10000|
|image pixels: |28x28|
|image format: |raw vector of 784 elements |
| |encoding 0-255|
MNIST with python
Get the dataset and the python functionalities
You can obtain and use it in Python in two steps:
* Install python-mnist:
sudo pip install python_mnist
Download the files of the dataset in a folder called data. You can find them here. Just unzip them in the same directory of your python scripts.
Using the dataset in python
Now we can use the dataset in a readable way by using the load_training and load_testing methods of the MNIST object:
End of explanation
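# Quick check (added sketch): after load_training() the python-mnist object
# keeps the data on itself; both counts should be 60000 for the training set.
print(len(mndata.train_images), len(mndata.train_labels))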
%matplotlib inline
from pylab import *
# Define the number of samples to take
num_samples = 10
# create a figure where we will store all samples
figure(figsize=(10,1))
# Iterate over samples indices
for sample in range(num_samples):
# The image corresponding to the 'sample' index
img = mndata.train_images[sample]
# The label of the image
label = mndata.train_labels[sample]
# The image is stored as a rolled vector,
# we have to roll it back in a matrix
aimg = array(img).reshape(img_side, img_side)
# Open a subplot for each sample
subplot(1, num_samples, sample+1)
# The corresponding digit is the title of the plot
title(label)
# We use imshow to plot the matrix of pixels
imshow(aimg, interpolation = 'none',
aspect = 'auto', cmap = cm.binary)
axis("off")
show()
Explanation: Below, as an example, we take the first ten samples from the training set and plot them:
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../style/ipybn.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Next cell is just for styling
End of explanation |
1,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background information on filtering
Here we give some background information on filtering in general, and
how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus (1987) [1] and
Ifeachor & Jervis (2002) [2], and for filtering in an
M/EEG context we recommend reading Widmann et al. (2015) [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the tut-filter-resample tutorial.
Problem statement
Practical issues with filtering electrophysiological data are covered
in Widmann et al. (2012) [6]_, where they conclude with this statement
Step1: Take for example an ideal low-pass filter, which would give a magnitude
response of 1 in the pass-band (up to frequency $f_p$) and a magnitude
response of 0 in the stop-band (down to frequency $f_s$) such that
$f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity)
Step2: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in the frequency domain is actually a sinc_
function in the time domain, which requires an infinite number of samples
(and thus infinite time) to represent. So although this filter has ideal
frequency suppression, it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 s, and look
at the filter itself in the time domain and the frequency domain
Step3: This is not so good! Making the filter 10 times longer (1 s) gets us a
slightly better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here,
and the filter has a correspondingly much longer group delay (again equal
to half the filter length, or 0.5 seconds)
Step4: Let's make the stop-band tighter still with a longer filter (10 s),
with a resulting larger x-axis
Step5: Now we have very sharp frequency suppression, but our filter rings for the
entire 10 seconds. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include
Step6: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a more
gradual slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 s filter
Step7: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable
stop-band attenuation
Step8: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),
our effective stop frequency gets pushed out past 60 Hz
Step9: If we want a filter that is only 0.1 seconds long, we should probably use
something more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz)
Step10: So far, we have only discussed non-causal filtering, which means that each
sample at each time point $t$ is filtered using samples that come
after ($t + \Delta t$) and before ($t - \Delta t$) the current
time point $t$.
In this sense, each sample is influenced by samples that come both before
and after it. This is useful in many cases, especially because it does not
delay the timing of events.
However, sometimes it can be beneficial to use causal filtering,
whereby each sample $t$ is filtered only using time points that came
after it.
Note that the delay is variable (whereas for linear/zero-phase filters it
is constant) but small in the pass-band. Unlike zero-phase filters, which
require time-shifting backward the output of a linear-phase filtering stage
(and thus becoming non-causal), minimum-phase filters do not require any
compensation to achieve small delays in the pass-band. Note that as an
artifact of the minimum phase filter construction step, the filter does
not end up being as steep as the linear/zero-phase version.
We can construct a minimum-phase filter from our existing linear-phase
filter with the
Step11: Applying FIR filters
Now lets look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random and line). Note that the original clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
Step12: Filter it with a shallow cutoff, linear-phase FIR (which allows us to
compensate for the constant filter delay)
Step13: Filter it with a different design method fir_design="firwin2", and also
compensate for the constant filter delay. This method does not produce
quite as sharp a transition compared to fir_design="firwin", despite
being twice as long
Step14: Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice
Step15: Let's also filter it with the MNE-C default, which is a long-duration
steep-slope FIR filter designed using frequency-domain techniques
Step16: And now an example of a minimum-phase filter
Step18: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially in signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter
Step19: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from
Step20: The falloff of this filter is not very steep.
<div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)
by using
Step21: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for
Step22: If we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale)
Step23: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before
Step24: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are non-causal (zero-phase), can
make activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen (2011) [3], investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet (2012) [5].
Perhaps more revealing, it was noted in Widmann & Schröger (2012) [6] that
the problematic low-pass filters from VanRullen (2011) [3]
Step25: Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) [12]_,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving sample-data,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. (2015) [8] suggest using baseline
correction to remove slow drifts in data. However, Maess et al. (2016) [9]
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. (2016) [10]_ rebutted that baseline correction can correct
for problems with filtering.
To see what they mean, consider again our old simulated signal x from
before
Step26: In response, Maess et al. (2016) [11]_ note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x with some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multi-electrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period
Step27: Both groups seem to acknowledge that the choices of filtering cutoffs, and
perhaps even the application of baseline correction, depend on the
characteristics of the data being investigated, especially when it comes to | Python Code:
import numpy as np
from numpy.fft import fft, fftfreq
from scipy import signal
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
from mne.viz import plot_filter, plot_ideal_filter
import mne
sfreq = 1000.
f_p = 40.
flim = (1., sfreq / 2.) # limits for plotting
Explanation: Background information on filtering
Here we give some background information on filtering in general, and
how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus (1987) [1] and
Ifeachor & Jervis (2002) [2], and for filtering in an
M/EEG context we recommend reading Widmann et al. (2015) [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the tut-filter-resample tutorial.
Problem statement
Practical issues with filtering electrophysiological data are covered
in Widmann et al. (2012) [6]_, where they conclude with this statement:
Filtering can result in considerable distortions of the time course
(and amplitude) of a signal as demonstrated by VanRullen (2011) [[3]_].
Thus, filtering should not be used lightly. However, if effects of
filtering are cautiously considered and filter artifacts are minimized,
a valid interpretation of the temporal dynamics of filtered
electrophysiological data is possible and signals missed otherwise
can be detected with filtering.
In other words, filtering can increase signal-to-noise ratio (SNR), but if it
is not used carefully, it can distort data. Here we hope to cover some
filtering basics so users can better understand filtering trade-offs and why
MNE-Python has chosen particular defaults.
Filtering basics
Let's get some of the basic math down. In the frequency domain, digital
filters have a transfer function that is given by:
\begin{align}H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_M z^{-M}}
{1 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_N z^{-M}} \
&= \frac{\sum_{k=0}^Mb_kz^{-k}}{\sum_{k=1}^Na_kz^{-k}}\end{align}
In the time domain, the numerator coefficients $b_k$ and denominator
coefficients $a_k$ can be used to obtain our output data
$y(n)$ in terms of our input data $x(n)$ as:
\begin{align}:label: summations
y(n) &= b_0 x(n) + b_1 x(n-1) + \ldots + b_M x(n-M)
- a_1 y(n-1) - a_2 y(n - 2) - \ldots - a_N y(n - N)\\
&= \sum_{k=0}^M b_k x(n-k) - \sum_{k=1}^N a_k y(n-k)\end{align}
In other words, the output at time $n$ is determined by a sum over
1. the numerator coefficients $b_k$, which get multiplied by
the previous input values $x(n-k)$, and
2. the denominator coefficients $a_k$, which get multiplied by
the previous output values $y(n-k)$.
Note that these summations correspond to (1) a weighted moving average and
(2) an autoregression.
Filters are broken into two classes: FIR_ (finite impulse response) and
IIR_ (infinite impulse response) based on these coefficients.
FIR filters use a finite number of numerator
coefficients $b_k$ ($\forall k, a_k=0$), and thus each output
value of $y(n)$ depends only on the $M$ previous input values.
IIR filters depend on the previous input and output values, and thus can have
effectively infinite impulse responses.
As outlined in Parks & Burrus (1987) [1]_, FIR and IIR have different
trade-offs:
* A causal FIR filter can be linear-phase -- i.e., the same time delay
across all frequencies -- whereas a causal IIR filter cannot. The phase
and group delay characteristics are also usually better for FIR filters.
* IIR filters can generally have a steeper cutoff than an FIR filter of
equivalent order.
* IIR filters are generally less numerically stable, in part due to
accumulating error (due to its recursive calculations).
In MNE-Python we default to using FIR filtering. As noted in Widmann et al.
(2015) [7]_:
Despite IIR filters often being considered as computationally more
efficient, they are recommended only when high throughput and sharp
cutoffs are required (Ifeachor and Jervis, 2002 [[2]_], p. 321)...
FIR filters are easier to control, are always stable, have a
well-defined passband, can be corrected to zero-phase without
additional computations, and can be converted to minimum-phase.
We therefore recommend FIR filters for most purposes in
electrophysiological data analysis.
When designing a filter (FIR or IIR), there are always trade-offs that
need to be considered, including but not limited to:
1. Ripple in the pass-band
2. Attenuation of the stop-band
3. Steepness of roll-off
4. Filter order (i.e., length for FIR filters)
5. Time-domain ringing
In general, the sharper something is in frequency, the broader it is in time,
and vice-versa. This is a fundamental time-frequency trade-off, and it will
show up below.
FIR Filters
First, we will focus on FIR filters, which are the default filters used by
MNE-Python.
Designing FIR filters
Here we'll try to design a low-pass filter and look at trade-offs in terms
of time- and frequency-domain filter characteristics. Later, in
tut_effect_on_signals, we'll look at how such filters can affect
signals when they are used.
First let's import some useful tools for filtering, and set some default
values for our data that are reasonable for M/EEG.
End of explanation
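# Added sketch (not part of the original tutorial): the transfer function
# H(z) discussed above can be evaluated numerically for any (b, a) pair with
# scipy.signal.freqz. Here we inspect a toy 3-point moving-average FIR
# (b = [1/3, 1/3, 1/3], a = [1]) just to connect the coefficients to a
# magnitude response in Hz.
b_demo = np.ones(3) / 3.
a_demo = [1.]
w_demo, h_demo = signal.freqz(b_demo, a_demo, worN=8192)
freqs_demo = w_demo * sfreq / (2 * np.pi)
plt.figure()
plt.plot(freqs_demo, 20 * np.log10(np.maximum(np.abs(h_demo), 1e-16)))
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude (dB)')
plt.title('Toy 3-point moving average (sketch)')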
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]
ax = plt.subplots(1, figsize=third_height)[1]
plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)
Explanation: Take for example an ideal low-pass filter, which would give a magnitude
response of 1 in the pass-band (up to frequency $f_p$) and a magnitude
response of 0 in the stop-band (down to frequency $f_s$) such that
$f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity):
End of explanation
n = int(round(0.1 * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True)
Explanation: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in the frequency domain is actually a sinc_
function in the time domain, which requires an infinite number of samples
(and thus infinite time) to represent. So although this filter has ideal
frequency suppression, it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 s, and look
at the filter itself in the time domain and the frequency domain:
End of explanation
n = int(round(1. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True)
Explanation: This is not so good! Making the filter 10 times longer (1 s) gets us a
slightly better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here,
and the filter has a correspondingly much longer group delay (again equal
to half the filter length, or 0.5 seconds):
End of explanation
n = int(round(10. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True)
Explanation: Let's make the stop-band tighter still with a longer filter (10 s),
with a resulting larger x-axis:
End of explanation
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=third_height)[1]
title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)
plot_ideal_filter(freq, gain, ax, title=title, flim=flim)
Explanation: Now we have very sharp frequency suppression, but our filter rings for the
entire 10 seconds. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include:
1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)
2. Windowed FIR design (:func:`scipy.signal.firwin2`,
:func:`scipy.signal.firwin`, and `MATLAB fir2`_)
3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)
4. Frequency-domain design (construct filter in Fourier
domain and use an :func:`IFFT <numpy.fft.ifft>` to invert it)
<div class="alert alert-info"><h4>Note</h4><p>Remez and least squares designs have advantages when there are
"do not care" regions in our frequency response. However, we want
well controlled responses in all frequency regions.
Frequency-domain construction is good when an arbitrary response
is desired, but generally less clean (due to sampling issues) than
a windowed approach for more straightforward filter applications.
Since our filters (low-pass, high-pass, band-pass, band-stop)
are fairly simple and we require precise control of all frequency
regions, we will primarily use and explore windowed FIR design.</p></div>
If we relax our frequency-domain filter requirements a little bit, we can
use these functions to construct a lowpass filter that instead has a
transition band, or a region between the pass frequency $f_p$
and stop frequency $f_s$, e.g.:
End of explanation
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)',
flim=flim, compensate=True)
Explanation: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a more
gradual slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 s filter:
End of explanation
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)',
flim=flim, compensate=True)
Explanation: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable
stop-band attenuation:
End of explanation
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)',
flim=flim, compensate=True)
Explanation: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),
our effective stop frequency gets pushed out past 60 Hz:
End of explanation
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 50 Hz transition (0.2 s)',
flim=flim, compensate=True)
Explanation: If we want a filter that is only 0.1 seconds long, we should probably use
something more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz):
End of explanation
h_min = signal.minimum_phase(h)
plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)
Explanation: So far, we have only discussed non-causal filtering, which means that each
sample at each time point $t$ is filtered using samples that come
after ($t + \Delta t$) and before ($t - \Delta t$) the current
time point $t$.
In this sense, each sample is influenced by samples that come both before
and after it. This is useful in many cases, especially because it does not
delay the timing of events.
However, sometimes it can be beneficial to use causal filtering,
whereby each sample $t$ is filtered only using time points that came
after it.
Note that the delay is variable (whereas for linear/zero-phase filters it
is constant) but small in the pass-band. Unlike zero-phase filters, which
require time-shifting backward the output of a linear-phase filtering stage
(and thus becoming non-causal), minimum-phase filters do not require any
compensation to achieve small delays in the pass-band. Note that as an
artifact of the minimum phase filter construction step, the filter does
not end up being as steep as the linear/zero-phase version.
We can construct a minimum-phase filter from our existing linear-phase
filter with the :func:scipy.signal.minimum_phase function, and note
that the falloff is not as steep:
End of explanation
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur) + 1)
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
Explanation: Applying FIR filters
Now lets look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random and line). Note that the original clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
End of explanation
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin', verbose=True)
x_v16 = np.convolve(h, x)
# this is the linear->zero phase, causal-to-non-causal conversion / shift
x_v16 = x_v16[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim,
compensate=True)
Explanation: Filter it with a shallow cutoff, linear-phase FIR (which allows us to
compensate for the constant filter delay):
End of explanation
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
# filter_dur = 6.6 / transition_band # sec
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin2', verbose=True)
x_v14 = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim,
compensate=True)
Explanation: Filter it with a different design method fir_design="firwin2", and also
compensate for the constant filter delay. This method does not produce
quite as sharp a transition compared to fir_design="firwin", despite
being twice as long:
End of explanation
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
h_trans_bandwidth=transition_band,
filter_length='%ss' % filter_dur,
fir_design='firwin2', verbose=True)
x_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
# the effective h is one that is applied to the time-reversed version of itself
h_eff = np.convolve(h, h[::-1])
plot_filter(h_eff, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim,
compensate=True)
Explanation: Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice:
End of explanation
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=True)
Explanation: Let's also filter it with the MNE-C default, which is a long-duration
steep-slope FIR filter designed using frequency-domain techniques:
End of explanation
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
phase='minimum', fir_design='firwin',
verbose=True)
x_min = np.convolve(h, x)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim)
Explanation: And now an example of a minimum-phase filter:
End of explanation
axes = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
Plot a signal.
t = np.arange(len(x)) / sfreq
axes[0].plot(t, x + offset)
axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]])
X = fft(x)
freqs = fftfreq(len(x), 1. / sfreq)
mask = freqs >= 0
X = X[mask]
freqs = freqs[mask]
axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16)))
axes[1].set(xlim=flim)
yscale = 30
yticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)',
'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase']
yticks = -np.arange(len(yticklabels)) / yscale
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_v16, offset=yticks[2])
plot_signal(x_v14, offset=yticks[3])
plot_signal(x_v13, offset=yticks[4])
plot_signal(x_mne_c, offset=yticks[5])
plot_signal(x_min, offset=yticks[6])
axes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-len(yticks) / yscale, 1. / yscale],
yticks=yticks, yticklabels=yticklabels)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.tight_layout()
plt.show()
Explanation: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially in signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter:
End of explanation
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim,
compensate=True)
x_shallow = signal.sosfiltfilt(sos, x)
del sos
Explanation: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from :mod:scipy.signal. Specifically, we use the general-purpose
functions :func:scipy.signal.iirfilter and :func:scipy.signal.iirdesign,
which provide unified interfaces to IIR filter design.
Designing IIR filters
Let's continue with our design of a 40 Hz low-pass filter and look at
some trade-offs of different IIR filters.
Often the default IIR filter is a Butterworth filter_, which is designed
to have a maximally flat pass-band. Let's look at a few filter orders,
i.e., a few different number of coefficients used and therefore steepness
of the filter:
<div class="alert alert-info"><h4>Note</h4><p>Notice that the group delay (which is related to the phase) of
the IIR filters below are not constant. In the FIR case, we can
design so-called linear-phase filters that have a constant group
delay, and thus compensate for the delay (making the filter
non-causal) if necessary. This cannot be done with IIR filters, as
they have a non-linear phase (non-constant group delay). As the
filter order increases, the phase distortion near and in the
transition band worsens. However, if non-causal (forward-backward)
filtering can be used, e.g. with :func:`scipy.signal.filtfilt`,
these phase issues can theoretically be mitigated.</p></div>
End of explanation
iir_params = dict(order=8, ftype='butter')
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim,
compensate=True)
x_steep = signal.sosfiltfilt(filt['sos'], x)
Explanation: The falloff of this filter is not very steep.
<div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)
by using :func:`scipy.signal.sosfilt` and, under the
hood, :func:`scipy.signal.zpk2sos` when passing the
``output='sos'`` keyword argument to
:func:`scipy.signal.iirfilter`. The filter definitions
given `above <tut_filtering_basics>` use the polynomial
numerator/denominator (sometimes called "tf") form ``(b, a)``,
which are theoretically equivalent to the SOS form used here.
In practice, however, the SOS form can give much better results
due to issues with numerical precision (see
:func:`scipy.signal.sosfilt` for an example), so SOS should be
used whenever possible.</p></div>
Let's increase the order, and note that now we have better attenuation,
with a longer impulse response. Let's also switch to using the MNE filter
design function, which simplifies a few things and gives us some information
about the resulting filter:
End of explanation
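As an aside (a sketch that is not part of the tutorial), the numerical-precision point from the note above can be illustrated by recovering the poles of the same order-8 design from both output forms and comparing their magnitudes:
b, a = signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter', output='ba')
z8, p8, k8 = signal.sos2zpk(
    signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter', output='sos'))
print(np.sort(np.abs(np.roots(a))))  # pole magnitudes recovered from the (b, a) polynomials
print(np.sort(np.abs(p8)))           # pole magnitudes kept directly by the factored SOS form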
iir_params.update(ftype='cheby1',
rp=1., # dB of acceptable pass-band ripple
)
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=1 dB', flim=flim, compensate=True)
Explanation: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for :func:scipy.signal.iirdesign. Let's
try a Chebychev (type I) filter, which trades off ripple in the pass-band
to get better attenuation in the stop-band:
End of explanation
iir_params['rp'] = 6.
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=6 dB', flim=flim,
compensate=True)
Explanation: If we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale):
End of explanation
axes = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
Explanation: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before:
End of explanation
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
iir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass')
iir_hp_p1 = signal.iirfilter(2, 0.1 / sfreq, btype='highpass')
iir_lp_2 = signal.iirfilter(2, 2. / sfreq, btype='lowpass')
iir_hp_2 = signal.iirfilter(2, 2. / sfreq, btype='highpass')
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = r'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axes = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
                          ['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'HP$_{0.1}$']):
ax.plot(t, x, color='0.5')
ax.plot(t, x_f, color='k', linestyle='--')
ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
title=title, xlabel=xlabel, ylabel=ylabel)
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
Explanation: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are non-causal (zero-phase), can
make activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen (2011) [3], investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet (2012) [5].
Perhaps more revealing, it was noted in Widmann & Schröger (2012) [6] that
the problematic low-pass filters from VanRullen (2011) [3]:
Used a least-squares design (like :func:scipy.signal.firls) that
included "do-not-care" transition regions, which can lead to
uncontrolled behavior.
Had a filter length that was independent of the transition bandwidth,
which can cause excessive ringing and signal distortion.
High-pass problems
When it comes to high-pass filtering, using corner frequencies above 0.1 Hz
were found in Acunzo et al. (2012) [4]_ to:
"... generate a systematic bias easily leading to misinterpretations of
neural activity.”
In a related paper, Widmann et al. (2015) [7] also came to suggest a
0.1 Hz highpass. More evidence followed in Tanner et al. (2015) [8] of
such distortions. Using data from language ERP studies of semantic and
syntactic processing (i.e., N400 and P600), using a high-pass above 0.3 Hz
caused significant effects to be introduced implausibly early when compared
to the unfiltered data. From this, the authors suggested the optimal
high-pass value for language processing to be 0.1 Hz.
We can recreate a problematic simulation from Tanner et al. (2015) [8]_:
"The simulated component is a single-cycle cosine wave with an amplitude
of 5µV [sic], onset of 500 ms poststimulus, and duration of 800 ms. The
simulated component was embedded in 20 s of zero values to avoid
filtering edge effects... Distortions [were] caused by 2 Hz low-pass
and high-pass filters... No visible distortion to the original
waveform [occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...
Filter frequencies correspond to the half-amplitude (-6 dB) cutoff
(12 dB/octave roll-off)."
<div class="alert alert-info"><h4>Note</h4><p>This simulated signal contains energy not just within the
pass-band, but also within the transition and stop-bands -- perhaps
most easily understood because the signal has a non-zero DC value,
but also because it is a shifted cosine that has been
*windowed* (here multiplied by a rectangular window), which
makes the cosine and DC frequencies spread to other frequencies
(multiplication in time is convolution in frequency, so multiplying
by a rectangular window in the time domain means convolving a sinc
function with the impulses at DC and the cosine frequency in the
frequency domain).</p></div>
End of explanation
def baseline_plot(x):
all_axes = plt.subplots(3, 2)[1]
for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):
for ci, ax in enumerate(axes):
if ci == 0:
iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass',
output='sos')
x_hp = signal.sosfiltfilt(iir_hp, x, padlen=0)
else:
x_hp -= x_hp[t < 0].mean()
ax.plot(t, x, color='0.5')
ax.plot(t, x_hp, color='k', linestyle='--')
if ri == 0:
ax.set(title=('No ' if ci == 0 else '') +
'Baseline Correction')
ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
horizontalalignment='right')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.suptitle(title)
plt.show()
baseline_plot(x)
Explanation: Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) [12]_,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving sample-data,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. (2015) [8] suggest using baseline
correction to remove slow drifts in data. However, Maess et al. (2016) [9]
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. (2016) [10]_ rebutted that baseline correction can correct
for problems with filtering.
To see what they mean, consider again our old simulated signal x from
before:
End of explanation
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
Explanation: In response, Maess et al. (2016) [11]_ note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multi-electrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period:
End of explanation
# Use the same settings as when calling e.g., `raw.filter()`
fir_coefs = mne.filter.create_filter(
data=None, # data is only used for sanity checking, not strictly needed
sfreq=1000., # sfreq of your data in Hz
l_freq=None,
h_freq=40., # assuming a lowpass of 40 Hz
method='fir',
fir_window='hamming',
fir_design='firwin',
verbose=True)
# See the printed log for the transition bandwidth and filter length.
# Alternatively, get the filter length through:
filter_length = fir_coefs.shape[0]
Explanation: Both groups seem to acknowledge that the choices of filtering cutoffs, and
perhaps even the application of baseline correction, depend on the
characteristics of the data being investigated, especially when it comes to:
The frequency content of the underlying evoked activity relative
to the filtering parameters.
The validity of the assumption of no consistent evoked activity
in the baseline period.
We thus recommend carefully applying baseline correction and/or high-pass
values based on the characteristics of the data to be analyzed.
Filtering defaults
Defaults in MNE-Python
Most often, filtering in MNE-Python is done at the :class:mne.io.Raw level,
and thus :func:mne.io.Raw.filter is used. This function under the hood
(among other things) calls :func:mne.filter.filter_data to actually
filter the data, which by default applies a zero-phase FIR filter designed
using :func:scipy.signal.firwin. In Widmann et al. (2015) [7]_, they
suggest a specific set of parameters to use for high-pass filtering,
including:
"... providing a transition bandwidth of 25% of the lower passband
edge but, where possible, not lower than 2 Hz and otherwise the
distance from the passband edge to the critical frequency.”
In practice, this means that for each high-pass value l_freq or
low-pass value h_freq below, you would get this corresponding
l_trans_bandwidth or h_trans_bandwidth, respectively,
if the sample rate were 100 Hz (i.e., Nyquist frequency of 50 Hz):
+------------------+-------------------+-------------------+
| l_freq or h_freq | l_trans_bandwidth | h_trans_bandwidth |
+==================+===================+===================+
| 0.01 | 0.01 | 2.0 |
+------------------+-------------------+-------------------+
| 0.1 | 0.1 | 2.0 |
+------------------+-------------------+-------------------+
| 1.0 | 1.0 | 2.0 |
+------------------+-------------------+-------------------+
| 2.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 4.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 8.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 10.0 | 2.5 | 2.5 |
+------------------+-------------------+-------------------+
| 20.0 | 5.0 | 5.0 |
+------------------+-------------------+-------------------+
| 40.0 | 10.0 | 10.0 |
+------------------+-------------------+-------------------+
| 50.0 | 12.5 | 12.5 |
+------------------+-------------------+-------------------+
MNE-Python has adopted this definition for its high-pass (and low-pass)
transition bandwidth choices when using l_trans_bandwidth='auto' and
h_trans_bandwidth='auto'.
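As a quick illustration (a sketch that reproduces the table above, not the MNE source code; among other things the real implementation also respects the Nyquist frequency), these choices for a 100 Hz sampling rate follow from a simple rule:
for edge in [0.01, 0.1, 1., 2., 4., 8., 10., 20., 40., 50.]:
    l_tb = min(max(0.25 * edge, 2.), edge)  # 25% of the edge, at least 2 Hz, at most the edge itself
    h_tb = max(0.25 * edge, 2.)             # 25% of the edge, at least 2 Hz
    print('%5.2f Hz -> l_trans_bandwidth %5.2f Hz, h_trans_bandwidth %5.2f Hz' % (edge, l_tb, h_tb))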
To choose the filter length automatically with filter_length='auto',
the reciprocal of the shortest transition bandwidth is used to ensure
decent attenuation at the stop frequency. Specifically, the reciprocal
(in samples) is multiplied by 3.1, 3.3, or 5.0 for the Hann, Hamming,
or Blackman windows, respectively, as selected by the fir_window
argument for fir_design='firwin', and double these for
fir_design='firwin2' mode.
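For example (again only a sketch of the rule just described, assuming fir_window='hamming' and fir_design='firwin'):
sfreq_demo, trans_bw = 1000., 10.                   # e.g. the 40 Hz low-pass used earlier (25% of 40 Hz)
n_taps = int(np.ceil(3.3 * sfreq_demo / trans_bw))  # reciprocal of the transition band (in samples) times 3.3
n_taps += (n_taps % 2 == 0)                         # an odd length keeps the group delay an integer number of samples
print(n_taps)                                       # should agree with the length reported by create_filter above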
<div class="alert alert-info"><h4>Note</h4><p>For ``fir_design='firwin2'``, the multiplicative factors are
doubled compared to what is given in Ifeachor & Jervis (2002) [2]_
(p. 357), as :func:`scipy.signal.firwin2` has a smearing effect
on the frequency response, which we compensate for by
increasing the filter length. This is why
``fir_design='firwin'`` is preferred to ``fir_design='firwin2'``.</p></div>
In 0.14, we default to using a Hamming window in filter design, as it
provides up to 53 dB of stop-band attenuation with small pass-band ripple.
<div class="alert alert-info"><h4>Note</h4><p>In band-pass applications, often a low-pass filter can operate
effectively with fewer samples than the high-pass filter, so
it is advisable to apply the high-pass and low-pass separately
when using ``fir_design='firwin2'``. For design mode
``fir_design='firwin'``, there is no need to separate the
operations, as the lowpass and highpass elements are constructed
separately to meet the transition band requirements.</p></div>
For more information on how to use the
MNE-Python filtering functions with real data, consult the preprocessing
tutorial on tut-filter-resample.
Defaults in MNE-C
MNE-C by default uses:
5 Hz transition band for low-pass filters.
3-sample transition band for high-pass filters.
Filter length of 8197 samples.
The filter is designed in the frequency domain, creating a linear-phase
filter such that the delay is compensated for as is done with the MNE-Python
phase='zero' filtering option.
Squared-cosine ramps are used in the transition regions. Because these
are used in place of more gradual (e.g., linear) transitions,
a given transition width will result in more temporal ringing but also more
rapid attenuation than the same transition width in windowed FIR designs.
The default filter length will generally have excellent attenuation
but long ringing for the sample rates typically encountered in M/EEG data
(e.g. 500-2000 Hz).
Defaults in other software
A good but possibly outdated comparison of filtering in various software
packages is available in Widmann et al. (2015) [7]_. Briefly:
EEGLAB
MNE-Python 0.14 defaults to behavior very similar to that of EEGLAB
(see the EEGLAB filtering FAQ_ for more information).
FieldTrip
By default FieldTrip applies a forward-backward Butterworth IIR filter
of order 4 (band-pass and band-stop filters) or 2 (for low-pass and
high-pass filters). Similar filters can be achieved in MNE-Python when
filtering with :meth:raw.filter(..., method='iir') <mne.io.Raw.filter>
(see also :func:mne.filter.construct_iir_filter for options).
For more information, see e.g. the
FieldTrip band-pass documentation <ftbp_>_.
Reporting Filters
On page 45 in Widmann et al. (2015) [7]_, there is a convenient list of
important filter parameters that should be reported with each publication:
Filter type (high-pass, low-pass, band-pass, band-stop, FIR, IIR)
Cutoff frequency (including definition)
Filter order (or length)
Roll-off or transition bandwidth
Passband ripple and stopband attenuation
Filter delay (zero-phase, linear-phase, non-linear phase) and causality
Direction of computation (one-pass forward/reverse, or two-pass forward
and reverse)
In the following, we will address how to deal with these parameters in MNE:
Filter type
Depending on the function or method used, the filter type can be specified.
To name an example, in :func:mne.filter.create_filter, the relevant
arguments would be l_freq, h_freq, method, and if the method is FIR
fir_window and fir_design.
Cutoff frequency
The cutoff of FIR filters in MNE is defined as half-amplitude cutoff in the
middle of the transition band. That is, if you construct a lowpass FIR filter
with h_freq = 40, the filter function will provide a transition
bandwidth that depends on the h_trans_bandwidth argument. The desired
half-amplitude cutoff of the lowpass FIR filter is then at
h_freq + transition_bandwidth/2..
Filter length (order) and transition bandwidth (roll-off)
In the tut_filtering_in_python section, we have already talked about
the default filter lengths and transition bandwidths that are used when no
custom values are specified using the respective filter function's arguments.
If you want to find out about the filter length and transition bandwidth that
were used through the 'auto' setting, you can use
:func:mne.filter.create_filter to print out the settings once more:
End of explanation |
1,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing CHIRPS and ARC2 Precipitation Data in Kenya
In this demo we are comparing two historical precipitation datasets - CHIRPS and ARC2.
Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) is a global 30+ year gridded rainfall dataset using satellite and in situ data with a resolution of 0.05 degrees, while the Africa Rainfall Climatology version 2 (ARC2) is also a 30+ year gridded analysis of precipitation using satellite and in situ data with a resolution of 0.1 degrees.
Using these datasets you can analyze extreme events that occurred in the past or identify long-term precipitation trends. Even though CHIRPS and ARC2 have some differences, the trends remain similar.
In this demo we will
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Step2: At first, we need to define the dataset names and temporal ranges. Please note that the datasets have different time ranges. So we will download the data from 1981, when CHIRPS starts (ARC2 is from 1983).
Step3: Then we define spatial range. We decided to analyze Kenya, where agriculture is the second largest contributor to the GDP, after the service sector. Most of its agricultural production comes from the fertile highlands of Kenya in South-Western part of the country, where they grow tea, coffee, sisal, pyrethrum, corn, and wheat. However, feel free to change the area according to your interest.
Step4: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
Step5: Work with downloaded files
We start by opening the files with xarray and then compute some basic statistics for the dataset comparison
Step6: In the plot below we see the ARC2 and CHIRPS time-series, where the annual precipitation is averaged over the area. We can see that one or the other dataset over/under estimates the values, however the trend remains the same. We can also see that 1996 and 2005 have been quite wet years for South-West Kenya.
Step7: In the plot above, we used data from 1982 to show all the data from CHIRPS. We now want to limit the data to have the same time range for both of the datasets, so that we can compare them.
Step8: Then we will find out the maximum precipitation over the whole period, and we will see that CHIRPS shows much higher values than ARC2. The differences between ARC2 and CHIRPS are brought out in CHIRPS Reality Checks document as well.
Step9: In this section, we will find the minimum, maximum and average number of dry days. Interestingly, the CHIRPS and ARC2 datasets have very similar values for dry days. We can see that there are 9,912 - 10,406 dry days per grid point over the 34 years on average - roughly 290-306 dry days per year, so on most days a given grid point receives less than 0.1 mm of rain.
Step10: Monthly averages over the period
Here we will be comparing the monthly averages first using the violin plot and then the bar plot for an easier overview.
Step11: In the violin plot below we can see that CHIRPS has significantly bigger maximum values during April, May and November. However, during most of the months the mean values of ARC2 and CHIRPS are quite similar.
Step12: We will now demonstrate the mean monthly values on the bar plot as well so that it would be easier to follow the monthly averages. They are similar for both of the datasets. The biggest differences are in April and November — we saw the same thing in the previous plot. In addition, we can also see that the wettest month of the year is April and the summer months are the driest.
Step13: Finally, let’s see the monthly anomalies for 2016. The period used for computing the climatology is 1983-2017. Positive values in the plot means that 2016 precipitation was above long term normal. It seems that April in 2016 had significant precipitation in South-West Kenya. At the same time, October and December, which are short rain periods, had less precipitation than normal.
There were serious droughts in Kenya during 2016 but the mostly covered Northern and South-East Kenya. World Weather Attribution has made a 2016 Kenya drought analyzis from where you can also see the South-West area had more precipitation than normal, but a little less between June and December, except for November. | Python Code:
%matplotlib notebook
import numpy as np
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from po_data_process import get_data_in_pandas_dataframe, make_plot,get_comparison_graph
import dh_py_access.package_api as package_api
import matplotlib.gridspec as gridspec
import calendar
#import warnings
import datetime
#warnings.filterwarnings("ignore")
import matplotlib
print (matplotlib.__version__)
Explanation: Comparing CHIRPS and ARC2 Precipitation Data in Kenya
In this demo we are comparing two historical precipitation datasets - CHIRPS and ARC2.
Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) is a global 30+ year gridded rainfall dataset using satellite and in situ data with a resolution of 0.05 degrees, while the Africa Rainfall Climatology version 2 (ARC2) is also a 30+ year gridded analysis of precipitation using satellite and in situ data with a resolution of 0.1 degrees.
Using these datasets you can analyze extreme events that occurred in the past or identify long-term precipitation trends. Even though CHIRPS and ARC2 have some differences, the trends remain similar.
In this demo we will:
1) demonstrate the use of package API to fetch data;
2) show time-series of averaged data over the area of Kenya;
3) investigate data to:
a. find the maximum precipitation over the entire period
b. get average yearly values and compare them with time-series plot
c. find out average number of dry days in both of the datasets
d. compare average monthly values using violin plot and bar plot
e. find out 2016 monthly anomalies
End of explanation
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
dh=datahub.datahub(server,version,API_key)
dataset1='noaa_arc2_africa_01'
variable_name1 = 'pr'
dataset2='chg_chirps_global_05'
variable_name2 = 'precip'
time_start = '1981-01-01T00:00:00'
time_end = '2017-11-01T00:00:00'
Explanation: At first, we need to define the dataset names and temporal ranges. Please note that the datasets have different time ranges. So we will download the data from 1981, when CHIRPS starts (ARC2 is from 1983).
End of explanation
area_name = 'Kenya'
latitude_north = 1.6; longitude_west = 34.2
latitude_south = -2.5; longitude_east = 38.4
Explanation: Then we define spatial range. We decided to analyze Kenya, where agriculture is the second largest contributor to the GDP, after the service sector. Most of its agricultural production comes from the fertile highlands of Kenya in South-Western part of the country, where they grow tea, coffee, sisal, pyrethrum, corn, and wheat. However, feel free to change the area according to your interest.
End of explanation
package_arc2_africa_01 = package_api.package_api(dh,dataset1,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area_name)
package_chg_chirps_global_05 = package_api.package_api(dh,dataset2,variable_name2,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area_name)
package_arc2_africa_01.make_package()
package_chg_chirps_global_05.make_package()
package_arc2_africa_01.download_package()
package_chg_chirps_global_05.download_package()
Explanation: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
End of explanation
dd1 = xr.open_dataset(package_arc2_africa_01.local_file_name)
dd2 = xr.open_dataset(package_chg_chirps_global_05.local_file_name)
Explanation: Work with downloaded files
We start by opening the files with xarray and then compute some basic statistics for the dataset comparison:
Average yearly values
Number of dry days
Number of days with precipitation over 10 mm
Average monthly values
2016 monthly anomalies
End of explanation
yearly_sum1 = dd1.pr.resample(time="1AS").sum('time')
yearly_mean_sum1 = yearly_sum1.mean(axis=(1,2))
yearly_sum2 = dd2.precip.resample(time="1AS").sum('time')
yearly_mean_sum2 = yearly_sum2.mean(axis=(1,2))
fig = plt.figure(figsize=(10,5))
plt.plot(yearly_mean_sum1.time,yearly_mean_sum1, '*-',linewidth = 1,label = dataset1)
plt.plot(yearly_mean_sum2.time,yearly_mean_sum2, '*-',linewidth = 1,c='red',label = dataset2)
plt.legend()
plt.grid()
plt.show()
Explanation: In the plot below we see the ARC2 and CHIRPS time-series, where the annual precipitation is averaged over the area. We can see that one or the other dataset over/under estimates the values, however the trend remains the same. We can also see that 1996 and 2005 have been quite wet years for South-West Kenya.
End of explanation
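As a quick, hedged check of that statement (not in the original notebook), we can correlate the two area-averaged yearly series over their common years:
common_years = np.intersect1d(yearly_mean_sum1.time.values, yearly_mean_sum2.time.values)
s1 = yearly_mean_sum1.sel(time=common_years).values
s2 = yearly_mean_sum2.sel(time=common_years).values
print('Correlation of yearly means:', np.corrcoef(s1, s2)[0, 1])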
time_start = '1983-01-01T00:00:00'
dd2 = dd2.sel(time = slice(time_start,time_end))
dd2_dat = dd2.precip.data
dd1_dat = dd1.pr.data
Explanation: In the plot above, we used data from 1982 to show all the data from CHIRPS. We now want to limit the data to have the same time range for both of the datasets, so that we can compare them.
End of explanation
# maximum precipitation over the whole period
print ('\033[1mMaximum precipitation over the whole period \033[0m')
print(dataset1 + '\t' + str(np.nanmax(dd1_dat)))
print(dataset2 + '\t' + str(np.nanmax(dd2_dat)))
Explanation: Then we will find out the maximum precipitation over the whole period, and we will see that CHIRPS shows much higher values than ARC2. The differences between ARC2 and CHIRPS are brought out in CHIRPS Reality Checks document as well.
End of explanation
dd1_dry_days = np.sum(np.where(dd1_dat>0.1,0,1),axis=0)
dd2_dry_days = np.sum(np.where(dd2_dat>0.1,0,1),axis=0)
# minimum, maximum and average nr of dry days
print ('\033[1mNumber of dry days:\tMinimum\t Maximum Average\033[0m')
print(dataset1 + '\t' + str(np.amin(dd1_dry_days)), '\t',str(np.amax(dd1_dry_days)),'\t',str(np.mean(dd1_dry_days)))
print(dataset2 + '\t' + str(np.amin(dd2_dry_days)),'\t',str(np.amax(dd2_dry_days)),'\t',str(np.mean(dd2_dry_days)))
Explanation: In this section, we will find the minimum, maximum and average number of dry days. Interestingly, the CHIRPS and ARC2 datasets have very similar values for dry days. We can see that there are 9,912 - 10,406 dry days per grid point over the 34 years on average - roughly 290-306 dry days per year, so on most days a given grid point receives less than 0.1 mm of rain.
End of explanation
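A small follow-up sketch (not in the original notebook) to express those totals per year:
n_years = 34.
print(dataset1, round(float(np.mean(dd1_dry_days)) / n_years, 1), 'dry days per year on average')
print(dataset2, round(float(np.mean(dd2_dry_days)) / n_years, 1), 'dry days per year on average')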
##help(dd1.precip.resample)
dd1_monthly_avg = dd1.pr.resample(time="1MS").sum('time')
dd2_monthly_avg = dd2.precip.resample(time="1MS").sum('time')
mm_data1 = [];mm_data2 = []
for i in range(12):
mmm1 = np.mean(dd1_monthly_avg[i::12,:,:],axis=0).values
mm_data1.append(mmm1.mean(axis=1))
mmm2 = np.mean(dd2_monthly_avg[i::12,:,:],axis=0).values
mm_data2.append(mmm2.mean(axis=1))
Explanation: Monthly averages over the period
Here we will be comparing the monthly averages first using the violin plot and then the bar plot for an easier overview.
End of explanation
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111)
ax.violinplot(mm_data1,np.arange(0.75,12.75,1),
showmeans=True,
showmedians=False)
ax.violinplot(mm_data2,np.arange(1.25,13.25,1),
showmeans=True,
showmedians=False)
plt.setp(ax, xticks = np.arange(1,13,1),
xticklabels=[calendar.month_abbr[m] for m in np.arange(1,13,1)])
plt.show()
Explanation: In the violin plot below we can see that CHIRPS has significantly bigger maximum values during April, May and November. However, during most of the months the mean values of ARC2 and CHIRPS are quite similar.
End of explanation
averaged_monthly_mean2 = np.mean(mm_data2,axis = (1))
averaged_monthly_mean1 = np.mean(mm_data1,axis = (1))
fig = plt.figure(figsize = (8,6))
ax = fig.add_subplot(111)
bar_width = 0.35
opacity = 0.4
ax.bar(np.arange(0,12,1)-bar_width/2,averaged_monthly_mean2,
bar_width,
alpha=opacity,
color='b',
label = dataset2)
ax.bar(np.arange(0,12,1) + bar_width/2,averaged_monthly_mean1,
bar_width,
alpha=opacity,
color='r',
label = dataset1)
plt.legend()
plt.setp(ax, xticks = np.arange(0,12,1),
xticklabels=[calendar.month_abbr[m+1] for m in np.arange(0,12,1)])
plt.show()
Explanation: We will now demonstrate the mean monthly values on the bar plot as well so that it would be easier to follow the monthly averages. They are similar for both of the datasets. The biggest differences are in April and November — we saw the same thing in the previous plot. In addition, we can also see that the wettest month of the year is April and the summer months are the driest.
End of explanation
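A short follow-up (not in the original notebook) to confirm the wettest month programmatically:
print('Wettest month, CHIRPS:', calendar.month_abbr[int(np.argmax(averaged_monthly_mean2)) + 1])
print('Wettest month, ARC2:', calendar.month_abbr[int(np.argmax(averaged_monthly_mean1)) + 1])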
time_start = '2016-01-01T00:00:00'
time_end = '2016-12-31T23:00:00'
dd2_2016 = dd2.sel(time = slice(time_start,time_end))
dd1_2016 = dd1.sel(time = slice(time_start,time_end))
dd1_monthly2016_avg = dd1_2016.pr.resample(time="1MS").sum('time')
dd2_monthly2016_avg = dd2_2016.precip.resample(time="1MS").sum('time')
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
plt.plot(np.arange(1,13,1),np.mean(dd2_monthly2016_avg,axis = (1,2))-averaged_monthly_mean2, '*-',linewidth = 1,label = dataset2)
plt.plot(np.arange(1,13,1),np.mean(dd1_monthly2016_avg,axis = (1,2))-averaged_monthly_mean1, '*-',linewidth = 1,c='red',label = dataset1)
plt.setp(ax, xticks = np.arange(1,13,1),
xticklabels=[calendar.month_abbr[m] for m in np.arange(1,13,1)])
plt.legend()
plt.grid()
plt.show()
Explanation: Finally, let’s see the monthly anomalies for 2016. The period used for computing the climatology is 1983-2017. Positive values in the plot mean that 2016 precipitation was above the long-term normal. It seems that April in 2016 had significant precipitation in South-West Kenya. At the same time, October and December, which are short rain periods, had less precipitation than normal.
There were serious droughts in Kenya during 2016, but they mostly covered Northern and South-East Kenya. World Weather Attribution has made a 2016 Kenya drought analysis from which you can also see that the South-West area had more precipitation than normal, but a little less between June and December, except for November.
End of explanation |
1,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Blue Sky Run Engine
Contents
Step1: The Run Engine processes messages
A message has four parts
Step2: Moving a motor and reading it back is boring. Let's add a detector.
Step4: There is two-way communication between the message generator and the Run Engine
Above we see the three messages with the responses they generated from the RunEngine. We can use these responses to make our scan adaptive.
Step5: Control timing with 'sleep' and 'wait'
The 'sleep' command is as simple as it sounds.
Step6: The 'wait' command is more powerful. It watches for Movers (e.g., motor) to report being done.
Wait for one motor to be done moving
Step7: Notice, in the log, that the response to wait is the set of Movers the scan was waiting on.
Wait for two motors to both be done moving
Step8: Advanced Example
Step9: Runs can be paused and safely resumed or aborted
"Hard Pause"
Step10: The scan thread sleeps and waits for more user input, to resume or abort. (On resume, this example will obviously hit the same pause condition again --- nothing has changed.)
Step11: Other threads can request a pause
Calling RE.request_pause(hard=True) or RE.request_pause(hard=False) has the same effect as a 'pause' command.
SIGINT (Ctrl+C) is reliably caught before each message is processed, even across threads.
SIGINT triggers a hard pause. If no checkpoint commands have been issued, CTRL+C causes the Run Engine to abort.
Step12: If the scan contains checkpoints, it's possible to resume after Ctrl+C.
Step13: Threading is optional -- switch it off for easier debugging
Again, we'll interrupt the scan. We get exactly the same result, but this time we see a full Traceback.
Step14: Any functions can subscribe to the live data stream (e.g., live plotting)
In the examples above, the runs have been emitting RunStart and RunStop Documents, but no Events or Event Descriptors. We will add those now.
Emitting Events and Event Descriptors
The 'create' and 'save' commands collect all the reads between them into one Event.
If that particular set of objects has never been bundled into an Event during this run, then an Event Descriptor is also created.
All four Documents -- RunStart, RunStop, Event, and EventDescriptor -- are simply Python dictionaries.
Step15: Very Simple Example
Any user function that accepts a Python dictionary can be registered as a "consumer" of these Event Documents. Here's a toy example.
Step16: To use this consumer function during a run
Step17: To use it by default on every run for this instance of the Run Engine
Step18: The output token, an integer, can be used to unsubscribe later.
Step19: Live Plotting
First, we'll create some axes. The code below updates the plot while the run is ongoing.
Step20: Saving Documents to metadatastore
Mission-critical consumers can be run on the scan thread, where they will block the scan until they return from processing the emitted Documents. This should not be used for computationally heavy tasks like visualization. Its only intended use is for saving data to metadatastore, but users can register any consumers they want, at risk of slowing down the scan.
RE._register_scan_callback('event', some_critical_func)
The convenience function register_mds registers metadatastore's four insert_* functions to consume their four respective Documents. These are registered on the scan thread, so data is guaranteed to be saved in metadatastore.
Step21: We can verify that this worked by loading this one-point scan from the DataBroker and displaying the data using DataMuxer.
Step22: Flyscan prototype
Asserts that flyscans are managed by an object which has three methods
Step23: The fly scan results are in metadatastore....
Step24: Fly scan + stepscan
Do a step scan with one motor and a fly scan with another | Python Code:
from bluesky import Mover, SynGauss, Msg, RunEngine
motor = Mover('motor', ['pos'])
det = SynGauss('det', motor, 'pos', center=0, Imax=1, sigma=1)
Explanation: Blue Sky Run Engine
Contents:
The Run Engine processes messages
There is two-way communication between the message generator and the Run Engine
Control timing with 'sleep' and 'wait'
Runs can be paused and cleanly resumed or aborted
Any functions can subscribe to the live data stream (e.g., live plotting)
Fly Scan Prototype
End of explanation
Msg('set', motor, {'pos': 5})
Msg('trigger', motor)
Msg('read', motor)
RE = RunEngine()
def simple_scan(motor):
"Set, trigger, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('read', motor)
RE.run(simple_scan(motor))
Explanation: The Run Engine processes messages
A message has four parts: a command string, an object, a tuple of positional arguments, and a dictionary of keyword arguments.
End of explanation
def simple_scan2(motor, det):
"Set, trigger motor, trigger detector, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('trigger', det)
yield Msg('read', det)
RE.run(simple_scan2(motor, det))
Explanation: Moving a motor and reading it back is boring. Let's add a detector.
End of explanation
def adaptive_scan(motor, det, threshold):
    """Set, trigger, read until the detector reads intensity < threshold"""
i = 0
while True:
print("LOOP %d" % i)
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
yield Msg('trigger', det)
reading = yield Msg('read', det)
if reading['det']['value'] < threshold:
print('DONE')
break
i += 1
RE.run(adaptive_scan(motor, det, 0.2))
Explanation: There is two-way communication between the message generator and the Run Engine
Above we see the three messages with the responses they generated from the RunEngine. We can use these responses to make our scan adaptive.
End of explanation
def sleepy_scan(motor, det):
"Set, trigger motor, sleep for a fixed time, trigger detector, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('sleep', None, 2) # units: seconds
yield Msg('trigger', det)
yield Msg('read', det)
RE.run(sleepy_scan(motor, det))
Explanation: Control timing with 'sleep' and 'wait'
The 'sleep' command is as simple as it sounds.
End of explanation
def wait_one(motor, det):
"Set, trigger, read"
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor, block_group='A') # Add motor to group 'A'.
yield Msg('wait', None, 'A') # Wait for everything in group 'A' to report done.
yield Msg('trigger', det)
yield Msg('read', det)
RE.run(wait_one(motor, det))
Explanation: The 'wait' command is more powerful. It watches for Movers (e.g., motor) to report being done.
Wait for one motor to be done moving
End of explanation
def wait_multiple(motors, det):
"Set motors, trigger all motors, wait for all motors to move."
for motor in motors:
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor, block_group='A') # Trigger each motor and add it to group 'A'.
yield Msg('wait', None, 'A') # Wait for everything in group 'A' to report done.
yield Msg('trigger', det)
yield Msg('read', det)
motor1 = Mover('motor1', ['pos'])
motor2 = Mover('motor2', ['pos'])
RE.run(wait_multiple([motor1, motor2], det))
Explanation: Notice, in the log, that the response to wait is the set of Movers the scan was waiting on.
Wait for two motors to both be done moving
End of explanation
def wait_complex(motors, det):
"Set motors, trigger motors, wait for all motors to move."
# Same as above...
for motor in motors[:-1]:
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor, block_group='A')
# ...but put the last motor is separate group.
yield Msg('set', motors[-1], {'pos': 5})
yield Msg('trigger', motors[-1], block_group='B')
yield Msg('wait', None, 'A') # Wait for everything in group 'A' to report done.
yield Msg('trigger', det)
yield Msg('read', det)
yield Msg('wait', None, 'B') # Wait for everything in group 'B' to report done.
yield Msg('trigger', det)
yield Msg('read', det)
motor3 = Mover('motor3', ['pos'])
RE.run(wait_complex([motor1, motor2, motor3], det))
Explanation: Advanced Example: Wait for different groups of motors at different points in the run
If the 'A' bit seems pointless, the payoff is here. We trigger all the motors at once, wait for the first two, read, wait for the last one, and read again. This is merely meant to show that complex control flow is possible.
End of explanation
def conditional_hard_pause(motor, det):
for i in range(5):
yield Msg('checkpoint')
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
yield Msg('trigger', det)
reading = yield Msg('read', det)
if reading['det']['value'] < 0.2:
yield Msg('pause', hard=True)
RE.run(conditional_hard_pause(motor, det))
Explanation: Runs can be paused and safely resumed or aborted
"Hard Pause": Stop immediately. On resume, rerun messages from last 'checkpoint' command.
The Run Engine does not guess where it is safe to resume. The 'pause' command must follow a 'checkpoint' command, indicating a safe point to go back to in the event of a hard pause.
End of explanation
RE.state
RE.resume()
RE.state
RE.abort()
def conditional_soft_pause(motor, det):
for i in range(5):
yield Msg('checkpoint')
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
yield Msg('trigger', det)
reading = yield Msg('read', det)
if reading['det']['value'] < 0.2:
yield Msg('pause', hard=False)
# If a soft pause is requested, the Run Engine will
# still execute these messages before pausing.
yield Msg('set', motor, {'pos': i + 0.5})
yield Msg('trigger', motor)
RE.run(conditional_soft_pause(motor, det))
Explanation: The scan thread sleeps and waits for more user input, to resume or abort. (On resume, this example will obviously hit the same pause condition again --- nothing has changed.)
End of explanation
RE.run(sleepy_scan(motor, det))
Explanation: Other threads can request a pause
Calling RE.request_pause(hard=True) or RE.request_pause(hard=False) has the same effect as a 'pause' command.
SIGINT (Ctrl+C) is reliably caught before each message is processed, even across threads.
SIGINT triggers a hard pause. If no checkpoint commands have been issued, CTRL+C causes the Run Engine to abort.
End of explanation
def sleepy_scan_checkpoints(motor, det):
"Set, trigger motor, sleep for a fixed time, trigger detector, read"
yield Msg('checkpoint')
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('sleep', None, 2) # units: seconds
yield Msg('trigger', det)
yield Msg('read', det)
RE.run(sleepy_scan_checkpoints(motor, det))
RE.resume()
Explanation: If the scan contains checkpoints, it's possible to resume after Ctrl+C.
End of explanation
RE.run(simple_scan(motor), use_threading=False)
Explanation: Threading is optional -- switch it off for easier debugging
Again, we'll interrupt the scan. We get exactly the same result, but this time we see a full Traceback.
End of explanation
def simple_scan_saving(motor, det):
"Set, trigger, read"
yield Msg('create')
yield Msg('set', motor, {'pos': 5})
yield Msg('trigger', motor)
yield Msg('read', motor)
yield Msg('read', det)
yield Msg('save')
RE.run(simple_scan_saving(motor, det))
Explanation: Any functions can subscribe to the live data stream (e.g., live plotting)
In the examples above, the runs have been emitting RunStart and RunStop Documents, but no Events or Event Descriptors. We will add those now.
Emitting Events and Event Descriptors
The 'create' and 'save' commands collect all the reads between them into one Event.
If that particular set of objects has never been bundled into an Event during this run, then an Event Descriptor is also created.
All four Documents -- RunStart, RunStop, Event, and EventDescriptor -- are simply Python dictionaries.
End of explanation
def print_event_time(doc):
print('===== EVENT TIME:', doc['time'], '=====')
Explanation: Very Simple Example
Any user function that accepts a Python dictionary can be registered as a "consumer" of these Event Documents. Here's a toy example.
End of explanation
RE.run(simple_scan_saving(motor, det), subscriptions={'event': print_event_time})
Explanation: To use this consumer function during a run:
End of explanation
token = RE.subscribe('event', print_event_time)
token
Explanation: To use it by default on every run for this instance of the Run Engine:
End of explanation
RE.unsubscribe(token)
Explanation: The output token, an integer, can be used to unsubscribe later.
End of explanation
%matplotlib notebook
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
def stepscan(motor, detector):
for i in range(-5, 5):
yield Msg('create')
yield Msg('set', motor, {'pos': i})
yield Msg('trigger', motor)
        yield Msg('trigger', detector)
yield Msg('read', motor)
yield Msg('read', detector)
yield Msg('save')
def live_scalar_plotter(ax, y, x):
x_data, y_data = [], []
line, = ax.plot([], [], 'ro', markersize=10)
def update_plot(doc):
# Update with the latest data.
x_data.append(doc['data'][x]['value'])
y_data.append(doc['data'][y]['value'])
line.set_data(x_data, y_data)
# Rescale and redraw.
ax.relim(visible_only=True)
ax.autoscale_view(tight=True)
ax.figure.canvas.draw()
return update_plot
# Point the function to our axes above, and specify what to plot.
my_plotter = live_scalar_plotter(ax, 'det', 'pos')
RE.run(stepscan(motor, det), subscriptions={'event': my_plotter})
Explanation: Live Plotting
First, we'll create some axes. The code below updates the plot while the run is ongoing.
End of explanation
%run register_mds.py
register_mds(RE)
Explanation: Saving Documents to metadatastore
Mission-critical consumers can be run on the scan thread, where they will block the scan until they return from processing the emitted Documents. This should not be used for computationally heavy tasks like visualization. Its only intended use is for saving data to metadatastore, but users can register any consumers they want, at risk of slowing down the scan.
RE._register_scan_callback('event', some_critical_func)
The convenience function register_mds registers metadatastore's four insert_* functions to consume their four respective Documents. These are registered on the scan thread, so data is guaranteed to be saved in metadatastore.
End of explanation
RE.run(simple_scan_saving(motor, det))
from dataportal import DataBroker as db
header = db[-1]
header
from dataportal import DataMuxer as dm
dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
Explanation: We can verify that this worked by loading this one-point scan from the DataBroker and displaying the data using DataMuxer.
End of explanation
flyer = FlyMagic('flyer', 'theta', 'sin')
def fly_scan(flyer):
yield Msg('kickoff', flyer)
yield Msg('collect', flyer)
yield Msg('kickoff', flyer)
yield Msg('collect', flyer)
# Note that there is no 'create'/'save' here. That is managed by 'collect'.
RE.run(fly_scan(flyer), use_threading=False)
Explanation: Flyscan prototype
Asserts that flyscans are managed by an object which has three methods:
- describe : same as for everything else
- kickoff : method which starts the flyscan. This should be a fast-to-
execute function that is assumed to just poke at some external
hardware.
- collect : collects the data from flyscan. This method yields partial
event documents. The 'time' and 'data' fields should be
filled in, the rest will be filled in by the run engine.
End of explanation
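To make that contract concrete, here is a minimal, hypothetical flyer sketch. It is based only on the three-method description given above, so the exact payloads (the 'describe' dictionary and the inner layout of 'data') are assumptions; FlyMagic used above is the real example object for this prototype.
import time
class ToyFlyer:
    "A toy flyer obeying the describe/kickoff/collect contract sketched above."
    def __init__(self, name):
        self.name = name
        self._t0 = None
    def describe(self):
        # assumed payload: one key per field this flyer produces
        return {self.name: {'source': 'simulated', 'dtype': 'number', 'shape': None}}
    def kickoff(self):
        # fast call that would normally poke the external hardware
        self._t0 = time.time()
    def collect(self):
        # yield partial event documents with 'time' and 'data' filled in
        for i in range(5):
            t = self._t0 + i
            yield {'time': t, 'data': {self.name: {'value': i, 'timestamp': t}}}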
header = db[-1]
header
res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
res
fig, ax = plt.subplots()
ax.cla()
res = dm.from_events(db.fetch_events(header)).to_sparse_dataframe()
ax.plot(res['sin'], label='sin')
ax.plot(res['theta'], label='theta')
ax.legend()
fig.canvas.draw()
Explanation: The fly scan results are in metadatastore....
End of explanation
def fly_step(flyer, motor):
for x in range(-5, 5):
# step
yield Msg('create')
yield Msg('set', motor, {'pos': x})
yield Msg('trigger', motor)
yield Msg('read', motor)
yield Msg('save')
# fly
yield Msg('kickoff', flyer)
yield Msg('collect', flyer)
flyer.reset()
RE.run(fly_step(flyer, motor))
header = db[-1]
header
mux = dm.from_events(db.fetch_events(header))
res = mux.bin_on('sin', interpolation={'pos':'nearest'})
%matplotlib notebook
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
sc = ax.scatter(res.theta.val.values, res.pos.val.values, c=res.sin.values, s=150, cmap='RdBu')
cb = fig.colorbar(sc)
cb.set_label('I [arb]')
ax.set_xlabel(r'$\theta$ [rad]')
ax.set_ylabel('pos [arb]')
ax.set_title('async flyscan + step scan')
fig.canvas.draw()
res
Explanation: Fly scan + stepscan
Do a step scan with one motor and a fly scan with another
End of explanation |
1,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy
Step1: If we want to multiply each number in that list by 3, we can certainly do it by looping through and multiplying each individual number by 3, but that seems like way too much work. Let's see what Numpy can do instead.
First, we need to import the Numpy package, which we commonly rename as "np."
Step2: Here, we created a numpy array from scratch. You can also convert a list or tuple into an array
Step3: There are a couple of ways to check if something is an array (as opposed to a list), but here's a really straight-forward way
Step4: Let's say you want to know what the first element in the array is. You can select elements of arrays the same way you do for lists
Step5: You can perform slices in the same way as well
Step6: All the elements of an array must be the same type. For instance, you can't have both integers and floats in a single array. If you already have an array of integers and you try to put a float in there, Numpy will automatically convert the value to match the array's type (a float assigned into an integer array gets truncated).
Step7: Array arithmetic is always done element-wise. That is, the arithmetic is performed on each individual element, one at a time.
Step8: What if you want an array, but you don't know what you want to put into it yet? If you know what size it needs to be, you can create the whole thing at once and make every element a 1 or a 0
Step9: Notice that you can specify whether you want the numbers to be floats (the default) or integers. You can do complex numbers too.
If you don't know what size your array will be, you can create an empty one and append elements to it
Step10: Extra
Step11: There are some advantages of using np.arange() over range(); one of the most important ones is that np.arange() can take floats, not just ints.
Step12: Numpy has some functions that make statistical calculations easy
Step13: We can use Numpy to select individual elements with certain properties. There are two ways to do this
Step14: Numpy is a great way to read in data from ASCII files, like the data files you got from the 20 m robotic telescope. Let's start by creating a data file and then reading it in, both using Numpy.
Step15: Save the array to a file
Step16: Now we can use Numpy to read the file into an array
Step17: Matplotlib
Step18: Matplotlib can create all the plots you need. For instance, we can plot our readThisData array from the Numpy session, which was two columns with 5 numbers each. Let's plot those numbers. It only takes a single line of code
Step19: If we want lines to connect the points
Step20: Here's another example of what Numpy and Matplotlib can do
Step21: Let's add some labels and titles to our plot
Step22: Let's change how this looks a little
Step23: Here's another kind of plot we can do
Step24: What if we want logarithmic axes? The easiest way is to use semilogx(), semilogy(), or loglog()
Step25: When you're in the IPython Notebook, plots show up as soon as you make them. If, on the other hand, you're writing a separate script and running from the command line (by typing in "python script.py" where script.py is the name of the script you just wrote) OR typing everything into the command line, you'll need to explicitly tell python to plot it. The same three lines above would instead be | Python Code:
a = [1,2,3]
b = 3*a
print b
Explanation: Numpy: "number" + "python"
Numpy is a Python package that is commonly used by scientists. You can already do some math in Python by itself, but Numpy makes things even easier.
We've already seen that Python has data structures such as lists, tuples, and dictionaries; <b>Numpy has arrays</b>. Arrays are just matrices, which you might have seen in math classes. With Numpy, we can do math on entire matrices at once. To start with, here's what happens if we make a list of numbers and try to multiply each number in that list by 3:
End of explanation
import numpy as np
a = np.array([1,2,3])
b = 3*a
print b
Explanation: If we want to multiply each number in that list by 3, we can certainly do it by looping through and multiplying each individual number by 3, but that seems like way too much work. Let's see what Numpy can do instead.
First, we need to import the Numpy package, which we commonly rename as "np."
End of explanation
c = [2,5,8]
print c
c = np.array(c)
print c
Explanation: Here, we created a numpy array from scratch. You can also convert a list or tuple into an array:
End of explanation
c
Explanation: There are a couple of ways to check if something is an array (as opposed to a list), but here's a really straight-forward way:
End of explanation
c[0]
c[1]
c[-2]
Explanation: Let's say you want to know what the first element in the array is. You can select elements of arrays the same way you do for lists:
End of explanation
c[0:2]
Explanation: You can perform slices in the same way as well:
End of explanation
d = np.array([0.0,1,2,3,4],dtype=np.float32)
print d,type(d)
d.dtype
d[0] = 35.21
print d
Explanation: All the elements of an array must be the same type. For instance, you can't have both integers and floats in a single array. If you already have an array of integers, and you try to put a float in there, Numpy will automatically convert it:
End of explanation
array1 = np.array((10, 20, 30))
array2 = np.array((1, 5, 10))
print array1 + array2
print array1 - array2
print 3*array1
print array1 * array2
Explanation: Array arithmetic is always done element-wise. That is, the arithmetic is performed on each individual element, one at a time.
End of explanation
ones_array = np.ones(5)
print ones_array
print 5*ones_array
zeros_array = np.zeros(5, int)
print zeros_array
Explanation: What if you want an array, but you don't know what you want to put into it yet? If you know what size it needs to be, you can create the whole thing at once and make every element a 1 or a 0:
End of explanation
a=[1,2,3]
a.append(4)
a
f = np.array(())
print f
f = np.append(f, 3)
f = np.append(f, 5)
# f.append(5)
print f
# Question: what if you want that 3 to be an integer?
g = np.append(f, (2,1,0))
print g
Explanation: Notice that you can specify whether you want the numbers to be floats (the default) or integers. You can do complex numbers too.
If you don't know what size your array will be, you can create an empty one and append elements to it:
End of explanation
print np.arange(10)
print np.arange(10.0)
print np.arange(1, 10)
print np.arange(1, 10, 2)
print np.arange(10, 1, -1)
Explanation: Extra: Figure out how to insert or delete elements
If you want an array of numbers in chronological order, Numpy has a very handy function called "arange" that we saw on the first day.
End of explanation
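The "Extra" exercise above asks about inserting and deleting elements; one possible sketch (the array g is just an illustration) uses np.insert and np.delete:
g = np.array([1, 2, 4, 5])
g = np.insert(g, 2, 3)   # insert the value 3 at index 2 -> [1 2 3 4 5]
g = np.delete(g, 0)      # remove the element at index 0 -> [2 3 4 5]
print g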
print range(5, 10, 0.1)
print np.arange(5, 10, 0.1)
Explanation: There are some advantages of using np.arange() over range(); one of the most important ones is that np.arange() can take floats, not just ints.
End of explanation
redshifts = np.array((0.2, 1.56, 6.3, 0.003, 0.9, 4.54, 1.1))
print redshifts.min(), redshifts.max()
print redshifts.sum(), redshifts.prod()
print redshifts.mean(), redshifts.std()
print redshifts.argmax()
print redshifts[redshifts.argmax()]
Explanation: Numpy has some functions that make statistical calculations easy:
End of explanation
close = np.where(redshifts < 1)
print close
print redshifts[close]
middle = np.where( (redshifts>1) & (redshifts<2))
print redshifts[middle]
far = redshifts > 2
print far
print redshifts[far]
Explanation: We can use Numpy to select individual elements with certain properties. There are two ways to do this:
End of explanation
saveThisData = np.random.rand(20,2) # This generates random numbers
# between 0 and 1 in the shape we tell it
print saveThisData
print saveThisData.shape
saveThisData = np.reshape(saveThisData, (2,20)) # If we don't like the shape we can reshape it
print saveThisData
Explanation: Numpy is a great way to read in data from ASCII files, like the data files you got from the 20 m robotic telescope. Let's start by creating a data file and then reading it in, both using Numpy.
End of explanation
np.savetxt('myData.txt', saveThisData)
Explanation: Save the array to a file:
End of explanation
readThisData = np.genfromtxt('myData.txt')
print readThisData
print readThisData[0]
print readThisData[:,0]
Explanation: Now we can use Numpy to read the file into an array:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Matplotlib: "math" + "plotting" + "library" (I think)
Matplotlib is a Python plotting package which can do anything from simple plots to complicated ones that are used in scientific papers.
We need to import the matplotlib.pyplot library, which is generally written as "plt" in shorthand. The second line below, ("%matplotlib inline") tells ipython to make the plots in this notebook; otherwise, the plots will appear in new windows.
End of explanation
print readThisData
plt.scatter(readThisData[0], readThisData[1])
Explanation: Matplotlib can create all the plots you need. For instance, we can plot our readThisData array from the Numpy session, which was two columns with 5 numbers each. Let's plot those numbers. It only takes a single line of code:
End of explanation
plt.plot(readThisData[0], readThisData[1])
plt.plot(readThisData[0], readThisData[1], 'o') # another way to make a scatter plot
Explanation: If we want lines to connect the points:
End of explanation
x = np.linspace(0, 2*np.pi, 50)
y = np.sin(x)
plt.plot(x, y)
Explanation: Here's another example of what Numpy and Matplotlib can do:
End of explanation
plt.plot(x, y, label='values')
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.legend()
plt.title("trigonometry!") # you can use double or single quotes
Explanation: Let's add some labels and titles to our plot:
End of explanation
plt.plot(x, y, 'o-', color='red', markersize=6, linewidth=2, label='values')
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title("trigonometry!")
Explanation: Let's change how this looks a little:
End of explanation
plt.hist(readThisData[0], bins=20)
Explanation: Here's another kind of plot we can do:
End of explanation
x = np.linspace(-5, 5)
y = np.exp(-x**2)
plt.semilogy(x, y)
Explanation: What if we want logarithmic axes? The easiest way is to use semilogx(), semilogy(), or loglog():
End of explanation
from IPython.display import Image
Image(filename='breakout.png')
data = np.genfromtxt('Skynet_57026_GradMap_Milky_Way_-_Third_Quad_11753_12572.spect.cyb.txt')
freq = data[:,0]
XL1 = data[:,1]
YR1 = data[:,2]
plt.plot(freq, XL1, 'blue', label='XL1')
plt.plot(freq, YR1, 'black', label='YR1', linestyle=':')
plt.xlim(1415,1428)
plt.ylim(100,350)
plt.xlabel('frequency (Hz)', size=14)
plt.ylabel('counts', size=14)
plt.legend()
plt.title('Milky Way third quadrant', size=14, weight='bold')
peak_index = np.argmax(XL1)
peak = freq[peak_index]
print peak
Explanation: When you're in the IPython Notebook, plots show up as soon as you make them. If, on the other hand, you're writing a separate script and running from the command line (by typing in "python script.py" where script.py is the name of the script you just wrote) OR typing everything into the command line, you'll need to explicitly tell python to plot it. The same three lines above would instead be:
x = np.linspace(-5,5)
y = np.exp(-x**2)
plt.semilogy(x, y)
plt.show()
There are a lot of other plots you can make with matplotlib. The best way to find out how to do something is to look at the gallery of examples: http://matplotlib.org/gallery.html
Breakout session
Plot data you took using the 20m Telescope over the weekend.
Save the "Spectrum data" ASCII file at http://tinyurl.com/nljjmdh. (This is just the second half of your observation called "GRADMAP MILKY WAY - THIRD QUAD".) Read in the file and plot the XL1 data.
Make your plot look exactly like the one below.
Find the frequency of the biggest peak.
Bonus: Find the frequency of the second, smaller peak.
End of explanation |
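One possible sketch for the bonus (assuming freq, XL1 and peak from above; the window width is an arbitrary choice): mask out the region around the main peak, then take the argmax of what is left.
mask = np.abs(freq - peak) > 1.0                      # ignore points near the main peak
second_peak = freq[np.argmax(np.where(mask, XL1, 0))] # biggest remaining value
print second_peak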
1,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment
Step1: Load and check data
Step2: ## Analysis
Experiment Details
Step3: What are optimal levels of hebbian and weight pruning
Step4: No relevant difference | Python Code:
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("../../")
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from dynamic_sparse.common.browser import *
Explanation: Experiment:
Evaluate pruning by magnitude weighted by coactivations.
Motivation:
Test the newly proposed method.
End of explanation
exps = ['improved_magpruning_test1', ]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
# replace NaN hebbian/weight prune percentages with 0.0
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
Explanation: Load and check data
End of explanation
# Did any trials fail?
df[df["epochs"]<30]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
Explanation: ## Analysis
Experiment Details
End of explanation
agg(['weight_prune_perc'])
multi2 = (df['weight_prune_perc'] % 0.2 == 0)
agg(['weight_prune_perc'], multi2)
Explanation: What are optimal levels of hebbian and weight pruning
End of explanation
pd.pivot_table(df[multi2],  # assumption: 'filter' was undefined here (it is a Python builtin); reusing the multi2 mask from above
index='hebbian_prune_perc',
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
df.shape
Explanation: No relevant difference
End of explanation |
1,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Capacity of the Binary-Input AWGN (BI-AWGN) Channel
This code is provided as supplementary material of the OFC short course SC468
This code illustrates
* Calculating the capacity of the binary input AWGN channel using numerical integration
* Capacity as a function of $E_s/N_0$ and $E_b/N_0$
Step1: Conditional pdf $f_{Y|X}(y|x)$ for a channel with noise variance (per dimension) $\sigma_n^2$. This is merely the Gaussian pdf with mean $x$ and variance $\sigma_n^2$
Step2: Output pdf $f_Y(y) = \frac12[f_{Y|X}(y|X=+1)+f_{Y|X}(y|X=-1)]$
Step3: This is the function we like to integrate, $f_Y(y)\cdot\log_2(f_Y(y))$. We need to take special care of the case when the input is 0, as we defined $0\cdot\log_2(0)=0$, which is usually treated as "nan"
Step4: Compute the capacity using numerical integration. We have
\begin{equation}
C_{\text{BI-AWGN}} = -\int_{-\infty}^\infty f_Y(y)\log_2(f_Y(y))\mathrm{d}y - \frac12\log_2(2\pi e\sigma_n^2)
\end{equation}
Step5: This is an alternative way of calculating the capacity by approximating the integral using the Gauss-Hermite Quadrature (https
Step6: Compute the capacity for a range of of $E_s/N_0$ values (given in dB)
Step7: Plot the capacity curves as a function of $E_s/N_0$ (in dB) and $E_b/N_0$ (in dB). In order to calculate $E_b/N_0$, we recall from the lecture that
\begin{equation}
\frac{E_s}{N_0} = r\cdot \frac{E_b}{N_0}\qquad\Rightarrow\qquad\frac{E_b}{N_0} = \frac{1}{r}\cdot \frac{E_s}{N_0}
\end{equation}
Next, we know that the best rate that can be achieved is the capacity, i.e., $r=C$. Hence, we get $\frac{E_b}{N_0}=\frac{1}{C}\cdot\frac{E_s}{N_0}$. Converting to decibels yields
\begin{align}
\frac{E_b}{N_0}\bigg|{\textrm{dB}} &= 10\cdot\log{10}\left(\frac{1}{C}\cdot\frac{E_s}{N_0}\right) \
&= 10\cdot\log_{10}\left(\frac{1}{C}\right) + 10\cdot\log_{10}\left(\frac{E_s}{N_0}\right) \
&= \frac{E_s}{N_0}\bigg|{\textrm{dB}} - 10\cdot\log{10}(C)
\end{align} | Python Code:
import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt
Explanation: Capacity of the Binary-Input AWGN (BI-AWGN) Channel
This code is provided as supplementary material of the OFC short course SC468
This code illustrates
* Calculating the capacity of the binary input AWGN channel using numerical integration
* Capacity as a function of $E_s/N_0$ and $E_b/N_0$
End of explanation
def f_YgivenX(y,x,sigman):
return np.exp(-((y-x)**2)/(2*sigman**2))/np.sqrt(2*np.pi)/sigman
Explanation: Conditional pdf $f_{Y|X}(y|x)$ for a channel with noise variance (per dimension) $\sigma_n^2$. This is merely the Gaussian pdf with mean $x$ and variance $\sigma_n^2$
End of explanation
def f_Y(y,sigman):
return 0.5*(f_YgivenX(y,+1,sigman)+f_YgivenX(y,-1,sigman))
Explanation: Output pdf $f_Y(y) = \frac12[f_{Y|X}(y|X=+1)+f_{Y|X}(y|X=-1)]$
End of explanation
def integrand(y, sigman):
value = f_Y(y,sigman)
if value < 1e-20:
return_value = 0
else:
return_value = value * np.log2(value)
return return_value
Explanation: This is the function we like to integrate, $f_Y(y)\cdot\log_2(f_Y(y))$. We need to take special care of the case when the input is 0, as we defined $0\cdot\log_2(0)=0$, which is usually treated as "nan"
End of explanation
def C_BIAWGN(sigman):
# numerical integration of the h(Y) part
integral = integrate.quad(integrand, -np.inf, np.inf, args=(sigman))[0]
# take into account h(Y|X)
return -integral - 0.5*np.log2(2*np.pi*np.exp(1)*sigman**2)
Explanation: Compute the capacity using numerical integration. We have
\begin{equation}
C_{\text{BI-AWGN}} = -\int_{-\infty}^\infty f_Y(y)\log_2(f_Y(y))\mathrm{d}y - \frac12\log_2(2\pi e\sigma_n^2)
\end{equation}
End of explanation
# alternative method using Gauss-Hermite Quadrature (see https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature)
# use 40 components to approximate the integral, should be sufficiently exact
x_GH, w_GH = np.polynomial.hermite.hermgauss(40)
print(w_GH)
def C_BIAWGN_GH(sigman):
integral_xplus1 = np.sum(w_GH * [np.log2(f_Y(np.sqrt(2)*sigman*xi + 1, sigman)) for xi in x_GH])
integral_xminus1 = np.sum(w_GH * [np.log2(f_Y(np.sqrt(2)*sigman*xi - 1, sigman)) for xi in x_GH])
integral = (integral_xplus1 + integral_xminus1)/2/np.sqrt(np.pi)
return -integral - 0.5*np.log2(2*np.pi*np.exp(1)*sigman**2)
Explanation: This is an alternative way of calculating the capacity by approximating the integral using the Gauss-Hermite Quadrature (https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature). The Gauss-Hermite quadrature states that
\begin{equation}
\int_{-\infty}^\infty e^{-x^2}f(x)\mathrm{d}x \approx \sum_{i=1}^nw_if(x_i)
\end{equation}
where $w_i$ and $x_i$ are the respective weights and roots that are given by the Hermite polynomials.
We have to rearrange the integral $I = \int_{-\infty}^\infty f_Y(y)\log_2(f_Y(y))\mathrm{d}y$ a little bit to put it into a form suitable for the Gauss-Hermite quadrature
\begin{align}
I &= \frac{1}{2}\sum_{x\in{\pm 1}}\int_{-\infty}^\infty f_{Y|X}(y|X=x)\log_2(f_Y(y))\mathrm{d}y \
&= \frac{1}{2}\sum_{x\in{\pm 1}}\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_n}e^{-\frac{(y-x)^2}{2\sigma_n^2}}\log_2(f_Y(y))\mathrm{d}y \
&\stackrel{(a)}{=} \frac{1}{2}\sum_{x\in{\pm 1}}\int_{-\infty}^\infty \frac{1}{\sqrt{\pi}}e^{-z^2}\log_2(f_Y(\sqrt{2}\sigma_n z + x))\mathrm{d}z \
&\approx \frac{1}{2\sqrt{\pi}}\sum_{x\in{\pm 1}} \sum_{i=1}^nw_i \log_2(f_Y(\sqrt{2}\sigma_n x_i + x))
\end{align}
where in $(a)$, we substitute $z = \frac{y-x}{\sqrt{2}\sigma}$
End of explanation
esno_dB_range = np.linspace(-16,10,100)
# convert dB to linear
esno_lin_range = [10**(esno_db/10) for esno_db in esno_dB_range]
# compute sigma_n
sigman_range = [np.sqrt(1/2/esno_lin) for esno_lin in esno_lin_range]
capacity_BIAWGN = [C_BIAWGN(sigman) for sigman in sigman_range]
# capacity of the AWGN channel
capacity_AWGN = [0.5*np.log2(1+1/(sigman**2)) for sigman in sigman_range]
Explanation: Compute the capacity for a range of of $E_s/N_0$ values (given in dB)
End of explanation
fig = plt.figure(1,figsize=(15,7))
plt.subplot(121)
plt.plot(esno_dB_range, capacity_AWGN)
plt.plot(esno_dB_range, capacity_BIAWGN)
plt.xlim((-10,10))
plt.ylim((0,2))
plt.xlabel('$E_s/N_0$ (dB)',fontsize=16)
plt.ylabel('Capacity (bit/channel use)',fontsize=16)
plt.grid(True)
plt.legend(['AWGN','BI-AWGN'],fontsize=14)
# plot Eb/N0 . Note that in this case, the rate that is used for calculating Eb/N0 is the capcity
# Eb/N0 = 1/r (Es/N0)
plt.subplot(122)
plt.plot(esno_dB_range - 10*np.log10(capacity_AWGN), capacity_AWGN)
plt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN), capacity_BIAWGN)
plt.xlim((-2,10))
plt.ylim((0,2))
plt.xlabel('$E_b/N_0$ (dB)',fontsize=16)
plt.ylabel('Capacity (bit/channel use)',fontsize=16)
plt.grid(True)
from scipy.stats import norm
# first compute the BSC error probability
# the Q function (1-CDF) is also often called survival function (sf)
delta_range = [norm.sf(1/sigman) for sigman in sigman_range]
capacity_BIAWGN_hard = [1+delta*np.log2(delta)+(1-delta)*np.log2(1-delta) for delta in delta_range]
fig = plt.figure(1,figsize=(15,7))
plt.subplot(121)
plt.plot(esno_dB_range, capacity_AWGN)
plt.plot(esno_dB_range, capacity_BIAWGN)
plt.plot(esno_dB_range, capacity_BIAWGN_hard)
plt.xlim((-10,10))
plt.ylim((0,2))
plt.xlabel('$E_s/N_0$ (dB)',fontsize=16)
plt.ylabel('Capacity (bit/channel use)',fontsize=16)
plt.grid(True)
plt.legend(['AWGN','BI-AWGN', 'Hard BI-AWGN'],fontsize=14)
# plot Eb/N0 . Note that in this case, the rate that is used for calculating Eb/N0 is the capcity
# Eb/N0 = 1/r (Es/N0)
plt.subplot(122)
plt.plot(esno_dB_range - 10*np.log10(capacity_AWGN), capacity_AWGN)
plt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN), capacity_BIAWGN)
plt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN_hard), capacity_BIAWGN_hard)
plt.xlim((-2,10))
plt.ylim((0,2))
plt.xlabel('$E_b/N_0$ (dB)',fontsize=16)
plt.ylabel('Capacity (bit/channel use)',fontsize=16)
plt.grid(True)
W = 4
Explanation: Plot the capacity curves as a function of $E_s/N_0$ (in dB) and $E_b/N_0$ (in dB). In order to calculate $E_b/N_0$, we recall from the lecture that
\begin{equation}
\frac{E_s}{N_0} = r\cdot \frac{E_b}{N_0}\qquad\Rightarrow\qquad\frac{E_b}{N_0} = \frac{1}{r}\cdot \frac{E_s}{N_0}
\end{equation}
Next, we know that the best rate that can be achieved is the capacity, i.e., $r=C$. Hence, we get $\frac{E_b}{N_0}=\frac{1}{C}\cdot\frac{E_s}{N_0}$. Converting to decibels yields
\begin{align}
\frac{E_b}{N_0}\bigg|_{\textrm{dB}} &= 10\cdot\log_{10}\left(\frac{1}{C}\cdot\frac{E_s}{N_0}\right) \\
&= 10\cdot\log_{10}\left(\frac{1}{C}\right) + 10\cdot\log_{10}\left(\frac{E_s}{N_0}\right) \\
&= \frac{E_s}{N_0}\bigg|_{\textrm{dB}} - 10\cdot\log_{10}(C)
\end{align}
End of explanation |
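As a small worked check of the conversion above (a sketch reusing C_BIAWGN from earlier; the chosen E_s/N_0 value is arbitrary):
esno_dB = 0.0                                  # pick an E_s/N_0 in dB
sigman = np.sqrt(1 / 2 / 10**(esno_dB / 10))   # corresponding noise standard deviation
C = C_BIAWGN(sigman)                           # best achievable rate r = C
ebno_dB = esno_dB - 10 * np.log10(C)           # E_b/N_0|dB = E_s/N_0|dB - 10*log10(C)
print(C, ebno_dB)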
1,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Idea
Get data
- Calculate the name length
- Calculate the chr set
- Calculate the chr set length
- Calculate the ratio for the chr set length and the name length
- Remove the duplicate letter sets
- Create dataframe with index=names, columns=alphabet
- Calculate the letter distribution
- Choose argmin(letter sum); The optimum set must have atleast one of these
- Iterate through all argmin(letter sum) names
Step1: Remove duplicates
Step2: Create letter table
Step3: Find argmin in the letter distribution
Step4: Recursion
Step5: The effective ratio criteria
Step6: The shortest name length criteria
Step7: Save the results
Step8: Check for duplicates
Step9: Validate results | Python Code:
names_df = pd.read_csv("./IMA_mineral_names.txt", sep=',', header=None, names=['names'])
names_df['names'] = names_df['names'].str.strip().str.lower()
names_df['len'] = names_df['names'].str.len()
names_df['tuple'] = names_df['names'].apply(lambda x: tuple(sorted(set(x))))
names_df['setlen'] = names_df['tuple'].apply(lambda x: len(x))
names_df['set_per_len'] = names_df['setlen']/names_df['len']
names_df.head(5)
len(names_df)
Explanation: Idea
Get data
- Calculate the name length
- Calculate the chr set
- Calculate the chr set length
- Calculate the ratio for the chr set length and the name length
- Remove the duplicate letter sets
- Create dataframe with index=names, columns=alphabet
- Calculate the letter distribution
- Choose argmin(letter sum); The optimum set must have atleast one of these
- Iterate through all argmin(letter sum) names:
- Recursion starts here
- Mark all name letters to False
- Update the letter distribution
- Choose argmin(letter sum); The optimum set must have atleast one of these, but due to n cutoff not all combinations are tested.
- Calculate the effective set length
- Calculate the effective ratio
- Choose the n first names with {the highest effective ratio / shortest length}
- Iterate through the chosen names
- The recursion ends here
Read data and calculate some properties
End of explanation
def sort_and_return_smallest(df):
if len(df) == 1:
return df
df = df.sort_values(by=['len', 'names'])
return df.iloc[:1, :]
%time names_set = names_df.groupby(by='tuple', as_index=False).apply(sort_and_return_smallest)
len(names_set)
def sort_and_return_smallest_duplicates(df):
if len(df) == 1:
return list(df['names'])
df = df.sort_values(by=['len', 'names'])
names = df.loc[df['len'] == df['len'].iloc[0], 'names']
return list(names)
%time names_duplicates = names_df.groupby(by='tuple', as_index=False).apply(sort_and_return_smallest_duplicates)
len(names_duplicates)
# In case some of these are in the chosen set
duplicate_name_dict = {}
for value in names_duplicates:
if len(value) > 1:
duplicate_name_dict[value[0]] = value[1:]
names_set.set_index(['names'], inplace=True)
names_set.head()
Explanation: Remove duplicates
End of explanation
letter_df = pd.DataFrame(index=names_set.index, columns=list(string.ascii_lowercase), dtype=bool)
letter_df.loc[:] = False
%%time
for name, set_ in zip(names_set.index, names_set['tuple']):
for letter in set_:
letter_df.loc[name, letter] = True
Explanation: Create letter table
End of explanation
lowest_count_letter = letter_df.sum(0).argmin()
lowest_count_letter
# Get subset based on the chosen letter
subsetlen = letter_df[letter_df[lowest_count_letter]].sum(1)
name_len = subsetlen.index.str.len()
setlen = pd.DataFrame({'set_per_len' : subsetlen/name_len, 'len' : name_len})
setlen.head()
Explanation: Find argmin in the letter distribution
End of explanation
def get_min_set(df, current_items, m=46, sort_by_len=False, n_search=20):
# Gather results
results = []
# Get letter with lowest number of options
letter = df.sum(0)
letter = letter[letter > 0].argmin()
# Get subset based on the chosen letter
subsetlen = df.loc[df[letter], :].sum(1)
name_len = subsetlen.index.str.len()
setlen = pd.DataFrame({'set_per_len' : subsetlen/name_len, 'len' : name_len})
if sort_by_len:
order_of_operations = setlen.sort_values(by=['len', 'set_per_len'], ascending=True).index
else:
order_of_operations = setlen.sort_values(by=['set_per_len', 'len'], ascending=False).index
# Loop over the mineral names with chosen letter
# Ordered based on the (setlen / len)
for i, (name, letter_bool) in enumerate(df.loc[order_of_operations, :].iterrows()):
if i > n_search:
break
if sum(map(len, current_items))+len(name) >= m:
continue
# Get df containing rest of the letters
df_ = df.copy()
df_.loc[:, letter_bool] = False
# If letters are exhausted there is one result
# Check if the result is less than chosen limit m
if df_.sum(0).sum() == 0 and sum(map(len, current_items))+len(name) < m:
# This result is "the most optimal" under these names
current_items_ = current_items + [name]
len_current_items_ = sum(map(len, current_items_))
len_unique = len(set("".join(current_items_)))
results.append((len_current_items_, current_items_))
if len_current_items_ < 41:
print("len", len_current_items_, "len_unique", len_unique, current_items_, "place 1", flush=True)
continue
# Remove mineral names without new letters
df_ = df_.loc[df_.sum(1) != 0, :]
if df_.sum(0).sum() == 0:
if sum(map(len, current_items))+len(name) < m:
unique_letters = sum(map(len, map(set, current_items + [name])))
if unique_letters == len(string.ascii_lowercase):
# Here is one result (?)
current_items_ = current_items + [name]
len_current_items_ = sum(map(len, current_items_))
len_unique = len(set("".join(current_items_)))
results.append((len_current_items_, current_items_))
if len_current_items_ < 41:
print("len", len_current_items_, "len_unique", len_unique, current_items_, "place 1", flush=True)
continue
current_items_ = current_items + [name]
optimal_result = get_min_set(df_, current_items_, m=m, sort_by_len=sort_by_len, n_search=n_search)
if len(optimal_result):
results.extend(optimal_result)
return results
Explanation: Recursion
End of explanation
%%time
res_list = []
order_of_oparations = setlen.loc[letter_df.loc[:, lowest_count_letter], :].sort_values(by=['set_per_len', 'len'], ascending=False).index
for i, (name, letter_bool) in enumerate(letter_df.ix[order_of_oparations].iterrows()):
print(name, i+1, "/", len(order_of_oparations), flush=True)
df_ = letter_df.copy()
df_.loc[:, letter_bool] = False
res = get_min_set(df_, [name], m=45, sort_by_len=False, n_search=20)
res_list.extend(res)
res_df = pd.DataFrame([[item[0]] + item[1] for item in res_list]).sort_values(by=0)
res_df.head()
Explanation: The effective ratio criteria
End of explanation
%%time
res_list_ = []
order_of_oparations = setlen.loc[letter_df.loc[:, lowest_count_letter], :].sort_values(by=['set_per_len', 'len'], ascending=False).index
for i, (name, letter_bool) in enumerate(letter_df.ix[order_of_oparations].iterrows()):
print(name, i+1, "/", len(order_of_oparations), flush=True)
df_ = letter_df.copy()
df_.loc[:, letter_bool] = False
res_ = get_min_set(df_, [name], m=45, sort_by_len=True, n_search=20)
res_list_.extend(res_)
#res_df_ = pd.DataFrame([[item[0]] + item[1] for item in res_list_]).sort_values(by=0)
res_df.shape #, res_df_.shape
Explanation: The shortest name length criteria
End of explanation
%time res_df.to_csv("./example_but_not_optimum_no_duplicates.csv")
optimum = res_df[res_df[0] == res_df.iloc[0, 0]]
Explanation: Save the results
End of explanation
optimum.iloc[:, 1:].applymap(lambda x: duplicate_name_dict.get(x, None))
optimum
Explanation: Check for duplicates
End of explanation
optimum.apply(lambda x: "".join(sorted(set("".join(x.iloc[1:6].values)))) == string.ascii_lowercase, axis=1)
Explanation: Validate results
End of explanation |
1,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Metacritic and ROI Analysis
Step1: Load Dataset
Step2: Metacritic Ratings Representation
Step3: ROI Representation
Step4: Save Dataset
Step5: Metacritic VS. ROI
Step6: We can see that the ROI and the ratings are not correlated as the ROI doesn't necessarily increases for good movies
Step7: How to determine the success of a movie ?
Try
Step8: Create Normalized Metacritic Weight Matrix
$$ W(i,j) = \begin{cases}
0 & \text{if } Metacritic(i) = 0 \text{ or } Metacritic(j) = 0 \\
1-\frac{abs(Metacritic(i) - Metacritic(j))}{100} & \text{otherwise} \end{cases}$$
Step9: Save as csv
Step10: Embedding | Python Code:
%matplotlib inline
import configparser
import os
import requests
from tqdm import tqdm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from scipy import sparse, stats, spatial
import scipy.sparse.linalg
from sklearn import preprocessing, decomposition
import librosa
import IPython.display as ipd
import json
#added by me:
import requests
from pygsp import graphs, filters, plotting
plt.rcParams['figure.figsize'] = (17, 5)
plotting.BACKEND = 'matplotlib'
%pylab inline
pylab.rcParams['figure.figsize'] = (10, 6);
Explanation: Metacritic and ROI Analysis
End of explanation
df = pd.read_csv('Saved_Datasets/NewFeaturesDataset.csv')
#df = df[df['Metacritic'] != 0]
df.head()
Explanation: Load Dataset
End of explanation
unique, counts = np.unique(df['Metacritic'], return_counts=True)
plt.bar(unique,counts,align='center',width=.6);
ratings_nz = np.array(df[df['Metacritic'] != 0]['Metacritic'])
mu = np.mean(ratings_nz)
std = np.std(ratings_nz)
plt.xlabel('Ratings')
plt.ylabel('Counts')
plt.title("Metacritic Ratings ($ \mu=%.2f,$ $\sigma=%.2f $)" %(mu,std));
plt.savefig('images/Metacritic_distribution.png')
Explanation: Metacritic Ratings Representation
End of explanation
plt.hist(df['ROI'],bins='auto');
data = np.array(df['ROI'])
# This is the colormap I'd like to use.
cm = plt.cm.get_cmap('RdYlGn');
# Plot histogram.
n, bins, patches = plt.hist(data, 25, normed=1, color='yellow');
bin_centers = 0.5 * (bins[:-1] + bins[1:]);
# scale values to interval [0,1]
col = bin_centers - min(bin_centers)
col /= max(col)
for c, p in zip(col, patches):
plt.setp(p, 'facecolor', cm(c));
plt.xlabel('ROI');
plt.savefig('images/ROI_regression.png');
plt.show();
np.percentile(df['ROI'], 75)
Explanation: ROI Representation
End of explanation
df.to_csv('Saved_Datasets/NewFeaturesDataset.csv', encoding='utf-8', index=False)
Explanation: Save Dataset
End of explanation
print("%.2f" % (len(df[df['ROI']>1])/len(df)*100))
print("%.2f" % (len(df[df['Metacritic']>50])/len(df)*100))
Explanation: Metacritic VS. ROI
End of explanation
df_sorted = df.sort_values(by=['Metacritic'])
plt.plot(df_sorted['Metacritic'],df_sorted['ROI'])
plt.xlabel('Metacritic Ratings')
plt.ylabel('ROI')
plt.title('Evolution of ROI according to Metacritic ratings');
plt.savefig('images/roi_vs_metacritic.png')
Explanation: We can see that the ROI and the ratings are not correlated, as the ROI doesn't necessarily increase for good movies:
End of explanation
df_roi_sorted = df.sort_values(by=['ROI'],ascending=False)
df_met_sorted = df.sort_values(by=['Metacritic'],ascending=False)
mean_roi, mean_met = [], []
for r in np.arange(0.01, 1.0, 0.01):
limit_roi = df_roi_sorted.iloc[int(len(df)*r)]['ROI']
limit_met = df_met_sorted.iloc[int(len(df)*r)]['Metacritic']
success_roi = df[df['ROI'] > limit_roi]
success_met = df[df['Metacritic'] > limit_met]
mean_roi.append([r,np.mean(success_roi['Metacritic'])])
mean_met.append([r,np.mean(success_met['ROI'])])
mean_roi = np.array(mean_roi)
mean_met = np.array(mean_met)
f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(mean_roi[:,0],mean_roi[:,1]);
axarr[0].set_ylabel('Metacritic Mean')
axarr[1].plot(mean_met[:,0],mean_met[:,1]);
axarr[1].set_xlabel('Success/Failure Ratio')
axarr[1].set_ylabel('ROI')
f.subplots_adjust(hspace=0);
plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False);
ratio = 0.2
df_sorted = df.sort_values(by=['ROI'],ascending=False)
limit_roi = df_sorted.iloc[int(len(df)*ratio)]['ROI']
success = df[df['ROI'] > limit_roi]
failure = df[df['ROI'] <= limit_roi]
print("The ROI needed to be a successful movie is: "+str(limit_roi)[:4])
print("There are "+str(int(len(df)*ratio))+" successful movies in the dataset.")
Explanation: How to determine the success of a movie ?
Try: consider that the 30% of the movies with the highest ROI are the successful movies.
To determine an optimal ratio to use, try to find a high enough ratio which leads to a maximum metacritic mean:
End of explanation
df = pd.read_csv('Saved_Datasets/NewFeaturesDataset.csv')
df = df.drop(df[df.Metacritic == 0].index)
crit_norm = np.array(df['Metacritic'])
w = np.zeros((len(df),len(df)))
for i in range(0,len(df)):
for j in range(i,len(df)):
if (i == j):
w[i,j] = 0
continue
if (crit_norm[i] == 0 or crit_norm[j] == 0):
w[i,j] = w[j,i] = 0
else:
w[i,j] = w[j,i] = 1.0 - (abs(crit_norm[i]-crit_norm[j])/100)
plt.hist(w.reshape(-1), bins=50);
plt.title('Metacritic weights matrix histogram')
plt.savefig('images/metacritic_weights_hist.png')
print('The mean value is: {}'.format(w.mean()))
print('The max value is: {}'.format(w.max()))
print('The min value is: {}'.format(w.min()))
plt.spy(w)
Explanation: Create Normalized Metacritic Weight Matrix
$$ W(i,j) = \begin{cases}
0 & \text{if } Metacritic(i) = 0 \text{ or } Metacritic(j) = 0 \\
1-\frac{abs(Metacritic(i) - Metacritic(j))}{100} & \text{otherwise} \end{cases}$$
End of explanation
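A vectorized sketch of the same weight matrix using NumPy broadcasting (assuming crit_norm and w from above); this is only an alternative to the double loop, not a change to the method:
diff = np.abs(crit_norm[:, None] - crit_norm[None, :])
w_alt = 1.0 - diff / 100.0
w_alt[(crit_norm[:, None] == 0) | (crit_norm[None, :] == 0)] = 0.0   # zero weight when a rating is missing
np.fill_diagonal(w_alt, 0.0)
print(np.allclose(w_alt, w))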
W = pd.DataFrame(w)
W.head()
W.to_csv('Saved_Datasets/NormalizedMetacriticW.csv', encoding='utf-8', index=False)
Explanation: Save as csv
End of explanation
degrees = np.zeros(len(w))
#reminder: the degrees of a node for a weighted graph are the sum of its weights
for i in range(0, len(w)):
degrees[i] = sum(w[i])
plt.hist(degrees, bins=50);
#reminder: L = D - W for weighted graphs
laplacian = np.diag(degrees) - w
#computation of the normalized Laplacian
laplacian_norm = scipy.sparse.csgraph.laplacian(w, normed = True)
plt.spy(laplacian_norm);
plt.spy(np.diag(degrees))
NEIGHBORS = 300
#sort the order of the weights
sort_order = np.argsort(w, axis = 1)
#declaration of a sorted weight matrix
sorted_weights = np.zeros((len(w), len(w)))
for i in range (0, len(w)):
for j in range(0, len(w)):
if (j >= len(w) - NEIGHBORS):
#copy the k strongest edges for each node
sorted_weights[i, sort_order[i,j]] = w[i,sort_order[i,j]]
else:
#set the other edges to zero
sorted_weights[i, sort_order[i,j]] = 0
#ensure the matrix is symmetric
bigger = sorted_weights.transpose() > sorted_weights
sorted_weights = sorted_weights - sorted_weights*bigger + sorted_weights.transpose()*bigger
np.fill_diagonal(sorted_weights, 0)
plt.spy(sorted_weights)
#reminder: L = D - W for weighted graphs
laplacian = np.diag(degrees) - sorted_weights
#computation of the normalized Laplacian
laplacian_norm = scipy.sparse.csgraph.laplacian(sorted_weights, normed = True)
np.fill_diagonal(laplacian_norm, 1)
plt.spy(laplacian_norm);
laplacian_norm = sparse.csr_matrix(laplacian_norm)
eigenvalues, eigenvectors = sparse.linalg.eigsh(laplacian_norm, k = 10, which = 'SM')
plt.plot(eigenvalues, '.-', markersize=15);
plt.xlabel('')
plt.ylabel('Eigenvalues')
plt.show()
success = preprocessing.LabelEncoder().fit_transform(df['success'])
print(success)
x = eigenvectors[:, 1]
y = eigenvectors[:, 2]
plt.scatter(x, y, c=success, cmap='RdBu', alpha=0.5);
G = graphs.Graph(sorted_weights)
G.compute_laplacian('normalized')
G.compute_fourier_basis(recompute=True)
plt.plot(G.e[0:10]);
G.set_coordinates(G.U[:,1:3])
G.plot()
G.plot_signal(success, vertex_size=20)
Explanation: Embedding
End of explanation |
1,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
L7CA - Lesson7 CAM
2018/1/23 02
Step1: The version of resnet that happens to be the best is the preact resnet. They have different internal orderings of conv,pool,res,bn,relu,etc.
We want to take our standard resnet model here and finetune it for dogs and cats. So we need to remove the last layer
Step2: This model won't have a Full layer at the end. Instaed, the last layer has a Conv2d which outputs 2 7x7 filters, which will go through average pooling and come out as 2 numbers. This is a different way of producing two output numbers.
The reason for this we can do Class Activation Maps -- we can ask the model which parts of an image happened to be important
Step3: nn.Conv2d(512, 2, 3, padding=1) is the 4th from last layer, so we'll freeze up to it.
Step4: 2. CAM
Step5: This matrix is produced to create the heat map. It's just equal to the value of the feature matrix feat times the py vector. Where feat is the relu of the last conv layer's activations, and py is simply eual to the predictions.
Multiplying py with feat zeros-out all dog predictions in the 2nd channel of the 2x7x7 tensor, and retrieves the cat predictions in the 1st.
Put another way, in our model the only thing that happened after the last Conv layer was an Average Pooling layer. That layer took the 7x7 grid and averaged out how much each part is cat-like. So the final prediction was the average cattiness of the entire image. Since it had to average it all; I can just take that input matrix (that it would average), and instead resize it to the image size and overlay it on top.
Step6: And the result of the Class Activation Map is a heat map of activations | Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
PATH = 'data/dogscats/'
sz = 224
arch = resnet34
bs = 32
m = arch(True)
m
Explanation: L7CA - Lesson7 CAM
2018/1/23 02:05
Continuation of L7CA_lesson7-cifar10.ipynb
Lecture: https://youtu.be/H3g26EVADgY?t=7340
End of explanation
m = nn.Sequential(*children(m)[:-2],
nn.Conv2d(512, 2, 3, padding=1),
nn.AdaptiveAvgPool2d(1), Flatten(), # 2 layers here
nn.LogSoftmax())
Explanation: The version of resnet that happens to be the best is the preact resnet. They have different internal orderings of conv,pool,res,bn,relu,etc.
We want to take our standard resnet model here and finetune it for dogs and cats. So we need to remove the last layer: (fc): Linear(in_features=512, out_features=1000)
Using ConvLearner.pretrained in FastAI deleted the last two layers:
(avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0, ceil_mode=False, count_include_pad=True)
(fc): Linear(in_features=512, out_features=1000)
The FastAI library is also the first/only library to replace the penultimate layer with a concatenated Adaptive + Max pooling layer.
For this exercise we'll do a simple version where we grab all the children of the model, delete the last two layers, and add a convolution which just has two outputs. THen do a average pooling, and a softmax.
End of explanation
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs)
learn = ConvLearner.from_model_data(m, data)
Explanation: This model won't have a fully connected (FC) layer at the end. Instead, the last layer is a Conv2d which outputs two 7x7 feature maps, which go through average pooling and come out as 2 numbers. This is a different way of producing the two output numbers.
The reason for this is so we can do Class Activation Maps -- we can ask the model which parts of an image happened to be important
End of explanation
learn.freeze_to(-4)
m[-1].trainable
m[-4].trainable
m[-4]
learn.fit(0.01, 1)
learn.fit(0.01, 1, cycle_len=1)
Explanation: nn.Conv2d(512, 2, 3, padding=1) is the 4th from last layer, so we'll freeze up to it.
End of explanation
class SaveFeatures():
features=None
def __init__(self, m): self.hook = m.register_forward_hook(self.hook_fn)
def hook_fn(self, module, input, output): self.features = to_np(output)
def remove(self): self.hook.remove()
x,y = next(iter(data.val_dl))
x,y = x[None, 1], y[None, 1]
vx = Variable(x.cuda(), requires_grad=True)
dx = data.val_ds.denorm(x)[0]
plt.imshow(dx);
sf = SaveFeatures(m[-4])
py = m(Variable(x.cuda()))
sf.remove()
py = np.exp(to_np(py)[0]); py
feat = np.maximum(0, sf.features[0])
feat.shape
Explanation: 2. CAM
End of explanation
f2 = np.dot(np.rollaxis(feat, 0, 3), py)
f2 -= f2.min()
f2 /= f2.max()
f2
Explanation: This matrix is produced to create the heat map. It's just equal to the feature matrix feat times the py vector, where feat is the relu of the last conv layer's activations and py is simply equal to the predictions.
Multiplying py with feat zeros-out all dog predictions in the 2nd channel of the 2x7x7 tensor, and retrieves the cat predictions in the 1st.
Put another way, in our model the only thing that happened after the last Conv layer was an Average Pooling layer. That layer took the 7x7 grid and averaged out how much each part is cat-like. So the final prediction was the average cattiness of the entire image. Since it had to average it all; I can just take that input matrix (that it would average), and instead resize it to the image size and overlay it on top.
End of explanation
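A tiny sketch with dummy values (not the real activations) of the shapes involved: average pooling of the 2x7x7 maps gives the two class scores, so the per-class 7x7 grid is exactly what gets averaged.
feat_demo = np.random.rand(2, 7, 7)                          # stand-in for the 2x7x7 activations
py_demo = np.array([0.9, 0.1])                               # stand-in for the two predictions
heat_demo = np.dot(np.rollaxis(feat_demo, 0, 3), py_demo)    # weighted map, shape (7, 7)
print(heat_demo.shape, feat_demo.mean(axis=(1, 2)))          # per-class means = what average pooling returns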
plt.imshow(dx)
plt.imshow(scipy.misc.imresize(f2, dx.shape), alpha=0.5, cmap='hot');
Explanation: And the result of the Class Activation Map is a heat map of activations:
End of explanation |
1,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='intro'></a>
Introduction
The overall goal of this project is to build a word recognizer for American Sign Language video sequences, demonstrating the power of probabalistic models. In particular, this project employs hidden Markov models (HMM's) to analyze a series of measurements taken from videos of American Sign Language (ASL) collected for research (see the RWTH-BOSTON-104 Database). In this video, the right-hand x and y locations are plotted as the speaker signs the sentence.
The raw data, train, and test sets are pre-defined. You will derive a variety of feature sets (explored in Part 1), as well as implement three different model selection criterion to determine the optimal number of hidden states for each word model (explored in Part 2). Finally, in Part 3 you will implement the recognizer and compare the effects the different combinations of feature sets and model selection criteria.
At the end of each Part, complete the submission cells with implementations, answer all questions, and pass the unit tests. Then submit the completed notebook for review!
<a id='part1_tutorial'></a>
PART 1
Step1: The frame represented by video 98, frame 1 is shown here
Step2: Try it!
Step3: Build the training set
Now that we have a feature list defined, we can pass that list to the build_training method to collect the features for all the words in the training set. Each word in the training set has multiple examples from various videos. Below we can see the unique words that have been loaded into the training set
Step4: The training data in training is an object of class WordsData defined in the asl_data module. in addition to the words list, data can be accessed with the get_all_sequences, get_all_Xlengths, get_word_sequences, and get_word_Xlengths methods. We need the get_word_Xlengths method to train multiple sequences with the hmmlearn library. In the following example, notice that there are two lists; the first is a concatenation of all the sequences(the X portion) and the second is a list of the sequence lengths(the Lengths portion).
Step5: More feature sets
So far we have a simple feature set that is enough to get started modeling. However, we might get better results if we manipulate the raw values a bit more, so we will go ahead and set up some other options now for experimentation later. For example, we could normalize each speaker's range of motion with grouped statistics using Pandas stats functions and pandas groupby. Below is an example for finding the means of all speaker subgroups.
Step6: To select a mean that matches by speaker, use the pandas map method
Step7: Try it!
Step8: <a id='part1_submission'></a>
Features Implementation Submission
Implement four feature sets and answer the question that follows.
- normalized Cartesian coordinates
- use mean and standard deviation statistics and the standard score equation to account for speakers with different heights and arm length
polar coordinates
calculate polar coordinates with Cartesian to polar equations
use the np.arctan2 function and swap the x and y axes to move the $0$ to $2\pi$ discontinuity to 12 o'clock instead of 3 o'clock; in other words, the normal break in radians value from $0$ to $2\pi$ occurs directly to the left of the speaker's nose, which may be in the signing area and interfere with results. By swapping the x and y axes, that discontinuity move to directly above the speaker's head, an area not generally used in signing.
delta difference
as described in Thad's lecture, use the difference in values between one frame and the next frames as features
pandas diff method and fillna method will be helpful for this one
custom features
These are your own design; combine techniques used above or come up with something else entirely. We look forward to seeing what you come up with!
Some ideas to get you started
Step9: Question 1
Step10: <a id='part2_tutorial'></a>
PART 2
Step11: The HMM model has been trained and information can be pulled from the model, including means and variances for each feature and hidden state. The log likelihood for any individual sample or group of samples can also be calculated with the score method.
Step12: Try it!
Experiment by changing the feature set, word, and/or num_hidden_states values in the next cell to see changes in values.
Step14: Visualize the hidden states
We can plot the means and variances for each state and feature. Try varying the number of states trained for the HMM model and examine the variances. Are there some models that are "better" than others? How can you tell? We would like to hear what you think in the classroom online.
Step15: ModelSelector class
Review the ModelSelector class from the codebase found in the my_model_selectors.py module. It is designed to be a strategy pattern for choosing different model selectors. For the project submission in this section, subclass SelectorModel to implement the following model selectors. In other words, you will write your own classes/functions in the my_model_selectors.py module and run them from this notebook
Step16: Cross-validation folds
If we simply score the model with the Log Likelihood calculated from the feature sequences it has been trained on, we should expect that more complex models will have higher likelihoods. However, that doesn't tell us which would have a better likelihood score on unseen data. The model will likely be overfit as complexity is added. To estimate which topology model is better using only the training data, we can compare scores using cross-validation. One technique for cross-validation is to break the training set into "folds" and rotate which fold is left out of training. The "left out" fold scored. This gives us a proxy method of finding the best model to use on "unseen data". In the following example, a set of word sequences is broken into three folds using the scikit-learn Kfold class object. When you implement SelectorCV, you will use this technique.
Step17: Tip
Step18: Question 2
Step19: <a id='part3_tutorial'></a>
PART 3
Step20: Load the test set
The build_test method in ASLdb is similar to the build_training method already presented, but there are a few differences
Step21: <a id='part3_submission'></a>
Recognizer Implementation Submission
For the final project submission, students must implement a recognizer following guidance in the my_recognizer.py module. Experiment with the four feature sets and the three model selection methods (that's 12 possible combinations). You can add and remove cells for experimentation or run the recognizers locally in some other way during your experiments, but retain the results for your discussion. For submission, you will provide code cells of only three interesting combinations for your discussion (see questions below). At least one of these should produce a word error rate of less than 60%, i.e. WER < 0.60 .
Tip
Step22: Question 3
Step23: <a id='part4_info'></a>
PART 4 | Python Code:
import math
import numpy as np
import pandas as pd
from asl_data import AslDb
asl = AslDb() # initializes the database
asl.df.head() # displays the first five rows of the asl database, indexed by video and frame
asl.df.ix[98,1] # look at the data available for an individual frame
Explanation: <a id='intro'></a>
Introduction
The overall goal of this project is to build a word recognizer for American Sign Language video sequences, demonstrating the power of probabalistic models. In particular, this project employs hidden Markov models (HMM's) to analyze a series of measurements taken from videos of American Sign Language (ASL) collected for research (see the RWTH-BOSTON-104 Database). In this video, the right-hand x and y locations are plotted as the speaker signs the sentence.
The raw data, train, and test sets are pre-defined. You will derive a variety of feature sets (explored in Part 1), as well as implement three different model selection criterion to determine the optimal number of hidden states for each word model (explored in Part 2). Finally, in Part 3 you will implement the recognizer and compare the effects the different combinations of feature sets and model selection criteria.
At the end of each Part, complete the submission cells with implementations, answer all questions, and pass the unit tests. Then submit the completed notebook for review!
<a id='part1_tutorial'></a>
PART 1: Data
Features Tutorial
Load the initial database
A data handler designed for this database is provided in the student codebase as the AslDb class in the asl_data module. This handler creates the initial pandas dataframe from the corpus of data included in the data directory as well as dictionaries suitable for extracting data in a format friendly to the hmmlearn library. We'll use those to create models in Part 2.
To start, let's set up the initial database and select an example set of features for the training set. At the end of Part 1, you will create additional feature sets for experimentation.
End of explanation
asl.df['grnd-ry'] = asl.df['right-y'] - asl.df['nose-y']
asl.df.head() # the new feature 'grnd-ry' is now in the frames dictionary
Explanation: The frame represented by video 98, frame 1 is shown here:
Feature selection for training the model
The objective of feature selection when training a model is to choose the most relevant variables while keeping the model as simple as possible, thus reducing training time. We can use the raw features already provided or derive our own and add columns to the pandas dataframe asl.df for selection. As an example, in the next cell a feature named 'grnd-ry' is added. This feature is the difference between the right-hand y value and the nose y value, which serves as the "ground" right y value.
End of explanation
from asl_utils import test_features_tryit
# TODO add df columns for 'grnd-rx', 'grnd-ly', 'grnd-lx' representing differences between hand and nose locations
asl.df['grnd-lx'] = asl.df['left-x'] - asl.df['nose-x']
asl.df['grnd-ly'] = asl.df['left-y'] - asl.df['nose-y']
asl.df['grnd-rx'] = asl.df['right-x'] - asl.df['nose-x']
asl.df['grnd-ry'] = asl.df['right-y'] - asl.df['nose-y']
asl.df.head()
# test the code
test_features_tryit(asl)
# collect the features into a list
features_ground = ['grnd-rx','grnd-ry','grnd-lx','grnd-ly']
#show a single set of features for a given (video, frame) tuple
[asl.df.ix[98,1][v] for v in features_ground]
Explanation: Try it!
End of explanation
training = asl.build_training(features_ground)
print("Training words: {}".format(training.words))
Explanation: Build the training set
Now that we have a feature list defined, we can pass that list to the build_training method to collect the features for all the words in the training set. Each word in the training set has multiple examples from various videos. Below we can see the unique words that have been loaded into the training set:
End of explanation
training.get_word_Xlengths('CHOCOLATE')
Explanation: The training data in training is an object of class WordsData defined in the asl_data module. in addition to the words list, data can be accessed with the get_all_sequences, get_all_Xlengths, get_word_sequences, and get_word_Xlengths methods. We need the get_word_Xlengths method to train multiple sequences with the hmmlearn library. In the following example, notice that there are two lists; the first is a concatenation of all the sequences(the X portion) and the second is a list of the sequence lengths(the Lengths portion).
End of explanation
df_means = asl.df.groupby('speaker').mean()
df_means
Explanation: More feature sets
So far we have a simple feature set that is enough to get started modeling. However, we might get better results if we manipulate the raw values a bit more, so we will go ahead and set up some other options now for experimentation later. For example, we could normalize each speaker's range of motion with grouped statistics using Pandas stats functions and pandas groupby. Below is an example for finding the means of all speaker subgroups.
End of explanation
asl.df['left-x-mean']= asl.df['speaker'].map(df_means['left-x'])
asl.df.head()
Explanation: To select a mean that matches by speaker, use the pandas map method:
End of explanation
from asl_utils import test_std_tryit
# TODO Create a dataframe named `df_std` with standard deviations grouped by speaker
df_std = asl.df.groupby('speaker').std()
# test the code
test_std_tryit(df_std)
Explanation: Try it!
End of explanation
# TODO add features for normalized by speaker values of left, right, x, y
# Name these 'norm-rx', 'norm-ry', 'norm-lx', and 'norm-ly'
# using Z-score scaling (X-Xmean)/Xstd
# Left mean values
asl.df['mean-lx'] = asl.df['speaker'].map(df_means['left-x'])
asl.df['mean-ly'] = asl.df['speaker'].map(df_means['left-y'])
# Left standard deviation values
asl.df['std-lx'] = asl.df['speaker'].map(df_std['left-x'])
asl.df['std-ly'] = asl.df['speaker'].map(df_std['left-y'])
# Right mean values
asl.df['mean-rx'] = asl.df['speaker'].map(df_means['right-x'])
asl.df['mean-ry'] = asl.df['speaker'].map(df_means['right-y'])
# Right standard deviation values
asl.df['std-rx'] = asl.df['speaker'].map(df_std['right-x'])
asl.df['std-ry'] = asl.df['speaker'].map(df_std['right-y'])
# Calculating normalized values using the standard score equation
asl.df['norm-lx'] = (asl.df['left-x'] - asl.df['mean-lx']) / asl.df['std-lx']
asl.df['norm-ly'] = (asl.df['left-y'] - asl.df['mean-ly']) / asl.df['std-ly']
asl.df['norm-rx'] = (asl.df['right-x'] - asl.df['mean-rx']) / asl.df['std-rx']
asl.df['norm-ry'] = (asl.df['right-y'] - asl.df['mean-ry']) / asl.df['std-ry']
features_norm = ['norm-rx', 'norm-ry', 'norm-lx', 'norm-ly']
import math
# TODO add features for polar coordinate values where the nose is the origin
# Name these 'polar-rr', 'polar-rtheta', 'polar-lr', and 'polar-ltheta'
# Note that 'polar-rr' and 'polar-rtheta' refer to the radius and angle
# Left polar coordinate values
asl.df['polar-lr'] = ((asl.df['grnd-lx'] ** 2) + (asl.df['grnd-ly'] ** 2)) ** (0.5)
asl.df['polar-ltheta'] = np.arctan2(asl.df['grnd-lx'], asl.df['grnd-ly'])
# Right polar coordinate values
asl.df['polar-rr'] = ((asl.df['grnd-rx'] ** 2) + (asl.df['grnd-ry'] ** 2)) ** (0.5)
asl.df['polar-rtheta'] = np.arctan2(asl.df['grnd-rx'], asl.df['grnd-ry'])
features_polar = ['polar-rr', 'polar-rtheta', 'polar-lr', 'polar-ltheta']
# TODO add features for left, right, x, y differences by one time step, i.e. the "delta" values discussed in the lecture
# Name these 'delta-rx', 'delta-ry', 'delta-lx', and 'delta-ly'
# Left delta values
asl.df['delta-lx'] = asl.df['left-x'].diff().fillna(0.0)
asl.df['delta-ly'] = asl.df['left-y'].diff().fillna(0.0)
# Right delta values
asl.df['delta-rx'] = asl.df['right-x'].diff().fillna(0.0)
asl.df['delta-ry'] = asl.df['right-y'].diff().fillna(0.0)
features_delta = ['delta-rx', 'delta-ry', 'delta-lx', 'delta-ly']
# TODO add features of your own design, which may be a combination of the above or something else
# Name these whatever you would like
# Normalized values using the feature scaling equation
# Speaker min and max values
df_min = asl.df.groupby('speaker').min()
df_max= asl.df.groupby('speaker').max()
# Features min and max values
asl.df['left-x-min'] = asl.df['speaker'].map(df_min['left-x'])
asl.df['left-y-min'] = asl.df['speaker'].map(df_min['left-y'])
asl.df['right-x-min'] = asl.df['speaker'].map(df_min['right-x'])
asl.df['right-y-min'] = asl.df['speaker'].map(df_min['right-y'])
asl.df['left-x-max'] = asl.df['speaker'].map(df_max['left-x'])
asl.df['left-y-max'] = asl.df['speaker'].map(df_max['left-y'])
asl.df['right-x-max'] = asl.df['speaker'].map(df_max['right-x'])
asl.df['right-y-max'] = asl.df['speaker'].map(df_max['right-y'])
# Feature scaling using the rescaling method: x' = x - min(x) / max(x) - min(x)
asl.df['resc-lx'] = (asl.df['left-x'] - asl.df['left-x-min']) / (asl.df['left-x-max'] - asl.df['left-x-min'])
asl.df['resc-ly'] = (asl.df['left-y'] - asl.df['left-y-min']) / (asl.df['left-y-max'] - asl.df['left-y-min'])
asl.df['resc-rx'] = (asl.df['right-x'] - asl.df['right-x-min']) / (asl.df['right-x-max'] - asl.df['right-x-min'])
asl.df['resc-ry'] = (asl.df['right-y'] - asl.df['right-y-min']) / (asl.df['right-y-max'] - asl.df['right-y-min'])
# TODO define a list named 'features_custom' for building the training set
features_custom = ['resc-lx', 'resc-ly', 'resc-rx', 'resc-ry']
Explanation: <a id='part1_submission'></a>
Features Implementation Submission
Implement four feature sets and answer the question that follows.
- normalized Cartesian coordinates
  - use mean and standard deviation statistics and the standard score equation to account for speakers with different heights and arm lengths
- polar coordinates
  - calculate polar coordinates with the Cartesian-to-polar equations
  - use the np.arctan2 function and swap the x and y axes to move the $0$ to $2\pi$ discontinuity to 12 o'clock instead of 3 o'clock; in other words, the normal break in radians value from $0$ to $2\pi$ occurs directly to the left of the speaker's nose, which may be in the signing area and interfere with results. By swapping the x and y axes, that discontinuity moves to directly above the speaker's head, an area not generally used in signing (a quick numerical check of this argument swap follows this list).
- delta difference
  - as described in Thad's lecture, use the difference in values between one frame and the next as features
  - the pandas diff and fillna methods will be helpful for this one
- custom features
  - these are your own design; combine techniques used above or come up with something else entirely. We look forward to seeing what you come up with!
  - some ideas to get you started:
    - normalize using a feature scaling equation
    - normalize the polar coordinates
    - add additional deltas
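A quick numerical check of the np.arctan2 argument swap described in the polar-coordinates item above (purely illustrative; the test points are arbitrary):
# With the swapped np.arctan2(x, y) order the +pi/-pi jump happens where the second argument (y)
# is negative, whereas the conventional np.arctan2(y, x) order is continuous at that same point.
print(np.arctan2(+0.001, -1.0), np.arctan2(-0.001, -1.0))  # approx +3.14 and -3.14: the discontinuity
print(np.arctan2(-1.0, +0.001), np.arctan2(-1.0, -0.001))  # approx -1.57 and -1.57: continuous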
End of explanation
import unittest
# import numpy as np
class TestFeatures(unittest.TestCase):
def test_features_ground(self):
sample = (asl.df.ix[98, 1][features_ground]).tolist()
self.assertEqual(sample, [9, 113, -12, 119])
def test_features_norm(self):
sample = (asl.df.ix[98, 1][features_norm]).tolist()
np.testing.assert_almost_equal(sample, [ 1.153, 1.663, -0.891, 0.742], 3)
def test_features_polar(self):
sample = (asl.df.ix[98,1][features_polar]).tolist()
np.testing.assert_almost_equal(sample, [113.3578, 0.0794, 119.603, -0.1005], 3)
def test_features_delta(self):
sample = (asl.df.ix[98, 0][features_delta]).tolist()
self.assertEqual(sample, [0, 0, 0, 0])
sample = (asl.df.ix[98, 18][features_delta]).tolist()
self.assertTrue(sample in [[-16, -5, -2, 4], [-14, -9, 0, 0]], "Sample value found was {}".format(sample))
suite = unittest.TestLoader().loadTestsFromModule(TestFeatures())
unittest.TextTestRunner().run(suite)
Explanation: Question 1: What custom features did you choose for the features_custom set and why?
Answer 1:
I chose to use feature rescaling because it is widely used, it is simple, and it helps to reduce the impact of noisy features in support vector machines, which are models that I have used before. Being simple also means it is computationally cheap.
<a id='part1_test'></a>
Features Unit Testing
Run the following unit tests as a sanity check on the defined "ground", "norm", "polar", and "delta"
feature sets. The test simply looks for some valid values but is not exhaustive. However, the project should not be submitted if these tests don't pass.
End of explanation
import warnings
from hmmlearn.hmm import GaussianHMM
def train_a_word(word, num_hidden_states, features):
warnings.filterwarnings("ignore", category=DeprecationWarning)
training = asl.build_training(features)
X, lengths = training.get_word_Xlengths(word)
model = GaussianHMM(n_components=num_hidden_states, n_iter=1000).fit(X, lengths)
logL = model.score(X, lengths)
return model, logL
demoword = 'BOOK'
model, logL = train_a_word(demoword, 3, features_ground)
print("Number of states trained in model for {} is {}".format(demoword, model.n_components))
print("logL = {}".format(logL))
Explanation: <a id='part2_tutorial'></a>
PART 2: Model Selection
Model Selection Tutorial
The objective of Model Selection is to tune the number of states for each word HMM prior to testing on unseen data. In this section you will explore three methods:
- Log likelihood using cross-validation folds (CV)
- Bayesian Information Criterion (BIC)
- Discriminative Information Criterion (DIC)
Train a single word
Now that we have built a training set with sequence data, we can "train" models for each word. As a simple starting example, we train a single word using Gaussian hidden Markov models (HMM). By using the fit method during training, the Baum-Welch Expectation-Maximization (EM) algorithm is invoked iteratively to find the best estimate for the model for the number of hidden states specified from a group of sample sequences. For this example, we assume the correct number of hidden states is 3, but that is just a guess. How do we know what the "best" number of states for training is? We will need to find some model selection technique to choose the best parameter.
End of explanation
def show_model_stats(word, model):
print("Number of states trained in model for {} is {}".format(word, model.n_components))
variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)])
for i in range(model.n_components): # for each hidden state
print("hidden state #{}".format(i))
print("mean = ", model.means_[i])
print("variance = ", variance[i])
print()
show_model_stats(demoword, model)
Explanation: The HMM model has been trained and information can be pulled from the model, including means and variances for each feature and hidden state. The log likelihood for any individual sample or group of samples can also be calculated with the score method.
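As a concrete, illustrative example of the score method mentioned above, one could score just the first 'BOOK' training sequence against the model trained earlier (this is only a sketch and reuses the same asl training object used throughout the notebook):
book_training = asl.build_training(features_ground)
book_X, book_lengths = book_training.get_word_Xlengths('BOOK')
print("logL of the first BOOK sequence =", model.score(book_X[:book_lengths[0]], [book_lengths[0]]))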
End of explanation
my_testword = 'CHOCOLATE'
model, logL = train_a_word(my_testword, 3, features_delta) # Experiment here with different parameters
show_model_stats(my_testword, model)
print("logL = {}".format(logL))
Explanation: Try it!
Experiment by changing the feature set, word, and/or num_hidden_states values in the next cell to see changes in values.
End of explanation
%matplotlib inline
%load_ext autoreload
%autoreload 2
import math
from matplotlib import (cm, pyplot as plt, mlab)
def visualize(word, model):
"""Visualize the input model for a particular word."""
variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)])
figures = []
for parm_idx in range(len(model.means_[0])):
xmin = int(min(model.means_[:,parm_idx]) - max(variance[:,parm_idx]))
xmax = int(max(model.means_[:,parm_idx]) + max(variance[:,parm_idx]))
fig, axs = plt.subplots(model.n_components, sharex=True, sharey=False)
colours = cm.rainbow(np.linspace(0, 1, model.n_components))
for i, (ax, colour) in enumerate(zip(axs, colours)):
x = np.linspace(xmin, xmax, 100)
mu = model.means_[i,parm_idx]
sigma = math.sqrt(np.diag(model.covars_[i])[parm_idx])
ax.plot(x, mlab.normpdf(x, mu, sigma), c=colour)
ax.set_title("{} feature {} hidden state #{}".format(word, parm_idx, i))
ax.grid(True)
figures.append(plt)
for p in figures:
p.show()
visualize(my_testword, model)
Explanation: Visualize the hidden states
We can plot the means and variances for each state and feature. Try varying the number of states trained for the HMM model and examine the variances. Are there some models that are "better" than others? How can you tell? We would like to hear what you think in the classroom online.
End of explanation
from my_model_selectors import SelectorConstant
training = asl.build_training(features_delta) # Experiment here with different feature sets defined in part 1
word = 'CHOCOLATE' # Experiment here with different words
model = SelectorConstant(training.get_all_sequences(), training.get_all_Xlengths(), word, n_constant=3).select()
print("Number of states trained in model for {} is {}".format(word, model.n_components))
Explanation: ModelSelector class
Review the ModelSelector class from the codebase found in the my_model_selectors.py module. It is designed to be a strategy pattern for choosing different model selectors. For the project submission in this section, subclass SelectorModel to implement the following model selectors. In other words, you will write your own classes/functions in the my_model_selectors.py module and run them from this notebook:
SelectorCV: Log likelihood with CV
SelectorBIC: BIC
SelectorDIC: DIC
You will train each word in the training set with a range of values for the number of hidden states, and then score these alternatives with the model selector, choosing the "best" according to each strategy. The simple case of training with a constant value for n_components can be called using the provided SelectorConstant subclass as follow:
End of explanation
from sklearn.model_selection import KFold
training = asl.build_training(features_custom) # Experiment here with different feature sets
word = 'VEGETABLE' # Experiment here with different words
word_sequences = training.get_word_sequences(word)
split_method = KFold()
for cv_train_idx, cv_test_idx in split_method.split(word_sequences):
print("Train fold indices:{} Test fold indices:{}".format(cv_train_idx, cv_test_idx)) # view indices of the folds
Explanation: Cross-validation folds
If we simply score the model with the Log Likelihood calculated from the feature sequences it has been trained on, we should expect that more complex models will have higher likelihoods. However, that doesn't tell us which would have a better likelihood score on unseen data. The model will likely be overfit as complexity is added. To estimate which topology model is better using only the training data, we can compare scores using cross-validation. One technique for cross-validation is to break the training set into "folds" and rotate which fold is left out of training. The "left out" fold is then scored. This gives us a proxy method of finding the best model to use on "unseen data". In the following example, a set of word sequences is broken into three folds using the scikit-learn KFold class object. When you implement SelectorCV, you will use this technique.
End of explanation
words_to_train = ['FISH', 'BOOK', 'VEGETABLE', 'FUTURE', 'JOHN']
import timeit
%load_ext autoreload
%autoreload 2
# TODO: Implement SelectorCV in my_model_selector.py
from my_model_selectors import SelectorCV
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorCV(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
# TODO: Implement SelectorBIC in module my_model_selectors.py
from my_model_selectors import SelectorBIC
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorBIC(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
# TODO: Implement SelectorDIC in module my_model_selectors.py
from my_model_selectors import SelectorDIC
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorDIC(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
Explanation: Tip: In order to run hmmlearn training using the X,lengths tuples on the new folds, subsets must be combined based on the indices given for the folds. A helper utility has been provided in the asl_utils module named combine_sequences for this purpose.
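A hedged sketch of how those pieces could fit together inside a CV-based selector follows; this is not the graded implementation in my_model_selectors.py, and it assumes combine_sequences(fold_indices, sequences) returns an (X, lengths) pair, as the tip above suggests:
from asl_utils import combine_sequences

def cv_log_likelihood(word_sequences, n_components, n_splits=3):
    # Average held-out log likelihood for one candidate number of hidden states.
    scores = []
    split_method = KFold(n_splits=n_splits)  # words with fewer than n_splits sequences need special handling
    for cv_train_idx, cv_test_idx in split_method.split(word_sequences):
        X_train, lengths_train = combine_sequences(cv_train_idx, word_sequences)
        X_test, lengths_test = combine_sequences(cv_test_idx, word_sequences)
        try:
            hmm_model = GaussianHMM(n_components=n_components, n_iter=1000).fit(X_train, lengths_train)
            scores.append(hmm_model.score(X_test, lengths_test))
        except Exception:
            continue  # skip folds that hmmlearn cannot fit or score
    return np.mean(scores) if scores else float("-inf")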
Scoring models with other criterion
Scoring model topologies with BIC balances fit and complexity within the training set for each word. In the BIC equation, a penalty term penalizes complexity to avoid overfitting, so that it is not necessary to also use cross-validation in the selection process. There are a number of references on the internet for this criterion. These slides include a formula you may find helpful for your implementation.
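A hedged sketch of the BIC score described above (lower is better); the parameter count here follows one common convention for a diagonal-covariance Gaussian HMM and may differ from the one used in my_model_selectors.py:
def bic_score(logL, n_components, n_features, n_datapoints):
    # free parameters: transition probabilities + initial probabilities + means + diagonal variances
    p = n_components ** 2 + 2 * n_components * n_features - 1
    return -2.0 * logL + p * math.log(n_datapoints)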
The advantages of scoring model topologies with DIC over BIC are presented by Alain Biem in this reference (also found here). DIC scores the discriminant ability of a training set for one word against competing words. Instead of a penalty term for complexity, it provides a penalty if model likelihoods for non-matching words are too similar to model likelihoods for the correct word in the word set.
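And a hedged sketch of the DIC idea (higher is better): reward the likelihood of the target word and penalise the average likelihood the same model assigns to the competing words. This is only an illustration; the graded version lives in my_model_selectors.py:
def dic_score(model, this_word, all_Xlengths):
    X, lengths = all_Xlengths[this_word]
    own_logL = model.score(X, lengths)
    other_logLs = [model.score(*all_Xlengths[w]) for w in all_Xlengths if w != this_word]
    return own_logL - np.mean(other_logLs)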
<a id='part2_submission'></a>
Model Selection Implementation Submission
Implement SelectorCV, SelectorBIC, and SelectorDIC classes in the my_model_selectors.py module. Run the selectors on the following five words. Then answer the questions about your results.
Tip: The hmmlearn library may not be able to train or score all models. Implement try/except constructs as necessary to eliminate non-viable models from consideration.
End of explanation
from asl_test_model_selectors import TestSelectors
suite = unittest.TestLoader().loadTestsFromModule(TestSelectors())
unittest.TextTestRunner().run(suite)
Explanation: Question 2: Compare and contrast the possible advantages and disadvantages of the various model selectors implemented.
Answer 2:
Overall, SelectorCV and SelectorBIC offered much better results than SelectorDIC. Although running time and the number of selected states were very similar between SelectorCV and SelectorBIC, I'd choose SelectorCV because some tests performed on other computers showed fewer states than BIC, and in several forum discussions many other students agreed that SelectorCV performs better than the other selectors.
<a id='part2_test'></a>
Model Selector Unit Testing
Run the following unit tests as a sanity check on the implemented model selectors. The test simply looks for valid interfaces but is not exhaustive. However, the project should not be submitted if these tests don't pass.
End of explanation
# autoreload for automatically reloading changes made in my_model_selectors and my_recognizer
from my_model_selectors import SelectorConstant
def train_all_words(features, model_selector):
training = asl.build_training(features) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
model_dict = {}
for word in training.words:
model = model_selector(sequences, Xlengths, word,
n_constant=3).select()
model_dict[word]=model
return model_dict
models = train_all_words(features_ground, SelectorConstant)
print("Number of word models returned = {}".format(len(models)))
Explanation: <a id='part3_tutorial'></a>
PART 3: Recognizer
The objective of this section is to "put it all together". Using the four feature sets created and the three model selectors, you will experiment with the models and present your results. Instead of training only five specific words as in the previous section, train the entire set with a feature set and model selector strategy.
Recognizer Tutorial
Train the full training set
The following example trains the entire set with the example features_ground and SelectorConstant features and model selector. Use this pattern for your experimentation and final submission cells.
End of explanation
test_set = asl.build_test(features_ground)
print("Number of test set items: {}".format(test_set.num_items))
print("Number of test set sentences: {}".format(len(test_set.sentences_index)))
Explanation: Load the test set
The build_test method in ASLdb is similar to the build_training method already presented, but there are a few differences:
- the object is type SinglesData
- the internal dictionary keys are the index of the test word rather than the word itself
- the getter methods are get_all_sequences, get_all_Xlengths, get_item_sequences and get_item_Xlengths
End of explanation
# TODO implement the recognize method in my_recognizer
from my_recognizer import recognize
from asl_utils import show_errors
# TODO Choose a feature set and model selector
features = features_norm # change as needed
model_selector = SelectorCV # change as needed
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
# TODO Choose a feature set and model selector
features = features_norm # change as needed
model_selector = SelectorBIC # change as needed
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
# TODO Choose a feature set and model selector
features = features_norm # change as needed
model_selector = SelectorDIC # change as needed
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
Explanation: <a id='part3_submission'></a>
Recognizer Implementation Submission
For the final project submission, students must implement a recognizer following guidance in the my_recognizer.py module. Experiment with the four feature sets and the three model selection methods (that's 12 possible combinations). You can add and remove cells for experimentation or run the recognizers locally in some other way during your experiments, but retain the results for your discussion. For submission, you will provide code cells of only three interesting combinations for your discussion (see questions below). At least one of these should produce a word error rate of less than 60%, i.e. WER < 0.60 .
Tip: The hmmlearn library may not be able to train or score all models. Implement try/except constructs as necessary to eliminate non-viable models from consideration.
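For orientation, here is a hedged sketch of what such a recognizer might look like (the actual my_recognizer.py implementation may differ); it scores every test item against every trained word model and keeps the best-scoring word:
def recognize_sketch(models, test_set):
    probabilities, guesses = [], []
    for word_id in range(test_set.num_items):
        X, lengths = test_set.get_item_Xlengths(word_id)
        word_logL = {}
        for word, model in models.items():
            try:
                word_logL[word] = model.score(X, lengths)
            except Exception:
                word_logL[word] = float("-inf")  # model could not score this item
        probabilities.append(word_logL)
        guesses.append(max(word_logL, key=word_logL.get))
    return probabilities, guesses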
End of explanation
from asl_test_recognizer import TestRecognize
suite = unittest.TestLoader().loadTestsFromModule(TestRecognize())
unittest.TextTestRunner().run(suite)
Explanation: Question 3: Summarize the error results from three combinations of features and model selectors. What was the "best" combination and why? What additional information might we use to improve our WER? For more insight on improving WER, take a look at the introduction to Part 4.
Answer 3:
After testing all combinations I noticed that using Features Polar gives better results. The best combination, I think, would be Features Polar with SelectorCV; Features Polar with SelectorBIC could also be a good choice.
This challenge reminds me of the short story "The Gold-Bug" by Edgar Allan Poe. In that story the protagonists find a simple substitution cipher that can be solved using letter frequencies. With this in mind, I would suggest doing something similar by computing the probability of certain words appearing in a phrase or sequence (I'm still not sure of the current method used to achieve this in AI).
- Features Custom: WER = 0.6741573033707865 (identical for SelectorCV, SelectorBIC, and SelectorDIC)
- Features Ground: WER = 0.6685393258426966 (identical for all three selectors)
- Features Polar: WER = 0.6179775280898876 (identical for all three selectors)
- Features Delta: WER = 0.6404494382022472 (identical for all three selectors)
- Features Norm: WER = 0.6235955056179775 (identical for all three selectors)
<a id='part3_test'></a>
Recognizer Unit Tests
Run the following unit tests as a sanity check on the defined recognizer. The test simply looks for some valid values but is not exhaustive. However, the project should not be submitted if these tests don't pass.
End of explanation
# create a DataFrame of log likelihoods for the test word items
df_probs = pd.DataFrame(data=probabilities)
df_probs.head()
Explanation: <a id='part4_info'></a>
PART 4: (OPTIONAL) Improve the WER with Language Models
We've squeezed just about as much as we can out of the model and still only get about 50% of the words right! Surely we can do better than that. Probability to the rescue again in the form of statistical language models (SLM). The basic idea is that each word has some probability of occurrence within the set, and some probability that it is adjacent to specific other words. We can use that additional information to make better choices.
Additional reading and resources
Introduction to N-grams (Stanford Jurafsky slides)
Speech Recognition Techniques for a Sign Language Recognition System, Philippe Dreuw et al.; see the improved results of applying an LM to this data!
SLM data for this ASL dataset
Optional challenge
The recognizer you implemented in Part 3 is equivalent to a "0-gram" SLM. Improve the WER with the SLM data provided with the data set in the link above using "1-gram", "2-gram", and/or "3-gram" statistics. The probabilities data you've already calculated will be useful and can be turned into a pandas DataFrame if desired (see next cell).
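As a hedged sketch of that idea, assuming a hypothetical dictionary unigram_logprob built from the SLM files linked above, each guess could be re-picked by combining the HMM log likelihood with a weighted language-model term:
def rescore_with_unigram(probabilities, unigram_logprob, alpha=20.0):
    # alpha is the language-model weight and would need to be tuned on held-out data
    rescored = []
    for word_logL in probabilities:
        rescored.append(max(word_logL,
                            key=lambda w: word_logL[w] + alpha * unigram_logprob.get(w, float("-inf"))))
    return rescored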
Good luck! Share your results with the class!
End of explanation |
1,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wi-Fi Fingerprinting Experiments
Import modules and set up the environment
Step1: Helper Functions
Step2: Load the model classes
A class responsible for loading a JSON file (or all the JSON files in a given directory) into a Python dictionary
Step3: A class that takes a set of Python dictionaries containing Wi-Fi logging data loaded from JSON files collected by the YanuX Scavenger Android application
Step4: Initialize Input & Output Data Directories and other parameters
Step5: Create the output directory if it doesn't exist
Step6: Load Data from the Input Data Directory
Load all files from the data folder.
The logs currently placed there were collected using the Yanux Scavenger Android application on April 28<sup>th</sup>, 2016 using an LG Nexus 5 running Android Marshmallow 6.0.1
Step7: Wi-Fi Readings
Number of Recorded Samples per Location
Step8: Store the data into a Pandas Dataframe, in which each Wi-Fi result reading is represented by a single line
Step9: Identify the unique MAC Addresses present in the recorded data. Each one represents a single Wi-Fi Access Point.
Step10: Similarly, store the data into a Pandas Dataframe in which each line represents a single sampling cycle with n different readings for each of the Access Points within range. Those readings are stored as columns along each sample.
Step11: Data Set Statistics
Number of Results
Step12: Number of Unique Mac Addresses
Step13: How often has each Access Point been detected
Step14: How many Wi-Fi results were gathered at each location
Step15: How many APs were detected at each location
Step16: The coordinates of the points where data was captured
Step17: Signal Strength Distribution
Step18: Set a train and test scenario to be used by default when testing.
Step19: Playground
Base Example
Step20: # Neighbors & Distance Weights
Step21: Metric
Just test a few different distance metrics to assess if there is a better alternative than the plain old Euclidean distance. The tested metrics include
Step22: Feature Scaling
Test different data scaling and normalization approaches to find out if any of them provides a clear advantage over the others.
Step23: NaN filler values
Test which is the signal strength value that should be considered for Access Points that are currently out of range. This is needed as part of the process of computing the distance/similarity between different fingerprints.
Step24: Impact of orientation in the results
Step25: Impact of the spacing between reference points in the results
Step26: Impact of the amount of available data in the results
Step27: Save all the data that was collected into an Excel file
Step28: Grid Search - Automatically searching for the best estimator parameters | Python Code:
# Python Standard Library
import getopt
import os
import sys
import math
import time
import collections
import random
# IPython
from IPython.display import display
# pandas
import pandas as pd
pd.set_option("display.max_rows", 10000)
pd.set_option("display.max_columns", 10000)
# Matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from matplotlib.ticker import MultipleLocator
# seaborn
import seaborn as sns
sns.set_style("whitegrid")
sns.despine()
# NumPy
import numpy as np
# SciPy
import scipy as sp
from scipy.stats import gaussian_kde
# StatsModels
import statsmodels.api as sm
# scikit-learn
import sklearn
from sklearn import metrics
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
Explanation: Wi-Fi Fingerprinting Experiments
Import modules and set up the environment
End of explanation
def experiment_plots(results, save_to=None, figsize=(8, 8)):
fig, axarr = plt.subplots(2, 1, figsize=figsize)
for key, result in results.items():
max_error = math.ceil(result["error"].max())
kde = gaussian_kde(result["error"].values)
X_plot=np.linspace(0, max_error, 1000)
axarr[0].plot(X_plot, kde.evaluate(X_plot), "-", label=key)
axarr[0].set_xlabel("Error (e) in meters (m)")
axarr[0].set_ylabel(r"$f_X(e)$")
axarr[0].xaxis.set_major_locator(MultipleLocator(0.5))
axarr[0].set_xlim(0, result["error"].quantile(q=0.9975))
axarr[0].legend()
for key, result in results.items():
ecdf = sm.distributions.ECDF(result["error"])
x = np.linspace(min(result["error"]), max(result["error"]))
y = ecdf(x)
axarr[1].plot(x, y, label=key)
axarr[1].set_xlabel("Error (e) in meters (m)")
axarr[1].set_ylabel(r"$F_X(e)$")
axarr[1].xaxis.set_major_locator(MultipleLocator(0.5))
axarr[1].yaxis.set_major_locator(MultipleLocator(0.1))
axarr[1].set_xlim(0, result["error"].quantile(q=0.9975))
axarr[1].set_ylim(0)
axarr[1].legend()
fig.tight_layout()
if save_to is not None:
fig.savefig(output_data_directory+"/"+save_to, dpi=300)
plt.show()
def experiment_statistics(result):
statistics = collections.OrderedDict([
("mae", result["error"].abs().mean()),
("rmse", np.sqrt((result["error"]**2).mean())),
("sd", result["error"].std()),
("p50", result["error"].quantile(q=0.50)),
("p75", result["error"].quantile(q=0.75)),
("p90", result["error"].quantile(q=0.90)),
("p95", result["error"].quantile(q=0.95)),
("min", result["error"].min()),
("max", result["error"].max()),
])
return statistics
def knn_experiment(data, test_data, train_cols, coord_cols,
scaler=None, n_neighbors=5, weights="uniform",
algorithm="auto", leaf_size=30, p=2, metric="minkowski",
metric_params=None, n_jobs=1):
result = None
knn = KNeighborsRegressor(n_neighbors=n_neighbors, weights=weights, algorithm=algorithm,
leaf_size=leaf_size, p=p, metric=metric,
metric_params=metric_params, n_jobs=n_jobs)
if scaler is not None:
estimator = make_pipeline(scaler, knn)
else:
estimator = knn
locations = data.groupby(coord_cols).indices.keys()
for coords in locations:
train_data = data[(data[coord_cols[0]] != coords[0]) |
(data[coord_cols[1]] != coords[1])].reset_index(drop=True)
target_values = test_data[(test_data[coord_cols[0]] == coords[0]) &
(test_data[coord_cols[1]] == coords[1])].reset_index(drop=True)
estimator.fit(train_data[train_cols], train_data[coord_cols])
predictions = pd.DataFrame(estimator.predict(target_values[train_cols]), columns=coord_cols)
curr_result = target_values[coord_cols].join(predictions, rsuffix="_predicted")
error = pd.DataFrame((predictions[coord_cols] - curr_result[coord_cols]).apply(np.linalg.norm, axis=1),
columns=["error"])
curr_result = pd.concat([curr_result, error], axis=1)
result = pd.concat([result, curr_result])
return result
def knn_experiment_cv(data, cross_validation, train_cols, coord_cols,
scaler=None, n_neighbors=5, weights='uniform',
algorithm="auto", leaf_size=30, p=2, metric="minkowski",
metric_params=None, n_jobs=1):
result = None
knn = KNeighborsRegressor(n_neighbors=n_neighbors, weights=weights, algorithm=algorithm,
leaf_size=leaf_size, p=p, metric=metric,
metric_params=metric_params, n_jobs=n_jobs)
if scaler is not None:
estimator = make_pipeline(scaler, knn)
else:
estimator = knn
X = data[train_cols]
y = data[coord_cols]
predictions = pd.DataFrame(cross_val_predict(estimator, X, y, cv=cross_validation), columns=coord_cols)
result = y.join(predictions, rsuffix="_predicted")
error = pd.DataFrame((predictions[coord_cols] - result[coord_cols]).apply(np.linalg.norm, axis=1), columns=["error"])
result = pd.concat([result, error], axis=1)
return result
Explanation: Helper Functions
End of explanation
from yanux.cruncher.model.loader import JsonLoader
Explanation: Load the model classes
A class responsible for loading a JSON file (or all the JSON files in a given directory) into a Python dictionary
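For readers without access to the yanux.cruncher package, a minimal sketch of what such a loader could look like (the real JsonLoader may behave differently) is:
import glob, json

class SimpleJsonLoader:
    def __init__(self, path):
        # Load a single JSON file, or every *.json file inside a directory, into self.json_data.
        self.json_data = []
        paths = [path] if os.path.isfile(path) else sorted(glob.glob(os.path.join(path, "*.json")))
        for p in paths:
            with open(p) as f:
                self.json_data.append(json.load(f))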
End of explanation
from yanux.cruncher.model.wifi import WifiLogs
Explanation: A class that takes a set of Python dictionaries containing Wi-Fi logging data loaded from JSON files collected by the YanuX Scavenger Android application
End of explanation
input_data_directory = "data"
output_data_directory = "out"
statistics_excel_writer = pd.ExcelWriter(output_data_directory+"/statistics.xlsx")
Explanation: Initialize Input & Output Data Directories and other parameters
End of explanation
if not os.path.exists(output_data_directory):
os.makedirs(output_data_directory)
Explanation: Create the output directory if it doesn't exist
End of explanation
json_loader = JsonLoader(input_data_directory+"/wifi-fingerprints")
wifi_logs = WifiLogs(json_loader.json_data)
Explanation: Load Data from the Input Data Directory
Load all files from the data folder.
The logs currently placed there were collected using the Yanux Scavenger Android application on April 28<sup>th</sup>, 2016 using an LG Nexus 5 running Android Marshmallow 6.0.1
End of explanation
num_samples_per_location = int(len(wifi_logs.wifi_samples()) / len(wifi_logs.locations))
num_samples_per_location
Explanation: Wi-Fi Readings
Number of Recorded Samples per Location
End of explanation
wifi_results_columns = ["filename", "place", "floor", "x", "y", "orientation", "sample_id", "mac_address",
"timestamp", "signal_strength"]
wifi_results = pd.DataFrame(wifi_logs.wifi_results(), columns=wifi_results_columns)
wifi_results.to_csv(output_data_directory + "/wifi_results.csv")
Explanation: Store the data into a Pandas Dataframe, in which each Wi-Fi result reading is represented by a single line
End of explanation
mac_addresses = wifi_results.mac_address.unique()
Explanation: Identify the unique MAC Addresses present in the recorded data. Each one represents a single Wi-Fi Access Point.
End of explanation
wifi_samples_columns = ["filename", "place", "floor", "x", "y", "orientation", "sample_id", "timestamp"]
wifi_samples_columns.extend(mac_addresses)
wifi_samples = pd.DataFrame(wifi_logs.wifi_samples(), columns=wifi_samples_columns)
wifi_samples = wifi_samples.sort_values(["filename", "x", "y", "floor", "sample_id"]).reset_index(drop=True)
wifi_samples.to_csv(output_data_directory + "/wifi_samples.csv")
Explanation: Similarly, store the data into a Pandas Dataframe in which each line represents a single sampling cycle with n different readings for each of the Access Points within range. Those readings are stored as columns along each sample.
End of explanation
len(wifi_results)
Explanation: Data Set Statistics
Number of Results
End of explanation
len(wifi_results.mac_address.unique())
Explanation: Number of Unique Mac Addresses
End of explanation
wifi_results_mac_address_group = wifi_results.groupby("mac_address")
wifi_results_mac_address_group.size().plot(kind="bar")
wifi_results_mac_address_group.size()
wifi_results_mac_address_group.size().mean()
Explanation: How often has each Access Point been detected
End of explanation
wifi_results_coord_group = wifi_results.groupby(["x", "y"])
wifi_results_coord_group.size().plot(kind="bar")
wifi_results_coord_group.size()
wifi_results_coord_group.size().describe()
Explanation: How many Wi-Fi results were gathered at each location
End of explanation
wifi_ap_per_location = wifi_samples.groupby(["x","y"]).min()[wifi_results_mac_address_group.size().keys()].count(axis=1)
wifi_ap_per_location.plot(kind="bar")
wifi_ap_per_location
wifi_ap_per_location.describe()
Explanation: How many APs were detected at each location
End of explanation
coords = wifi_results[["x","y"]].drop_duplicates().sort_values(by=["x","y"]).reset_index(drop=True)
coords_plot_size = (min(coords["x"].min(),coords["y"].min()), max(coords["x"].max(),coords["y"].max()))
#TODO: If I end up using it in the document, then I should refactor the plot to use matplotlib directly to tweak a few things.
coords.plot(figsize=(16,5), x="x",y="y", style="o", grid=True, legend=False,
xlim=coords_plot_size, ylim=coords_plot_size,
xticks=np.arange(coords_plot_size[0]-1, coords_plot_size[1]+1, 1),
yticks=np.arange(coords_plot_size[0]-1, coords_plot_size[1]+1, 1)).axis('equal')
Explanation: The coordinates of the points where data was captured
End of explanation
wifi_results.hist(column="signal_strength")
Explanation: Signal Strength Distribution
End of explanation
train_cols = mac_addresses
coord_cols = ["x","y"]
default_data_scenario = wifi_samples.copy()
default_data_scenario_groups = default_data_scenario["x"].map(str)+","+default_data_scenario["y"].map(str)
Explanation: Set a train and test scenario to be used by default when testing.
End of explanation
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
curr_data = default_data_scenario.fillna(nan_filler)
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
curr_statistics = experiment_statistics(curr_result)
curr_result.to_csv(output_data_directory+"/results-base.csv")
statistics_table = pd.DataFrame([curr_statistics], columns=list(curr_statistics.keys()))
statistics_table.to_csv(output_data_directory+"/statistics-base.csv")
statistics_table.to_excel(statistics_excel_writer, "base")
#show table
display(statistics_table)
#plots
experiment_plots({'Base Example':curr_result})
Explanation: Playground
Base Example
End of explanation
n_neighbors=np.arange(1,31,1)
weights=["uniform", "distance"]
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
curr_data = default_data_scenario.fillna(nan_filler)
# Just a statistics accumulator
statistics = []
for k in n_neighbors:
for w in weights:
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=k,
weights=w,
metric=metric)
curr_statistics = experiment_statistics(curr_result)
curr_statistics["k"] = k
curr_statistics["weights"] = w
statistics.append(curr_statistics)
cols = ["k","weights"] + list(curr_statistics.keys())[:-2]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-neighbors-weights.csv")
statistics_table.to_excel(statistics_excel_writer, "neighbors-weights")
#show table
display(statistics_table.sort_values(cols[3:]))
# Plotting Error statistics
fig, ax = plt.subplots(figsize=(8, 5))
index = n_neighbors
ax.plot(index, statistics_table[statistics_table["weights"] == "uniform"]["mae"].tolist(),
color="b", ls="-", label="Uniform (MAE)")
ax.plot(index, statistics_table[statistics_table["weights"] == "distance"]["mae"].tolist(),
color="r", ls="-", label="Distance (MAE)")
ax.plot(index, statistics_table[statistics_table["weights"] == "uniform"]["rmse"].tolist(),
color="b", ls="--", label="Uniform (RMSE)")
ax.plot(index, statistics_table[statistics_table["weights"] == "distance"]["rmse"].tolist(),
color="r", ls="--", label="Distance (RMSE)")
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_major_locator(MultipleLocator(0.05))
ax.set_xlabel("Number of Neighbours (k)")
ax.set_ylabel("Error (e) in meters (m)")
plt.legend()
plt.tight_layout()
plt.savefig(output_data_directory+"/plot-neighbors_weights.pdf", dpi=300)
plt.show()
Explanation: # Neighbors & Distance Weights
End of explanation
n_neighbors=15
weights="distance"
distance_statistics=["euclidean", "manhattan", "canberra", "braycurtis"]
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
curr_data = default_data_scenario.fillna(nan_filler)
# Results and statistics accumulators
results = {}
statistics = []
for metric in distance_statistics:
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[metric] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["metric"] = metric
statistics.append(curr_statistics)
cols = ["metric"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-metric.csv")
statistics_table.to_excel(statistics_excel_writer, "metric")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-metric.pdf")
Explanation: Metric
Just test a few different distance metrics to assess whether there is a better alternative to the plain old Euclidean distance (a quick numerical check on two toy fingerprints follows the list). The tested metrics include:
- Euclidean Distance
  - sqrt(sum((x - y)^2))
- Manhattan Distance
  - sum(|x - y|)
- Chebyshev Distance
  - max(|x - y|)
- Hamming Distance
  - N_unequal(x, y) / N_tot
- Canberra Distance
  - sum(|x - y| / (|x| + |y|))
- Bray-Curtis dissimilarity
  - sum(|x - y|) / sum(|x + y|)
Note that the code cell above only evaluates four of these: the Euclidean, Manhattan, Canberra, and Bray-Curtis metrics.
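A quick numerical sanity check of these metrics on two toy fingerprints (illustrative only; the RSSI values are made up):
from scipy.spatial import distance
u, v = np.array([-60.0, -70.0, -90.0]), np.array([-65.0, -72.0, -88.0])
for name, fn in [("euclidean", distance.euclidean), ("manhattan", distance.cityblock),
                 ("canberra", distance.canberra), ("braycurtis", distance.braycurtis)]:
    print(name, fn(u, v))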
End of explanation
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler= default_data_scenario[mac_addresses].min().min()*1.001
cross_validation = LeaveOneGroupOut()
scalers = {"No Scaling": None,
"Rescaling": preprocessing.MinMaxScaler(),
"Standardization": preprocessing.StandardScaler()}
# Results and statistics accumulators
results = {}
statistics = []
for scaler_name, scaler in scalers.items():
curr_data = default_data_scenario.fillna(nan_filler)
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[scaler_name] = curr_result
curr_statistics = experiment_statistics(results[scaler_name])
curr_statistics["scaler"] = scaler_name
statistics.append(curr_statistics)
cols = ["scaler"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-feature_scaling.csv")
statistics_table.to_excel(statistics_excel_writer, "feature_scaling")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-feature_scaling.pdf")
Explanation: Feature Scaling
Test different data scaling and normalization approaches to find out if any of them provides a clear advantage over the others.
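A tiny illustration of the two scalers compared above on a toy RSSI column (the values are made up):
toy = np.array([[-40.0], [-60.0], [-80.0]])
print(preprocessing.MinMaxScaler().fit_transform(toy).ravel())   # rescaling to the [0, 1] range
print(preprocessing.StandardScaler().fit_transform(toy).ravel()) # zero mean, unit variance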
End of explanation
n_neighbors=15
weights="distance"
metric="braycurtis"
min_rssi_value = default_data_scenario[mac_addresses].min().min()
nan_fillers = [min_rssi_value,min_rssi_value*1.001,min_rssi_value*1.010,min_rssi_value*1.100,min_rssi_value*1.500]
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
# Results and statistics accumulators
results = {}
statistics = []
for nf in nan_fillers:
curr_data = default_data_scenario.fillna(nf)
curr_result = knn_experiment_cv(curr_data,
cross_validation.split(curr_data[mac_addresses],
curr_data[coord_cols],
groups=default_data_scenario_groups),
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[nf] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["nan_filler"] = nf
statistics.append(curr_statistics)
cols = ["nan_filler"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-nan_filler.csv")
statistics_table.to_excel(statistics_excel_writer, "nan_filler")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-nan_filler.pdf")
Explanation: NaN filler values
Test which is the signal strength value that should be considered for Access Points that are currently out of range. This is needed as part of the process of computing the distance/similarity between different fingerprints.
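A quick illustration of how the candidate filler values used above are derived: multiplying a negative dBm floor by a factor greater than one pushes it further below the weakest observed signal (the -95.0 here is a made-up example):
min_rssi = -95.0
print([round(min_rssi * factor, 3) for factor in (1.0, 1.001, 1.010, 1.100, 1.500)])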
End of explanation
filename_prefixes = ["left-to-right-point", "right-to-left-point"]
filename_prefix_data_scenarios = {}
#filename_prefix_data_scenarios["all"] = default_data_scenario
for filename_prefix in filename_prefixes:
filename_prefix_data_scenarios[filename_prefix] = default_data_scenario[wifi_samples["filename"].str.startswith(filename_prefix)].reset_index(drop=True)
filename_prefix_test_data_scenarios = {}
filename_prefix_test_data_scenarios["all"] = default_data_scenario
for filename_prefix in filename_prefixes:
filename_prefix_test_data_scenarios[filename_prefix] = default_data_scenario[wifi_samples["filename"].str.startswith(filename_prefix)].reset_index(drop=True)
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
# Results and statistics accumulators
results = {}
statistics = []
for train_data_keys, train_data in filename_prefix_data_scenarios.items():
for test_data_keys, test_data in filename_prefix_test_data_scenarios.items():
curr_data = train_data.fillna(nan_filler)
curr_test_data = test_data.fillna(nan_filler)
curr_result = knn_experiment(curr_data,
curr_test_data,
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
label = "Train: "+train_data_keys+" Test: "+test_data_keys
results[label] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["orientation"] = label
statistics.append(curr_statistics)
cols = ["orientation"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-orientation.csv")
statistics_table.to_excel(statistics_excel_writer, "orientation")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-orientation.pdf")
Explanation: Impact of orientation in the results
End of explanation
subset_reference_points_scenarios = {}
coords_indices = default_data_scenario.groupby(coord_cols).indices
odd_coords_keys = list(coords_indices.keys())[0::2]
odd_ids = []
for key in odd_coords_keys:
odd_ids.extend(coords_indices[key])
even_coords_keys = list(coords_indices.keys())[1::2]
even_ids = []
for key in even_coords_keys:
even_ids.extend(coords_indices[key])
subset_reference_points_scenarios["odd"] = default_data_scenario.loc[odd_ids].reset_index(drop=True)
subset_reference_points_scenarios["even"] = default_data_scenario.loc[even_ids].reset_index(drop=True)
subset_reference_points_scenarios["all"] = default_data_scenario
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
# Results and statistics accumulators
results = {}
statistics = []
for train_data_keys, train_data in subset_reference_points_scenarios.items():
curr_data = train_data.fillna(nan_filler)
curr_test_data = default_data_scenario.fillna(nan_filler)
curr_result = knn_experiment(curr_data,
curr_test_data,
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric)
results[train_data_keys] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["reference_points_spacing"] = train_data_keys
statistics.append(curr_statistics)
cols = ["reference_points_spacing"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-reference_points_spacing.csv")
statistics_table.to_excel(statistics_excel_writer, "reference_points_spacing")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-reference_points_spacing.pdf")
Explanation: Impact of the spacing between reference points in the results
End of explanation
n_neighbors=15
weights="distance"
metric="braycurtis"
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
scaler = preprocessing.StandardScaler()
partial_data = [0.9, 0.7, 0.5, 0.3, 0.1]
repetitions = 5
train_data = default_data_scenario[mac_addresses].copy()
target_values = default_data_scenario[coord_cols].copy()
target_values["label"] = default_data_scenario["x"].map(str) + "," + default_data_scenario["y"].map(str)+ "," + default_data_scenario["filename"].map(str)
# Results and statistics accumulators
results = {}
statistics = []
for partial in partial_data:
curr_result = pd.DataFrame()
for repetition in range(repetitions):
X_train, X_test, y_train, y_test = train_test_split(train_data,
target_values,
test_size=1-partial,
stratify=target_values["label"].values)
#train data
train_split_data = pd.concat([y_train, X_train], axis=1).reset_index(drop=True)
#test data
#test_split_data = pd.concat([y_test, X_test], axis=1).reset_index(drop=True)
test_split_data = default_data_scenario
curr_data = train_split_data.fillna(nan_filler)
curr_test_data = test_split_data.fillna(nan_filler)
curr_result = curr_result.append(knn_experiment(curr_data,
curr_test_data,
mac_addresses,
coord_cols,
scaler=scaler,
algorithm="brute",
n_neighbors=n_neighbors,
weights=weights,
metric=metric), ignore_index=True)
results[partial] = curr_result
curr_statistics = experiment_statistics(curr_result)
curr_statistics["partial_data"] = partial
statistics.append(curr_statistics)
cols = ["partial_data"] + list(curr_statistics.keys())[:-1]
statistics_table = pd.DataFrame(statistics, columns=cols)
statistics_table.to_csv(output_data_directory + "/statistics-partial_data.csv")
statistics_table.to_excel(statistics_excel_writer, "partial_data")
#show table
display(statistics_table.sort_values(cols[2:]))
#plots
experiment_plots(results, "plot-partial_data.pdf")
Explanation: Impact of the amount of available data in the results
End of explanation
statistics_excel_writer.save()
Explanation: Save all the data that was collected into an Excel file
End of explanation
k_neighbors_values = range(1,31,1)
weights_values = [
"uniform",
"distance"
]
metric_values = [
"euclidean",
"manhattan",
"canberra",
"braycurtis"
]
algorithm_values = ["brute"]
nan_filler = default_data_scenario[mac_addresses].min().min()*1.001
curr_data = default_data_scenario.fillna(nan_filler)
param_grid = {
"kneighborsregressor__n_neighbors": list(k_neighbors_values),
"kneighborsregressor__weights": weights_values,
"kneighborsregressor__metric": metric_values,
"kneighborsregressor__algorithm": algorithm_values,
}
scaler = preprocessing.StandardScaler()
cross_validation = LeaveOneGroupOut()
estimator = make_pipeline(preprocessing.StandardScaler(), KNeighborsRegressor())
grid = GridSearchCV(estimator,
param_grid=param_grid,
cv=cross_validation,
n_jobs=-1,
scoring=sklearn.metrics.make_scorer(sklearn.metrics.mean_squared_error,
greater_is_better=False,
multioutput="uniform_average"))
grid.fit(curr_data[mac_addresses], curr_data[coord_cols], groups=default_data_scenario_groups)
print("Best parameters set found on development set:")
print(grid.best_params_)
print("Grid scores on development set:")
gridcv_results = pd.DataFrame(grid.cv_results_)
gridcv_results[['mean_test_score', 'std_test_score', 'params']]
Explanation: Grid Search - Automatically searching for the best estimator parameters
End of explanation |
1,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a ConvNet PyTorch
In this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.
Step2: What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks
Step3: For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
Step4: Example Model
Some assorted tidbits
Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.
We'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where
Step5: The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer one after the other.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
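A quick arithmetic check, assuming the example uses a single Conv2d(3, 32, kernel_size=7, stride=2) layer on 32x32 CIFAR-10 images with no padding:
out_hw = (32 - 7) // 2 + 1           # spatial output size of the convolution: 13
print(out_hw, 32 * out_hw * out_hw)  # 13 and 5408 flattened features; 10 is the number of classes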
Step6: PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note
Step7: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes)
Step8: GPU!
Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.
If this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.
Step9: Run the following cell to evaluate the performance of the forward pass running on the CPU
Step10: ... and now the GPU
Step11: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors
Step12: Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model
Step13: Check the accuracy of the model.
Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.
You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.
But don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.
Step14: Don't forget the validation set!
And note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value.
Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >=70% accuracy on the CIFAR-10 validation set. You can use the check_accuracy and train functions from above.
Things you should try
Step15: Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Tell us here!
Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy. | Python Code:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import torchvision.transforms as T
import numpy as np
import timeit
import os
os.chdir(os.getcwd() + '/..')
Explanation: Training a ConvNet PyTorch
In this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.
End of explanation
class ChunkSampler(sampler.Sampler):
"""Samples elements sequentially from some offset.
Arguments:
num_samples: # of desired datapoints
start: offset where we should start selecting from
"""
def __init__(self, num_samples, start = 0):
self.num_samples = num_samples
self.start = start
def __iter__(self):
return iter(range(self.start, self.start + self.num_samples))
def __len__(self):
return self.num_samples
NUM_TRAIN = 49000
NUM_VAL = 1000
cifar10_train = dset.CIFAR10('datasets', train=True, download=True,
transform=T.ToTensor())
loader_train = DataLoader(cifar10_train, batch_size=64, sampler=ChunkSampler(NUM_TRAIN, 0))
cifar10_val = dset.CIFAR10('datasets', train=True, download=True,
transform=T.ToTensor())
loader_val = DataLoader(cifar10_val, batch_size=64, sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))
cifar10_test = dset.CIFAR10('datasets', train=False, download=True,
transform=T.ToTensor())
loader_test = DataLoader(cifar10_test, batch_size=64)
Explanation: What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook).
Why?
Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
How will I learn PyTorch?
If you've used Torch before, but are new to PyTorch, this tutorial might be of use: http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html
Otherwise, this notebook will walk you through much of what you need to do to train models in Torch. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
Load Datasets
We load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.
End of explanation
dtype = torch.FloatTensor # the CPU datatype
# Constant to control how frequently we print train loss
print_every = 100
# This is a little utility that we'll use to reset the model
# if we want to re-initialize all our parameters
def reset(m):
if hasattr(m, 'reset_parameters'):
m.reset_parameters()
Explanation: For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
End of explanation
class Flatten(nn.Module):
def forward(self, x):
N, C, H, W = x.size() # read in N, C, H, W
return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
Explanation: Example Model
Some assorted tidbits
Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.
We'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where:
* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the width of the intermediate feature map in pixels
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we input data into fully connected affine layers, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "Flatten" operation to collapse the C x H x W values per representation into a single long vector. The Flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
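As a quick sanity check of what view does, here is a tiny sketch (the tensor below is a made-up stand-in for a real data batch):
fake_batch = torch.randn(64, 3, 32, 32)   # N x C x H x W
flat = fake_batch.view(64, -1)            # -1 lets PyTorch infer C*H*W = 3*32*32 = 3072
print(flat.size())                        # torch.Size([64, 3072])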
End of explanation
# Here's where we define the architecture of the model...
simple_model = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=7, stride=2),
nn.ReLU(inplace=True),
Flatten(), # see above for explanation
nn.Linear(5408, 10), # affine layer
)
# Set the type of all data in this model to be FloatTensor
simple_model.type(dtype)
loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.Adam(simple_model.parameters(), lr=1e-2) # lr sets the learning rate of the optimizer
Explanation: The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer
one after the other.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
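If the 5408 is not obvious, here is the arithmetic spelled out (a small illustrative check, not part of the assignment code): a 7x7 convolution with stride 2 and no padding on a 32x32 image produces (32 - 7) // 2 + 1 = 13 activations along each spatial dimension, and with 32 filters that is 32 * 13 * 13 = 5408 values per image feeding the affine layer; the 10 outputs are one score per CIFAR-10 class.
H_out = (32 - 7) // 2 + 1      # 13
print(32 * H_out * H_out)      # 5408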
End of explanation
fixed_model_base = nn.Sequential( # You fill this in!
nn.Conv2d(3, 32, kernel_size=7, stride=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.MaxPool2d(2, stride=2),
Flatten(),
nn.Linear(5408, 1024),
nn.ReLU(inplace=True),
nn.Linear(1024, 10),
)
fixed_model = fixed_model_base.type(dtype)
loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.RMSprop(fixed_model.parameters())
Explanation: PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note: what we call in the class "spatial batch norm" is called "BatchNorm2D" in PyTorch.
Layers: http://pytorch.org/docs/nn.html
Activations: http://pytorch.org/docs/nn.html#non-linear-activations
Loss functions: http://pytorch.org/docs/nn.html#loss-functions
Optimizers: http://pytorch.org/docs/optim.html#algorithms
Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the PyTorch documentation and configuring your own model.
Using the code provided above as guidance, and using the following PyTorch documentation, specify a model with the following architecture:
7x7 Convolutional Layer with 32 filters and stride of 1
ReLU Activation Layer
Spatial Batch Normalization Layer
2x2 Max Pooling layer with a stride of 2
Affine layer with 1024 output units
ReLU Activation Layer
Affine layer from 1024 input units to 10 outputs
And finally, set up a cross-entropy loss function and the RMSprop learning rule.
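If the anonymous nn.Sequential gets hard to read as it grows, note that it also accepts an OrderedDict of named layers. A sketch of the same architecture with names (functionally equivalent; the layer names are arbitrary choices, purely for readability):
from collections import OrderedDict
named_model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(3, 32, kernel_size=7, stride=1)),
    ('relu1', nn.ReLU(inplace=True)),
    ('bn1', nn.BatchNorm2d(32)),
    ('pool1', nn.MaxPool2d(2, stride=2)),
    ('flatten', Flatten()),
    ('fc1', nn.Linear(5408, 1024)),   # 32 filters * 13 * 13 spatial positions = 5408
    ('relu2', nn.ReLU(inplace=True)),
    ('fc2', nn.Linear(1024, 10)),
]))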
End of explanation
## Now we're going to feed a random batch into the model you defined and make sure the output is the right size
x = torch.randn(64, 3, 32, 32).type(dtype)
x_var = Variable(x.type(dtype)) # Construct a PyTorch Variable out of your input data
ans = fixed_model(x_var) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
Explanation: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
End of explanation
# Verify that CUDA is properly configured and you have a GPU available
torch.cuda.is_available()
import copy
gpu_dtype = torch.cuda.FloatTensor
fixed_model_gpu = copy.deepcopy(fixed_model_base).type(gpu_dtype)
x_gpu = torch.randn(64, 3, 32, 32).type(gpu_dtype)
x_var_gpu = Variable(x.type(gpu_dtype)) # Construct a PyTorch Variable out of your input data
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
Explanation: GPU!
Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.
If this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.
End of explanation
%%timeit
ans = fixed_model(x_var)
Explanation: Run the following cell to evaluate the performance of the forward pass running on the CPU:
End of explanation
%%timeit
torch.cuda.synchronize() # Make sure there are no pending GPU computations
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
torch.cuda.synchronize() # Make sure there are no pending GPU computations
Explanation: ... and now the GPU:
End of explanation
loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.RMSprop(fixed_model.parameters(), lr=1e-3) # optimize the parameters of fixed_model, which is the model trained below
# This sets the model in "training" mode. This is relevant for some layers that may have different behavior
# in training mode vs testing mode, such as Dropout and BatchNorm.
fixed_model.train()
# Load one batch at a time.
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(dtype))
y_var = Variable(y.type(dtype).long())
# This is the forward pass: predict the scores for each class, for each x in the batch.
scores = fixed_model(x_var)
# Use the correct y values and the predicted y values to compute the loss.
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
# Zero out all of the gradients for the variables which the optimizer will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with respect to each
# parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients computed by the backwards pass.
optimizer.step()
Explanation: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder that is torch.cuda.FloatTensor (in our notebook here as gpu_dtype)
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the fixed_model you specified above).
Make sure you understand how each PyTorch function used below corresponds to what you implemented in your custom neural network implementation.
Note that because we are not resetting the weights anywhere below, if you run the cell multiple times, you are effectively training multiple epochs (so your performance should improve).
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function:
End of explanation
def train(model, loss_fn, optimizer, num_epochs = 1):
for epoch in range(num_epochs):
print('Starting epoch %d / %d' % (epoch + 1, num_epochs))
model.train()
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(dtype))
y_var = Variable(y.type(dtype).long())
scores = model(x_var)
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
def check_accuracy(model, loader):
if loader.dataset.train:
print('Checking accuracy on validation set')
else:
print('Checking accuracy on test set')
num_correct = 0
num_samples = 0
model.eval() # Put the model in test mode (the opposite of model.train(), essentially)
for x, y in loader:
x_var = Variable(x.type(dtype), volatile=True)
scores = model(x_var)
_, preds = scores.data.cpu().max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
Explanation: Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:
End of explanation
torch.manual_seed(12345)
fixed_model.apply(reset)
train(fixed_model, loss_fn, optimizer, num_epochs=1)
check_accuracy(fixed_model, loader_val)
Explanation: Check the accuracy of the model.
Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.
You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.
But don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.
End of explanation
# Train your model here, and make sure the output of this cell is the accuracy of your best model on the
# train, val, and test sets. Here's some code to get you started. The output of this cell should be the training
# and validation accuracy on your best model (measured by validation accuracy).
model = None
loss_fn = None
optimizer = None
train(model, loss_fn, optimizer, num_epochs=1)
check_accuracy(model, loader_val)
Explanation: Don't forget the validation set!
And note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value.
Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >=70% accuracy on the CIFAR-10 validation set. You can use the check_accuracy and train functions from above.
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Pooling vs Strided Convolution: Do you use max pooling or just strided convolutions?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get down to a 1x1 spatial output of shape (1, 1, Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture); a minimal sketch of such a pooling head follows this list.
Regularization: Add l2 weight regularization, or perhaps use Dropout.
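A rough sketch of the global average pooling head mentioned above, reusing the Flatten module defined earlier (the 128 channels and 7x7 spatial size are illustrative assumptions, not a prescribed architecture): once your conv stack has reduced the feature map to 128 x 7 x 7, average over the spatial grid and classify the resulting 128-vector.
gap_head = nn.Sequential(
    nn.AvgPool2d(7),    # 128 x 7 x 7 -> 128 x 1 x 1
    Flatten(),          # -> a 128-vector per image
    nn.Linear(128, 10),
)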
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
Model ensembles
Data augmentation
New Architectures
ResNets where the input from the previous layer is added to the output.
DenseNets where inputs into previous layers are concatenated together.
This blog has an in-depth overview
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network.
Have fun and happy training!
End of explanation
best_model = None
check_accuracy(best_model, loader_test)
Explanation: Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Tell us here!
Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
End of explanation |
1,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CM360 Report
Create a CM report from a JSON definition.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter CM360 Report Recipe Parameters
Add an account as [account_id]@[profile_id]
Fetch the report JSON definition. Arguably could be better.
The account is automatically added to the report definition.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute CM360 Report
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: CM360 Report
Create a CM report from a JSON definition.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'account':'',
'body':'{}',
'delete':False,
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter CM360 Report Recipe Parameters
Add an account as [account_id]@[profile_id]
Fetch the report JSON definition. Arguably could be better.
The account is automatically added to the report definition.
Modify the values below for your use case, can be done multiple times, then click play.
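For example, a filled-in FIELDS might look like the sketch below; the account string and report body are made-up placeholders that only show the expected shape of each value, not a working report definition.
FIELDS = {
  'auth_read':'user',                        # Credentials used for reading data.
  'account':'1234567@8901234',               # hypothetical [account_id]@[profile_id]
  'body':'{"name": "Example Report", "type": "STANDARD"}',  # JSON report definition (illustrative only)
  'delete':False,
}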
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dcm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'account':{'field':{'name':'account','kind':'string','order':1,'default':''}},
'body':{'field':{'name':'body','kind':'json','order':2,'default':'{}'}}
},
'delete':{'field':{'name':'delete','kind':'boolean','order':3,'default':False}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute CM360 Report
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
1,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From Tensor SkFlow
Step1: Load Iris Data
Step2: Initialize a deep neural network autoencoder
Step3: Fit with Iris data | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import random
from sklearn.pipeline import Pipeline
from chainer import optimizers
from commonml.skchainer import MeanSquaredErrorRegressor, AutoEncoder
from tensorflow.contrib.learn import datasets
import logging
logging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO)
logging.root.level = 20
Explanation: From Tensor SkFlow: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/dnn_autoencoder_iris.py
Import
End of explanation
iris = datasets.load_iris()
Explanation: Load Iris Data
End of explanation
autoencoder = Pipeline([('autoencoder1',
AutoEncoder(4, 10, MeanSquaredErrorRegressor, dropout_ratio=0, optimizer=optimizers.AdaGrad(lr=0.1),
batch_size=128, n_epoch=100, gpu=0)),
('autoencoder2',
AutoEncoder(10, 20, MeanSquaredErrorRegressor, dropout_ratio=0, optimizer=optimizers.AdaGrad(lr=0.1),
batch_size=128, n_epoch=100, gpu=0))])
Explanation: Initialize a deep neural network autoencoder
End of explanation
transformed = autoencoder.fit_transform(iris.data)
print(transformed)
Explanation: Fit with Iris data
End of explanation |
1,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
Step2: Convolution
Step4: Aside
Step5: Convolution
Step6: Max pooling
Step7: Max pooling
Step8: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step9: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
Step10: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug
Step11: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer.
Step12: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
Step13: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step14: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set
Step15: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following
Step16: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization
Step17: Spatial batch normalization
Step18: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started | Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
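If you get stuck, one possible naive implementation is sketched below; it assumes conv_param carries 'stride' and 'pad' as in the test above, and it omits the cache bookkeeping the real function is expected to return. It is deliberately loop-heavy for clarity, not speed.
def conv_forward_naive_sketch(x, w, b, conv_param):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                    # each image
        for f in range(F):                # each filter
            for i in range(H_out):        # each output row
                for j in range(W_out):    # each output column
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out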
End of explanation
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
Tiny helper to show images as uint8 and remove axis labels
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
#Could not understand the backward pass of this----may be some other day
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
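Since the code above notes that the backward pass was the sticking point, here is one hedged sketch of it; it assumes the forward pass cached (x, w, b, conv_param), which is the usual convention in this assignment, and again trades speed for readability.
def conv_backward_naive_sketch(dout, cache):
    x, w, b, conv_param = cache              # assumed cache layout
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))            # each filter's bias touches every output it produced
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:pad+H, pad:pad+W]  # strip the padding back off
    return dx, dw, db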
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
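One possible naive version is sketched below (illustrative only; it skips the cache the real function is expected to return):
def max_pool_forward_naive_sketch(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))   # max over each pooling window
    return out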
End of explanation
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
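A matching sketch of the backward pass (assuming the forward pass cached (x, pool_param)): the upstream gradient is routed only to the position that achieved the max in each window.
def max_pool_backward_naive_sketch(dout, cache):
    x, pool_param = cache                     # assumed cache layout
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    window = x[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw]
                    mask = (window == window.max())       # winner takes the gradient
                    dx[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw] += mask * dout[n, c, i, j]
    return dx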
End of explanation
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
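The pattern is plain function composition with a combined cache. A sketch of how conv_relu_forward/backward could be assembled (assuming the relu_forward and relu_backward helpers from the earlier parts of the assignment are available):
def conv_relu_forward_sketch(x, w, b, conv_param):
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)
    out, relu_cache = relu_forward(a)
    return out, (conv_cache, relu_cache)
def conv_relu_backward_sketch(dout, cache):
    conv_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return conv_backward_fast(da, conv_cache)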
End of explanation
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
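For CIFAR-10 that expectation is easy to pin down numerically (a quick check, nothing more): a 10-way classifier guessing uniformly at random has a softmax loss of log(10).
print(np.log(10))   # ~2.3026 -- the unregularized initial loss should be close to this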
End of explanation
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer.
End of explanation
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
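One common route is to reuse the vanilla batchnorm_forward you wrote earlier in the assignment by folding the N, H, W dimensions together, so every spatial position counts as a sample for its channel. A hedged sketch (it assumes that vanilla function is available in cs231n/layers.py):
def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)       # (N*H*W, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return out, cache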
End of explanation
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
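The backward pass can mirror the same reshape trick (again assuming the vanilla batchnorm_backward from earlier in the assignment, and that its cache is the one produced by the reshaped forward call):
def spatial_batchnorm_backward_sketch(dout, cache):
    N, C, H, W = dout.shape
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return dx, dgamma, dbeta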
End of explanation
# Train a really good model on CIFAR-10
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]XN - [affine]XM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation |
1,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: script available on GitHub
Installation and Setup
Installation
Refer to this how-to.
Manage Python Packages
Python has its own package manager "pip" to keep Python self-contained. pip also allows access to new packages even if your OS is out-of-date. If you installed Python using Anaconda, then your package manager will be "conda", which also has nice documentation.
Running Python
interactive mode
Open a terminal and type "python"
run a python script
Put Python code (eg. print("hello world")) in a file, say "hello.py".
Invoke the Python interpreter with the file as its first argument.
bash
python hello.py
recommended editors
If you plan to do computational research in the future, please pick either emacs or vim. They are excellent command-line editors with many academic users. Command-line editors have steeper learning curves than GUI-based ones, but you can do way more with them over time (good luck using GUI editors on supercomputers). Many excellent online tutorials exist.
Atom
Sublime
Emacs
Vim
Rules!
Rule #1
Step2: Rule #2
Step3: more advanced functions can be accessed using the numpy package
Step4: Loop and Condition
Step5: python uses indentation to determine blocks, you could easily have done
python
my_index = 1
while my_index < 4
Step6: Functions
define modular functions to maximize code reusability and readability
Step7: Tuples, Lists, Dictionaries
list
Step8: List Comprehension
Step9: List splicing
Step10: gotcha!
Step11: Variables and Scope
The scope of a variable is the union of all places in the code where the variable can be accessed. Variables in a function are "local" and cannot be access by other parts of the program unless returned.
Step12: Classes
Classes help bundle together related variables and functions. Well-designed classes are sensible abstract objects that will allow higher level programming without the need to worry about details of implementation.
fun fact
Step14: Basic Plotting
Step15: Intermediate Use Cases
vectorized operations with numpy array
python for loops are VERY slow
numpy vectorized operations are about as fast as fortran (LAPACK under the hood)
Step16: 3 orders of magnitude speed difference!
particle swarm optimization example
Step17: Text Parsing
plain text file
Step18: Database | Python Code:
# I don't know how to write a program but I am charming,
# so I will write down the equations to be implemented
# and find a friend to write it :)
It is annoying to have to start each comment with a #,
triple quotation allows multi-line comments.
It is always a good idea to write lots of comment to lay out the
cohesive idea you had while starting to write a piece of code.
More often than not, we forget that impressive grand plan we started
with as we fight with syntax error and other nitty-gritty of
talking to a computer.
;
Explanation: script available on GitHub
Installation and Setup
Installation
Refer to this how-to.
Manage Python Packages
Python has its own package manager "pip" to keep Python self-contained. pip also allows access to new packages even if your OS is out-of-date. If you installed Python using Anaconda, then your package manager will be "conda", which also has nice documentation.
Running Python
interactive mode
Open a terminal and type "python"
run a python script
Put Python code (eg. print("hello world")) in a file, say "hello.py".
Invoke the Python interpreter with the file as its first argument.
bash
python hello.py
recommended editors
If you plan to do computational research in the future, please pick either emacs or vim. They are excellent command-line editors with many academic users. Command-line editors have steeper learning curves than GUI-based ones, but you can do way more with them over time (good luck using GUI editors on supercomputers). Many excellent online tutorials exist.
Atom
Sublime
Emacs
Vim
Rules!
Rule #1: Write Comments
The more the better!
End of explanation
1 + 1
2*3
2**3
7/2 # gotcha !
7./2
5%2 # modulo
Explanation: Rule #2: Follow Best Practices
An excellent paper by Greg Wilson et al. concisely summarizes the best practices of scientific computing. I will steal the most relevant section from the summary paragraph here:
Write programs for people, not computers
A program should not require its readers to hold more than a handful of facts in memory at once.
Make names consistent, distinctive, and meaningful.
Make code style and formatting consistent.
Let the computer do the work.
Make the computer repeat tasks.
Save recent commands in a file for re-use.
Use a build tool (or Jupyter notebook) to automate and save workflows.
Make incremental changes.
Work in small steps with frequent feedback and course correction.
Use a version control system (e.g. git, subversion)
Upload all work into the version control system
Don't repeat yourself (or others)
Every piece of data must have a single authoritative representation in the system.
Modularize code rather than copying and pasting.
Re-use code (yours or others) instead of rewriting it.
Plan for mistakes
Add assertions to programs to check their operation.
Use an off-the-shelf unit testing library.
Turn bugs into test cases.
Basic Use Cases
Much of the following can be found on "A Beginner's Python Tutorial"
Using python as a calculator
basic arithmetics are built in
End of explanation
import numpy as np
np.exp(1j)
np.cos(1) + 1j*np.sin(1)
np.sqrt(144)
Explanation: more advanced functions can be accessed using the numpy package
End of explanation
for my_index in [1,2,3]:
print(my_index)
# end for
for my_index in range(3):
print(my_index)
# end for
# while loop may not terminate
my_index = 1
while my_index < 4:
print(my_index)
my_index += 1 # try comment this out... JK don't do it!
# end while
Explanation: Loop and Condition
End of explanation
# for loop always terminates, thus it is preferred
for my_index in range(10):
if (my_index>0) and (my_index<=3):
print(my_index)
elif (my_index>3):
break
# end if
# end for
Explanation: python uses indentation to determine blocks, you could easily have done
python
my_index = 1
while my_index < 4:
print(my_index)
my_index += 1
that would be a big oopsy
introducing break
End of explanation
def boltzmann_factor(energy_in_J,temperature_in_K):
# 1 joule = 7.243e22 K *kB
kB = 1.38e-23 # m^2 kg/ s^2 K
return np.exp(-float(energy_in_J)/kB/temperature_in_K)
# end def
def fermi_dirac_dist(energy_in_J,temperature_in_K,chemical_pot_in_J):
    denominator = 1.0/boltzmann_factor(
        energy_in_J-chemical_pot_in_J
        ,temperature_in_K
    ) + 1.0
    return 1.0/denominator
# end def
def bose_einstein_dist(energy_in_J,temperature_in_K,chemical_pot_in_J):
    denominator = 1.0/boltzmann_factor(
        energy_in_J-chemical_pot_in_J
        ,temperature_in_K
    ) - 1.0
    return 1.0/denominator
# end def
# 50% occupation near chemical potential
fermi_dirac_dist(1.01e-22,300,1e-22)
# divergent occupation near chemical potential
bose_einstein_dist(1.01e-22,300,1e-22)
Explanation: Functions
define modular functions to maximize code reusability and readability
End of explanation
mylist = [5,4,2,3,1]
for item in mylist:
print(item)
# end for
mylist[2] = 100
mylist.insert(0,50)
for i in range(len(mylist)):
print( mylist[i] )
# end for
mytuple = (5,4,2,3,1)
for item in mytuple:
print(item)
# end for
mytuple[2] = 100
# oopsy-daisies
mydict = {0:5,1:4,2:2,3:3,4:1}
for i in range(len(mydict)):
print( mydict[i] )
# end for
mydict = {
"name":"Paul"
,"favorite number":42
,"where abouts":"elusive"
,"hobbies":["coffee","code"]
}
mydict.keys()
mydict["where abouts"]
mydict["new entry"] = False
for key,value in mydict.iteritems():
print( "%s : %s" % (str(key),str(value)) )
# end for
Explanation: Tuples, Lists, Dictionaries
list: iterable, extendable, mutable and ordered array of elements
tuple: immutable list
dictionary: iterable, extendable, mutable and un-ordered key-value pairs
End of explanation
mylist = [5,4,2,3,1]
[item**2 for item in mylist]
square_and_shift = lambda x,y:x**2+y
[square_and_shift(item,50) for item in mylist]
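# Extra illustration (added): a comprehension can also filter with an if
# clause, and the same syntax builds dictionaries.
[item**2 for item in mylist if item % 2 == 0]  # squares of even entries only
{item: item**2 for item in mylist}             # dict comprehension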
Explanation: List Comprehension
End of explanation
# from index 1 to -2 (wrap around)
mylist[1:-2]
# all even indices
mylist[::2]
# all odd indices
mylist[1::2]
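# Extra illustration (added): a negative step walks the list backwards,
# so [::-1] is a compact way to get a reversed copy.
mylist[::-1]
# last two elements
mylist[-2:]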
Explanation: List splicing
End of explanation
mylist = [5,4,2,3,1]
entry = [1,2]
mylist.append(entry)
# only a reference to entry is saved, NOT a copy, which means ...
# entry can be changed elsewhere without mylist knowing
mylist
entry[0] = 10
mylist
# use a deep copy to avoid the above problem
from copy import deepcopy
mylist = [5,4,2,3,1]
entry = [1,2]
mylist.append( deepcopy(entry) )
entry[0] = 10
mylist
Explanation: gotcha!
End of explanation
demon_burn_my_soul = 50.0
def firey_hell(demon_burn_my_soul):
demon_burn_my_soul += 10.
firey_hell(20)
print(demon_burn_my_soul)
# you can use a global variable, but this is NOT recommended
# see classes for better solution
global demon_burn_my_soul
demon_burn_my_soul = 50.0
def firey_hell():
# side effect! bad! bad! bad!
global demon_burn_my_soul
demon_burn_my_soul += 10.
firey_hell()
print(demon_burn_my_soul)
Explanation: Variables and Scope
The scope of a variable is the union of all places in the code where the variable can be accessed. Variables in a function are "local" and cannot be access by other parts of the program unless returned.
End of explanation
class RockStar:
def __init__(self):
self.demon_burn_my_soul = 50.0
# end def init
def firey_hell(self):
self.demon_burn_my_soul += 10.0
# end def
def cry_my_veins(self):
return self.demon_burn_my_soul
# end def cry_my_veins
# end class RockStar
me = RockStar()
me.cry_my_veins()
me.firey_hell()
me.cry_my_veins()
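# Extra illustration (added): classes can also inherit from one another,
# so shared behavior only has to be written once.
class QuietRockStar(RockStar):
    def cry_my_veins(self):
        # reuse the parent implementation, then soften the result
        return 0.5 * RockStar.cry_my_veins(self)
# end class QuietRockStar

you = QuietRockStar()
you.firey_hell()
you.cry_my_veins()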
Explanation: Classes
Classes help bundle together related variables and functions. Well-designed classes are sensible abstract objects that will allow higher level programming without the need to worry about details of implementation.
fun fact
End of explanation
trace_text = """-7.436823 -7.290942 -7.271528 -7.282786 -7.283622 -7.268156 -7.401003
-7.304412 -7.211659 -7.231061 -7.27238 -7.287718 -7.240896 -7.121189
-7.098841 -7.169402 -7.16689 -7.161854 -7.204029 -7.284694 -7.260288
-7.368507 -7.472383 -7.442443 -7.448409 -7.409199 -7.353145 -7.242572
-7.277459 -7.24589 -7.159036 -7.268178 -7.234837 -7.165567 -7.165357
-7.137534 -7.231942 -7.225935 -7.16142 -7.183465 -7.257877 -7.279006
-7.284249 -7.306481 -7.240192 -7.286245 -7.316336 -7.251441 -7.192566
-7.191351 -7.065362 -7.050815 -7.116456 -7.186705 -7.242357 -7.240123
-7.284564 -7.385903 -7.468834 -7.427641 -7.378051 -7.315574 -7.287397
-7.262906 -7.197077 -7.187754 -7.136347 -7.149802 -7.301047 -7.281932
-7.353314 -7.434607 -7.375526 -7.397572 -7.433974 -7.477175 -7.471739
-7.474228 -7.51791 -7.525722 -7.52028 -7.534158 -7.539559 -7.53915
-7.533163 -7.426446 -7.417031 -7.475554 -7.41521 -7.377752 -7.319138
-7.20372 -7.294216 -7.290163 -7.310827 -7.302531 -7.339285 -7.252367
-7.232718 -7.275662"""
trace = list(map(float, trace_text.split()))
import matplotlib.pyplot as plt
%matplotlib inline
# Jupyter-specific magic command, ignore for regular script
stuff = plt.plot(trace)
# plt.show() needed for regular script
# suppose the entries have error
err = np.std(trace) * np.random.rand(len(trace))
plt.errorbar(range(len(trace)),trace,err)
# see trend (correlation) with exponential smoothing
import pandas as pd
data = pd.Series(trace)
plt.plot( trace )
plt.plot( data.ewm(span=5).mean(),ls="--",lw=2 )
Explanation: Basic Plotting
End of explanation
import numpy as np
def get_mat_vec(nsize):
mat = np.random.rand(nsize,nsize)
vec = np.random.rand(nsize)
return mat,vec
# end def
def mat_vec_np(mat,vec):
prod = np.dot(mat,vec)
return prod
# end def
def mat_vec_naive(mat,vec):
prod = np.zeros(nsize)
for i in range(nsize):
for j in range(nsize):
prod[i] += mat[i,j]*vec[j]
# end for j
# end for i
return prod
# end def
# verify correctness
nsize = 100
mat,vec = get_mat_vec(nsize)
p1 = mat_vec_np(mat,vec)
p2 = mat_vec_naive(mat,vec)
np.allclose(p1,p2)
# time it
nsize = 1000
mat,vec = get_mat_vec(nsize)
%timeit mat_vec_np(mat,vec)
%timeit -n 10 mat_vec_naive(mat,vec)
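# Extra illustration (added): broadcasting is another form of vectorization;
# numpy applies the operation elementwise without an explicit Python loop.
vec_shifted = vec + 1.0  # add a scalar to every entry at once
row_scaled = mat * vec   # scale each row of mat elementwise by vec
np.allclose(row_scaled[0], mat[0] * vec)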
Explanation: Intermediate Use Cases
vectorized operations with numpy array
python for loops are VERY slow
numpy vectorized operations are about as fast as fortran (LAPACK under the hood)
End of explanation
import numpy as np
from copy import deepcopy
import matplotlib.pyplot as plt
%matplotlib inline
def rastrigin2d(rvec,A=10.):
ndim = len(rvec)
const = A * ndim
tosum = rvec**2. - A*np.cos(2*np.pi*rvec)
return const + tosum.sum()
# end def
# put function on a grid for visualization
minx = -5.12
maxx = 5.15
nx = 100
x = np.linspace(minx,maxx,nx)
grid = np.apply_along_axis(rastrigin2d,1
,[np.array([myx,myy]) for myx in x for myy in x] ) # vectorized
grid = grid.reshape(nx,nx) # reshape for plotting
# visualize
fig = plt.figure()
ax = fig.add_subplot(111,aspect=1)
ax.set_xlabel("x")
ax.set_ylabel("y")
cs = ax.contourf(x,x,grid.T,cmap=plt.cm.magma)
# transpose is needed because matrix index direction and plot axes
# directions are opposite of one another.
plt.colorbar(cs)
# below I will use pso to find the minimum of this function
# initialize population
pop_size = 20
dim = 2
pop = (maxx-minx) * np.random.rand(pop_size,dim) + minx
# find personal best
individual_best = np.apply_along_axis(rastrigin2d,1,pop) # vectorized
individual_best_pos = deepcopy(pop) # deep copy for array of arrays
# find population best
min_idx = np.argmin(individual_best) # find minimum index
global_best = individual_best[min_idx].copy() # find minimum
global_best_pos = pop[min_idx].copy() # shallow copy sufficient for array
# initialize hopping sizes and directions
max_hop = 0.3
hop = max_hop * np.random.rand(pop_size,dim)
background = plt.figure()
ax = background.add_subplot(111,aspect=1)
ax.set_xlabel("x")
ax.set_ylabel("y")
cs = ax.contourf(x,x,grid.T,alpha=0.3,cmap=plt.cm.magma)
ax.scatter(pop.T[0],pop.T[1],label="current positions")
ax.scatter(individual_best_pos.T[0],individual_best_pos.T[1]
,c="g",alpha=0.7,label="individual best",marker='^',s=40)
ax.scatter(global_best_pos[0],global_best_pos[1],color="r"
,label="global best",marker="*",s=80)
ax.legend(scatterpoints = 1,fontsize=10,loc="best")
background.colorbar(cs)
c1 = 2
c2 = 2
max_it = 5
for istep in range(max_it):
# evaluate fitness of population
fitness = np.apply_along_axis(rastrigin2d,1,pop)
# calculate global best
min_idx = np.argmin(fitness)
current_best = fitness[min_idx]
if current_best < global_best:
global_best = current_best
global_best_pos = pop[min_idx].copy()
# end if
# update individual best
idx = np.where( np.array(fitness) < np.array(individual_best) )
individual_best[idx] = fitness[idx]
individual_best_pos[idx] = deepcopy( pop[idx] )
# update hopping
hop += c1*np.random.rand()*(individual_best_pos-pop) + \
c2*np.random.rand()*(global_best_pos-pop)
idx = np.where( abs(hop) > max_hop )
hop[idx] = np.sign(hop[idx])*max_hop
# update populaton
pop += hop
# end for istep
background = plt.figure()
ax = background.add_subplot(111,aspect=1)
ax.set_xlabel("x")
ax.set_ylabel("y")
cs = ax.contourf(x,x,grid.T,alpha=0.3,cmap=plt.cm.magma)
ax.scatter(pop.T[0],pop.T[1],label="current positions")
ax.scatter(individual_best_pos.T[0],individual_best_pos.T[1]
,c="g",alpha=0.7,label="individual best",marker='^',s=40)
ax.scatter(global_best_pos[0],global_best_pos[1],color="r"
,label="global best",marker="*",s=80)
ax.legend(scatterpoints = 1,fontsize=10,loc="best")
background.colorbar(cs)
global_best
global_best_pos
Explanation: 3 orders of magnitude speed difference!
particle swarm optimization example
End of explanation
%%writefile output.txt
# I am an ugly output file, but I have many hidden treasures
BEGIN PASSWORDS OF EVERYONE
test123
1234567890
abcde
hello
passwd
password
END PASSWORDS OF EVERYONE
data follows
3
1.0 2.0 3.0
4.0 5.0 6.0
7.0 8.0 9.0
# one text block
fhandle = open("output.txt",'r')
# now you have to parse this ugly text
fhandle.read()
# line by line
fhandle = open("output.txt",'r')
for line in fhandle:
    print(line)
# smart search
from mmap import mmap
fhandle = open("output.txt",'r+')
mm = mmap(fhandle.fileno(),0) # 0 means read from beginning
# read block
begin_idx = mm.find("BEGIN")
end_idx = mm.find("END")
good_lines= mm[begin_idx:end_idx].split("\n")[1:-1]
good_lines
# read data section
mm.seek(0) # go to beginning of file
idx = mm.find("data follows")
mm.seek(idx) # goto data line
mm.readline() # skip header
ndata = int(mm.readline())
data = []
for idata in range(ndata):
data.append( map(float,mm.readline().split()) )
# end for idata
data
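# Alternative sketch (added, not part of the original walkthrough): regular
# expressions can pull the password block out of the same file without
# tracking byte offsets by hand.
import re
with open("output.txt", 'r') as fh:
    text = fh.read()
m = re.search(r"BEGIN PASSWORDS OF EVERYONE\n(.*?)\nEND PASSWORDS", text, re.S)
m.group(1).split("\n")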
Explanation: Text Parsing
plain text file
End of explanation
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# loaded databases is easy
dft = pd.read_json("dft.json")
qmc = pd.read_json("qmc.json")
# first thing to do is look at it? not so useful
dft
# look at columns
dft.columns
# access interesting columns
dft[["energy","pressure"]]
# plot energy vs. displacement
xlabel = "istep"
ylabel = "energy"
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.scatter(dft[xlabel],dft[ylabel])
plt.ylim(-3.8665,-3.865)
dmc = qmc[qmc["iqmc"]==4]
vmc = qmc[qmc["iqmc"]==0]
xlabel = "istep"
ylabel = "LocalEnergy_mean"
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.scatter(dmc[xlabel],dmc[ylabel])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel("displacement (bohr)")
ax.set_ylabel("Varaince (Ha$^2$)")
marker_style = {0:"s",1:"^"}
colors = {0:"g",1:"r"}
rjas = 1 # use reference jastrow
for rorb in [0,1]:
mydf = vmc[ vmc["rorb"] == rorb ]
ax.errorbar(mydf["disp"],mydf["Variance_mean"],mydf["Variance_error"].values,ls="",marker=marker_style[rorb]
,color=colors[rorb],label="ref. orb %d"%rorb)
# end for
ax.legend(loc="best",scatterpoints=1)
#ax.set_ylim(1.1,1.5)
fig.tight_layout()
#plt.savefig("variance_vs_disp-rjas1.eps")
# drop bad runs
sel1 = (qmc["imode"]==5) & (qmc["istep"]==-2)
qmc = qmc.drop(qmc[sel1].index)
sel2 = (qmc["imode"]==10) & (qmc["istep"]==2)
qmc = qmc.drop(qmc[sel2].index)
dmc = qmc[qmc["iqmc"]==4]
vmc = qmc[qmc["iqmc"]==0]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel("displacement (bohr)")
ax.set_ylabel("Varaince (Ha$^2$)")
marker_style = {0:"s",1:"^"}
colors = {0:"g",1:"r"}
rjas = 1 # use reference jastrow
for rorb in [0,1]:
mydf = vmc[ vmc["rorb"] == rorb ]
ax.errorbar(mydf["disp"],mydf["Variance_mean"],mydf["Variance_error"].values,ls="",marker=marker_style[rorb]
,color=colors[rorb],label="ref. orb %d"%rorb)
# end for
ax.legend(loc="best",scatterpoints=1)
#ax.set_ylim(1.1,1.5)
fig.tight_layout()
#plt.savefig("variance_vs_disp-rjas1.eps")
dmc.groupby(["rorb","rjas"]).apply(np.mean)[["LocalEnergy_mean","Variance_mean"]]
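# Extra illustration (added): aggregated results are ordinary dataframes, so
# they can be written out and reloaded like any other table.
summary = dmc.groupby(["rorb","rjas"]).apply(np.mean)[["LocalEnergy_mean","Variance_mean"]]
summary.to_csv("dmc_summary.csv")
pd.read_csv("dmc_summary.csv")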
Explanation: Database
End of explanation |
1,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artificial Intelligence & Machine Learning
Tugas 3
Step1: 1. Dynamic Programming (5 poin)
Seorang pria di Australia pada tahun 2017 memesan 200 McNuggets melalui drive-through hingga diliput oleh media setempat. Asumsikan bahwa McDonald's bersedia memenuhi permintaan tersebut dan dalam menu terdapat kombinasi paket McNuggets berisi 3, 6, 10, dan 24. Buatlah program dinamis untuk menghitung berapa jumlah paket McNuggets minimum yang dapat diberikan kepada pria tersebut!
Step2: 2. Search (10 poin)
Diberikan peta UK sebagai berikut.
Step3: Soal 2.1 (2 poin)
Gunakan algoritma UCS dari networkx untuk mencari jalan dari London ke Edinburgh.
Step4: Soal 2.2 (4 poin)
Gunakan algoritma A* dari networkx untuk mencari jalan dari London ke Edinburgh. Implementasikan fungsi heuristik berdasarkan variabel heuristics yang diberikan.
Step5: Soal 2.3 (2 poin)
Berapa jarak tempuh dari jalur terpendek London ke Edinburgh dari soal 2.2?
Step6: Soal 2.4 (2 poin)
Apakah hasil pada soal 2.1 dan 2.2 sama? Mengapa?
Jawaban Anda di sini
3. Reinforcement Learning (10 poin)
Game yang akan dimainkan kali ini adalah Frozen Lake. Terjemahan bebas dari dokumentasi
Step7: Soal 3.2.1 (2 poin)
Simulasikan permainan ini dengan algoritma random dan SARSA. Bandingkan rata-rata utilities yang didapatkan.
Step8: Soal 3.2.2 (2 poin)
Gambarkan perubahan nilai utilities dari algoritma random dan SARSA dengan rolling mean 100 episodes. Apa yang dapat Anda amati?
Petunjuk
Step9: Soal 3.3 (2 poin)
Tampilkan optimal policy untuk setiap state dari algoritma SARSA. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
plt.rcParams = plt.rcParamsOrig
Explanation: Artificial Intelligence & Machine Learning
Assignment 3: Search & Reinforcement Learning
Mechanics
You only need to submit this file to the uploader provided at https://elearning.uai.ac.id/. Rename this file to tugas3_NIM.ipynb when submitting.
Lateness: Submissions past the stated deadline will not be accepted. Late work results in a zero for this assignment.
Collaboration: You may discuss the assignment with your classmates, but copying code or writing from them is strictly forbidden. Cheating results in a zero for this assignment.
Instructions
You may (if you feel it is necessary) import additional modules for this assignment. However, the modules already provided should be enough to cover your needs. For any code taken from another source, include the URL of the reference if it came from the internet!
Pay attention to the points for each problem! The smaller the points, the less code should be needed to answer that problem!
End of explanation
def dynamic_prog_mcnuggets(total_mcnuggets: int, packs: list):
pass # Kode Anda di sini
dynamic_prog_mcnuggets(200, [3, 6, 10, 24])
Explanation: 1. Dynamic Programming (5 points)
In 2017, a man in Australia ordered 200 McNuggets through the drive-through and ended up covered by the local media. Assume that McDonald's is willing to fulfil the order and that the menu offers McNuggets packs of 3, 6, 10, and 24. Write a dynamic program to compute the minimum number of McNuggets packs that can be given to the man!
End of explanation
import networkx as nx
import urllib
locs = pd.read_csv('https://raw.githubusercontent.com/aliakbars/uai-ai/master/datasets/uk-coordinates.csv')
heuristics = pd.read_csv('https://raw.githubusercontent.com/aliakbars/uai-ai/master/datasets/uk-heuristics.csv')
G = nx.read_gpickle(urllib.request.urlopen("https://raw.githubusercontent.com/aliakbars/uai-ai/master/datasets/uk.pickle"))
def draw_map(G, locs):
pos = locs.set_index('city_name').apply(
lambda x: (x['longitude'], x['latitude']),
axis=1
).to_dict()
fig, ax = plt.subplots(figsize=(7, 7))
nx.draw(
G, pos,
with_labels=True,
edge_color='#DDDDDD',
node_color='#A0CBE2',
node_size=300,
font_size=10,
ax=ax
)
labels = nx.get_edge_attributes(G, 'weight')
labels = {k: np.round(v).astype(int) for k, v in labels.items()}
nx.draw_networkx_edge_labels(
G, pos,
edge_labels=labels,
ax=ax
)
draw_map(G, locs)
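# Sanity check (added for illustration; assumes the node labels match the
# city_name values, e.g. 'London'): inspect a node's neighbours and edge
# weights before running any search algorithm.
list(G.neighbors('London'))
G['London']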
Explanation: 2. Search (10 points)
Given the following map of the UK.
End of explanation
# Kode Anda di sini
Explanation: Problem 2.1 (2 points)
Use the UCS (uniform-cost search) algorithm from networkx to find a route from London to Edinburgh.
End of explanation
def heuristic(source, target):
pass # Kode Anda di sini
# Kode Anda di sini
Explanation: Problem 2.2 (4 points)
Use the A* algorithm from networkx to find a route from London to Edinburgh. Implement the heuristic function based on the provided heuristics variable.
End of explanation
# Kode Anda di sini
Explanation: Problem 2.3 (2 points)
What is the travel distance of the shortest route from London to Edinburgh found in problem 2.2?
End of explanation
from tqdm.notebook import trange
import gym
class Agent:
def __init__(self, env, algo="random", eps=0.2, eta=0.1, gamma=1):
self.env = env
self.s = env.reset() # inisialisasi state
self.q = np.zeros((env.observation_space.n, env.action_space.n)) # inisialisasi semua nilai pada matriks Q = 0
self.eps = eps # probabilitas eksplorasi
self.eta = eta # learning rate
self.gamma = gamma # discount factor
self.algo = algo
def update_q(self, s, a, r, s_, a_):
# Implementasikan SARSA pada bagian ini
self.q[s, a] = ... # Kode Anda di sini
def choose_action(self, s):
if self.algo == "random":
return self.env.action_space.sample()
elif self.algo == "sarsa":
... # Kode Anda di sini
else:
raise NotImplementedError()
def play(self):
a = self.choose_action(self.s)
state, reward, done, _ = self.env.step(a)
action = self.choose_action(state) # melihat aksi selanjutnya
self.update_q(self.s, a, reward, state, action)
self.s = state # state saat ini diperbarui
return done, reward
def reset(self):
self.s = self.env.reset()
Explanation: Problem 2.4 (2 points)
Are the results of problems 2.1 and 2.2 the same? Why?
Your answer here
3. Reinforcement Learning (10 points)
The game to be played this time is Frozen Lake. A loose translation of the documentation:
Winter has arrived. You and your friends were playing frisbee and you accidentally threw the frisbee disc into the middle of the lake. Most of the lake is frozen, but there are a few holes where the ice has melted. If you fall into a hole, you will plunge into very cold water. You have to retrieve the disc by crossing the lake. However, the ice you walk on is slippery, so you cannot always walk in the direction you intend. You are asked to find a strategy (policy) in the form of a safe path to the goal tile.
Lake map:
SFFF
FHFH
FFFH
HFFG
Problem 3.1 (4 points)
Inside the class already defined below:
1. Implement SARSA to update the Q values. (2 points)
1. Implement the $\epsilon$-greedy algorithm to choose actions. Hint: make use of np.random.random(). (2 points)
End of explanation
def simulate(algo, num_episodes=10000):
np.random.seed(101)
env = gym.make("FrozenLake-v0")
agent = Agent(env, algo)
utilities = []
for i in trange(num_episodes):
while True:
done, reward = agent.play()
if done:
utilities.append(reward)
agent.reset()
break
env.close()
return agent, utilities
# Kode Anda di sini
Explanation: Problem 3.2.1 (2 points)
Simulate the game with the random algorithm and with SARSA. Compare the average utilities obtained.
End of explanation
# Kode Anda di sini
Explanation: Problem 3.2.2 (2 points)
Plot how the utilities of the random and SARSA algorithms change, using a rolling mean over 100 episodes. What do you observe?
Hint: look up "pandas rolling mean".
End of explanation
# Kode Anda di sini
Explanation: Problem 3.3 (2 points)
Show the optimal policy for every state from the SARSA algorithm.
End of explanation |
1,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\title{myHDL Sawtooth Wave Generator based on the Phase Accumulation method}
\author{Steven K Armour}
\maketitle
This is a simple SawTooth wave generator based on the phase accumulation method inspired by George Pantazopoulos implementation SawWaveGen in http
Step1: Helper functions
Step2: Architecture Setup
Step3: Symbolic Derivation
Step5: myHDL Implementation
Step6: myHDL Testing
Step7: myHDL to Verilog | Python Code:
from myhdl import *
import pandas as pd
from myhdlpeek import Peeker
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
Explanation: \title{myHDL Sawtooth Wave Generator based on the Phase Accumulation method}
\author{Steven K Armour}
\maketitle
This is a simple SawTooth wave generator based on the phase accumulation method, inspired by George Pantazopoulos' SawWaveGen implementation at http://old.myhdl.org/doku.php/projects:dsx1000
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Libraries-used" data-toc-modified-id="Libraries-used-1"><span class="toc-item-num">1 </span>Libraries used</a></span></li><li><span><a href="#Helper-functions" data-toc-modified-id="Helper-functions-2"><span class="toc-item-num">2 </span>Helper functions</a></span></li><li><span><a href="#Architecture-Setup" data-toc-modified-id="Architecture-Setup-3"><span class="toc-item-num">3 </span>Architecture Setup</a></span></li><li><span><a href="#Symbolic--Derivation" data-toc-modified-id="Symbolic--Derivation-4"><span class="toc-item-num">4 </span>Symbolic Derivation</a></span></li><li><span><a href="#myHDL-Implementation" data-toc-modified-id="myHDL-Implementation-5"><span class="toc-item-num">5 </span>myHDL Implementation</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-6"><span class="toc-item-num">6 </span>myHDL Testing</a></span></li><li><span><a href="#myHDL-to-Verilog" data-toc-modified-id="myHDL-to-Verilog-7"><span class="toc-item-num">7 </span>myHDL to Verilog</a></span></li></ul></div>
Libraries used
End of explanation
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
Explanation: Helper functions
End of explanation
BitWidth=16
#the max in excluded in intbv
MaxV=int(2**(BitWidth-1)); MinV=-int(2**(BitWidth-1))
a=intbv(0)[BitWidth:]; a=a.signed()
len(a), a.min, MinV, a.max, MaxV
Explanation: Architecture Setup
End of explanation
t, T=symbols('t, T', real=True)
y=Function('y')(t)
yEq=Eq(y, (t/T)-floor(t/T)); yEq
ft, fc, W=symbols('f_t, f_c, W')
PhaseMax=(ft*2**W)//fc; PhaseMax
Targets={ft:440, W:BitWidth}; Targets[fc]=100e3
Targets
PM=PhaseMax.subs(Targets)
f'PhaseMax={PM}'
yN=lambdify((t, T), yEq.rhs, dummify=False)
TN=1/100e3; TN
tN=np.linspace(0, .1, PM//4)
fig, axBot=plt.subplots(ncols=1, nrows=1)
axTop=axBot.twiny()
axBot.plot(tN, yN(tN, TN))
axBot.set_xlabel('Time [s]')
axTop.plot(yN(tN, TN))
axTop.set_xlabel('n');
Explanation: Symbolic Derivation
End of explanation
@block
def SawToothGen(y, clk, rst, Freq, ClkFreq):
    """
    Inputs:
        clk (bool): system clock
        rst (bool): reset signal
    Outputs:
        y (2's comp): sawtooth wave output
    Parameters:
        Freq (float): target frequency
        ClkFreq (float): system clock frequency
    """
    # Register to store the phase; aka a counter
Phase=Signal(intbv(0)[BitWidth:])
# Make phase (Counter) limit
PhaseCeil=int((Freq*2**BitWidth)//ClkFreq)
@always(clk.posedge)
def logic():
if rst:
Phase.next=0
y.next=0
else:
if Phase==PhaseCeil-1:
y.next=0
Phase.next=0
else:
y.next=y+1
Phase.next=Phase+1
return instances()
Explanation: myHDL Implementation
End of explanation
Peeker.clear()
y=Signal(intbv(0)[BitWidth:]); Peeker(y, 'y')
#Phase=Signal(modbv(0, max=5)); Peeker(Phase, 'P')
clk, rst=[Signal(bool(0)) for _ in range(2)]
Peeker(clk, 'clk'); Peeker(rst, 'rst')
DUT=SawToothGen(y, clk, rst, 440, 100e3)
def SawToothGen_TB():
@always(delay(1)) ## delay in nano seconds
def clkGen():
clk.next = not clk
@instance
def Stimules():
for i in range(8*PM):
yield clk.posedge
for i in range(4):
if i <2:
rst.next=True
else:
rst.next=False
yield clk.posedge
raise StopSimulation
return instances()
sim=Simulation(DUT, SawToothGen_TB(), *Peeker.instances()).run()
#Peeker.to_wavedrom()
Simdata=Peeker.to_dataframe()
Simdata=Simdata[Simdata.clk!=0]
Simdata.reset_index(drop=True, inplace=True)
Simdata
Simdata.plot(y='y')
y=Simdata[Simdata.rst!=1]['y']
fy=np.fft.fftshift(np.fft.fft(y, len(y)))
fs=np.fft.fftshift(np.fft.fftfreq(len(y)))  # shifted so the bins line up with fy
n=np.where(fs>=0)
plt.plot(fs[n], np.abs(fy[n]))
plt.twinx()
plt.plot(fs[n], np.angle(fy[n], deg=True), color='g', alpha=.3)
f=fs.max()*100e3; f
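# Rough check (added for illustration, assuming the 100 kHz sample clock used
# above): convert the strongest non-DC bin to Hz to estimate the fundamental
# of the generated waveform.
pos = np.where(fs > 0)[0]                # skip the DC bin
peak = pos[np.argmax(np.abs(fy[pos]))]
fs[peak]*100e3                           # estimated fundamental in Hz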
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('SawTooth');
Explanation: myHDL to Verilog
End of explanation |
1,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Now let's look at some classification methods.
Nearest Neighbor
Step2: Exercise
Step3: Exercise
Step4: Now write a loop that does this using 100 different randomly generated datasets, and plot the mean across datasets. This will take a couple of minutes to run.
Step5: Linear discriminant analysis
Step6: Logistic regression
Step7: Support vector machines
Step8: Exercise
Step9: Exercise
Step10: Plot the boundary for the classifier with the best performance | Python Code:
# adapted from http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html#example-neighbors-plot-classification-py
n_neighbors = 30
# step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00'])
clf = sklearn.neighbors.KNeighborsClassifier(n_neighbors, weights='uniform')
clf.fit(d, cl)
def plot_cls_with_decision_surface(d,cl,clf,h = .25 ):
    """
    Plot the decision boundary. For that, we will assign a color to each
    point in the mesh [x_min, x_max]x[y_min, y_max].
    h = step size in the grid
    """
x_min, x_max = d[:, 0].min() - 1, d[:, 0].max() + 1
y_min, y_max = d[:, 1].min() - 1, d[:, 1].max() + 1
xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, h),
numpy.arange(y_min, y_max, h))
Z = clf.predict(numpy.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(d[:, 0], d[:, 1], c=cl, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plot_cls_with_decision_surface(d,cl,clf)
Explanation: Now let's look at some classification methods.
Nearest Neighbor
End of explanation
def classify(d,cl,clf,cv):
pred=numpy.zeros(n)
for train,test in cv:
clf.fit(d[train,:],cl[train])
pred[test]=clf.predict(d[test,:])
return sklearn.metrics.accuracy_score(cl,pred),sklearn.metrics.confusion_matrix(cl,pred)
clf=sklearn.neighbors.KNeighborsClassifier(n_neighbors, weights='uniform')
# use stratified k-fold crossvalidation, which keeps the proportion of classes roughly
# equal across folds
cv=sklearn.cross_validation.StratifiedKFold(cl, 8)
acc,confusion=classify(d,cl,clf,cv)
print acc
print confusion
Explanation: Exercise: Change the number of nearest neighbors and see how it changes the surface.
Now let's write a function to perform cross-validation and compute prediction accuracy.
End of explanation
accuracy_knn=numpy.zeros(30)
for i in range(1,31):
clf=sklearn.neighbors.KNeighborsClassifier(i, weights='uniform')
accuracy_knn[i-1],_=classify(d,cl,clf,cv)
plt.plot(range(1,31),accuracy_knn)
Explanation: Exercise: Loop through different levels of n_neighbors (from 1 to 30) and compute the accuracy.
End of explanation
accuracy_knn=numpy.zeros((100,30))
for x in range(100):
ds_cl,ds_x=make_class_data(multiplier=[1.1,1.1],N=n)
ds_cv=sklearn.cross_validation.StratifiedKFold(ds_cl, 8)
for i in range(1,31):
clf=sklearn.neighbors.KNeighborsClassifier(i, weights='uniform')
accuracy_knn[x,i-1],_=classify(ds_x,ds_cl,clf,ds_cv)
plt.plot(range(1,31),numpy.mean(accuracy_knn,0))
plt.xlabel('number of nearest neighbors')
plt.ylabel('accuracy')
Explanation: Now write a loop that does this using 100 different randomly generated datasets, and plot the mean across datasets. This will take a couple of minutes to run.
End of explanation
clf=sklearn.lda.LDA()
cv=sklearn.cross_validation.LeaveOneOut(n)
acc,confusion=classify(d,cl,clf,cv)
print acc
print confusion
plot_cls_with_decision_surface(d,cl,clf)
Explanation: Linear discriminant analysis
End of explanation
clf=sklearn.linear_model.LogisticRegression(C=0.5)
acc,confusion=classify(d,cl,clf,cv)
print acc
print confusion
plot_cls_with_decision_surface(d,cl,clf)
Explanation: Logistic regression
End of explanation
clf=sklearn.svm.SVC(kernel='linear')
acc,confusion=classify(d,cl,clf,cv)
print acc
print confusion
plot_cls_with_decision_surface(d,cl,clf)
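# Added illustration: with a linear kernel the fitted SVM exposes its
# separating hyperplane directly, which is a handy cross-check on the
# decision surface plotted above (clf was fitted inside classify).
clf.coef_, clf.intercept_, clf.n_support_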
Explanation: Support vector machines
End of explanation
clf=sklearn.svm.SVC(kernel='rbf')
acc,confusion=classify(d,cl,clf,cv)
print acc
print confusion
plot_cls_with_decision_surface(d,cl,clf)
Explanation: Exercise: Implement the example above using a nonlinear SVM with a radial basis kernel.
End of explanation
gammavals=numpy.arange(0.0,0.2,0.01)
accuracy_rbf=numpy.zeros(len(gammavals))
for i in range(len(gammavals)):
clf=sklearn.svm.SVC(kernel='rbf',gamma=gammavals[i])
accuracy_rbf[i],_=classify(d,cl,clf,cv)
plt.plot(gammavals,accuracy_rbf)
plt.xlabel('gamma')
plt.ylabel('accuracy')
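# Hedged sketch (added, not part of the original notebook): the principled way
# to pick gamma is an inner grid search evaluated inside the outer
# cross-validation folds. GridSearchCV lives in sklearn.grid_search in old
# releases and in sklearn.model_selection in current ones; only the inner
# search is sketched here.
try:
    from sklearn.model_selection import GridSearchCV
except ImportError:
    from sklearn.grid_search import GridSearchCV
inner_search = GridSearchCV(sklearn.svm.SVC(kernel='rbf'),
                            param_grid={'gamma': gammavals[1:]},
                            cv=4)
inner_search.fit(d, cl)
inner_search.best_params_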
Explanation: Exercise: Try the RBF SVC using a range of values for the gamma parameter.
NOTE: For real data analysis, you cannot determine the best value of gamma this way, because you would be peeking at the test data which will make your results overly optimistic. Instead, you would need to use nested cross-validation loops; we will come to this later
End of explanation
maxgamma=gammavals[numpy.where(accuracy_rbf==numpy.max(accuracy_rbf))]
if len(maxgamma)>1:
maxgamma=maxgamma[0]
print 'Best gamma:', maxgamma
clf=sklearn.svm.SVC(kernel='rbf',gamma=maxgamma)
acc,_=classify(d,cl,clf,cv)
print 'Accuracy:',acc
plot_cls_with_decision_surface(d,cl,clf)
Explanation: Plot the boundary for the classifier with the best performance
End of explanation |
1,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous training pipeline with Kubeflow Pipeline and AI Platform
Learning Objectives
Step6: NOTE
Step7: The custom components execute in a container image defined in base_image/Dockerfile.
Step8: The training step in the pipeline employes the AI Platform Training component to schedule a AI Platform Training job in a custom training container. The custom training image is defined in trainer_image/Dockerfile.
Step9: Building and deploying the pipeline
Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also refered to as a pipeline package. The runtime format is based on Argo Workflow, which is expressed in YAML.
Configure environment settings
Update the below constants with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default.
ENDPOINT - set the ENDPOINT constant to the endpoint to your AI Platform Pipelines instance. Then endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.
Open the SETTINGS for your instance
Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD section of the SETTINGS window.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
Step10: HINT
Step11: Build the trainer image
Step12: Note
Step13: Build the base image for custom components
Step14: Compile the pipeline
You can compile the DSL using an API from the KFP SDK or using the KFP compiler.
To compile the pipeline DSL using the KFP compiler.
Set the pipeline's compile time settings
The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting KFP. If you want to use the user-gcp-sa service account you change the value of USE_KFP_SA to True.
Note that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.
Step15: Use the CLI compiler to compile the pipeline
Exercise
Compile the covertype_training_pipeline.py with the dsl-compile command line
Step16: The result is the covertype_training_pipeline.yaml file.
Step17: Deploy the pipeline package
Exercise
Upload the pipeline to the Kubeflow cluster using the kfp command line
Step18: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.
List the pipelines in AI Platform Pipelines
Step19: Submit a run
Find the ID of the covertype_continuous_training pipeline you uploaded in the previous step and update the value of PIPELINE_ID .
Step20: Exercise
Run the pipeline using the kfp command line. Here are some of the variable
you will have to use to pass to the pipeline | Python Code:
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
Explanation: Continuous training pipeline with Kubeflow Pipeline and AI Platform
Learning Objectives:
1. Learn how to use Kubeflow Pipeline (KFP) pre-built components (BigQuery, AI Platform training and predictions)
1. Learn how to use KFP lightweight python components
1. Learn how to build a KFP with these components
1. Learn how to compile, upload, and run a KFP with the command line
In this lab, you will build, deploy, and run a KFP pipeline that orchestrates BigQuery and AI Platform services to train, tune, and deploy a scikit-learn model.
Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the covertype_training_pipeline.py file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
End of explanation
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
    "hyperparameters":  {
        "goal": "MAXIMIZE",
        "maxTrials": 6,
        "maxParallelTrials": 3,
        "hyperparameterMetricTag": "accuracy",
        "enableTrialEarlyStopping": True,
        "params": [
            {
                "parameterName": "max_iter",
                "type": "DISCRETE",
                "discreteValues": [500, 1000]
            },
            {
                "parameterName": "alpha",
                "type": "DOUBLE",
                "minValue": 0.0001,
                "maxValue": 0.001,
                "scaleType": "UNIT_LINEAR_SCALE"
            }
        ]
    }
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
    """Prepares the data sampling query."""

    sampling_query_template = """
        SELECT *
        FROM
            `{{ source_table }}` AS cover
        WHERE
            MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
        """
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = # TO DO: Complete the command
bigquery_query_op = # TO DO: Use the pre-build bigquery/query component
mlengine_train_op = # TO DO: Use the pre-build ml_engine/train
mlengine_deploy_op = # TO DO: Use the pre-build ml_engine/deploy component
retrieve_best_run_op = # TO DO: Package the retrieve_best_run function into a lightweight component
evaluate_model_op = # TO DO: Package the evaluate_model function into a lightweight component
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
    description='The pipeline training and deploying the Covertype classifier'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
    """Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = # TODO - use the bigquery_query_op
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = # TO DO: Use the bigquery_query_op
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = # TO DO: Use the mlengine_train_op
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = # TO DO: Use the mlengine_train_op
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
Explanation: NOTE: Because there are no environment variables set, therefore covertype_training_pipeline.py file is missing; we will create it in the next step.
The pipeline uses a mix of custom and pre-build components.
Pre-build components. The pipeline uses the following pre-build components that are included with the KFP distribution:
BigQuery query component
AI Platform Training component
AI Platform Deploy component
Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-build components. The components are implemented using the KFP SDK's Lightweight Python Components mechanism. The code for the components is in the helper_components.py file:
Retrieve Best Run. This component retrieves a tuning metric and hyperparameter values for the best run of a AI Platform Training hyperparameter tuning job.
Evaluate Model. This component evaluates a sklearn trained model using a provided metric and a testing dataset.
Exercise
Complete TO DOs the pipeline file below.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation
!cat base_image/Dockerfile
Explanation: The custom components execute in a container image defined in base_image/Dockerfile.
End of explanation
!cat trainer_image/Dockerfile
Explanation: The training step in the pipeline employes the AI Platform Training component to schedule a AI Platform Training job in a custom training container. The custom training image is defined in trainer_image/Dockerfile.
End of explanation
!gsutil ls
Explanation: Building and deploying the pipeline
Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also refered to as a pipeline package. The runtime format is based on Argo Workflow, which is expressed in YAML.
Configure environment settings
Update the below constants with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default.
ENDPOINT - set the ENDPOINT constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.
Open the SETTINGS for your instance
Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
End of explanation
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT
ARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
Explanation: HINT:
For ENDPOINT, use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.
For ARTIFACT_STORE_URI, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'
End of explanation
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
Explanation: Build the trainer image
End of explanation
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
Explanation: Note: Please ignore any incompatibility ERROR that may appear for the visions package, as it will not affect the lab's functionality.
End of explanation
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
Explanation: Build the base image for custom components
End of explanation
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
Explanation: Compile the pipeline
You can compile the DSL using an API from the KFP SDK or using the KFP compiler.
To compile the pipeline DSL using the KFP compiler.
Set the pipeline's compile time settings
The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting KFP. If you want to use the user-gcp-sa service account you change the value of USE_KFP_SA to True.
Note that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.
End of explanation
# TO DO: Your code goes here
Explanation: Use the CLI compiler to compile the pipeline
Exercise
Compile the covertype_training_pipeline.py with the dsl-compile command line:
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation
!head covertype_training_pipeline.yaml
Explanation: The result is the covertype_training_pipeline.yaml file.
End of explanation
PIPELINE_NAME='covertype_continuous_training'
# TO DO: Your code goes here
Explanation: Deploy the pipeline package
Exercise
Upload the pipeline to the Kubeflow cluster using the kfp command line:
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation
!kfp --endpoint $ENDPOINT pipeline list
Explanation: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.
List the pipelines in AI Platform Pipelines
End of explanation
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84' # TO DO: REPLACE WITH YOUR PIPELINE ID
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
Explanation: Submit a run
Find the ID of the covertype_continuous_training pipeline you uploaded in the previous step and update the value of PIPELINE_ID .
End of explanation
# TO DO: Your code goes here
Explanation: Exercise
Run the pipeline using the kfp command line. Here are some of the variable
you will have to use to pass to the pipeline:
EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command
RUN_ID is the name of the run. You can use an arbitrary name
PIPELINE_ID is the id of your pipeline. Use the value retrieved by the kfp pipeline list command
GCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the staging folder in your artifact store.
REGION is a compute region for AI Platform Training and Prediction.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.
</ql-infobox>
End of explanation |
1,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing the Keras Functional API
Learning Objectives
1. Understand embeddings and how to create them with the feature column API
1. Understand Deep and Wide models and when to use them
1. Understand the Keras functional API and how to build a deep and wide model with it
Introduction
In the last notebook, we learned about the Keras Sequential API. The Keras Functional API provides an alternate way of building models which is more flexible. With the Functional API, we can build models with more complex topologies, multiple input or output layers, shared layers or non-sequential data flows (e.g. residual layers).
In this notebook we'll use what we learned about feature columns to build a Wide & Deep model. Recall, that the idea behind Wide & Deep models is to join the two methods of learning through memorization and generalization by making a wide linear model and a deep learning model to accommodate both. You can have a look at the original research paper here
Step1: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
Step2: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook. For this lab we will also include some additional engineered features in our model. In particular, we will compute the difference in latitude and longitude, as well as the Euclidean distance between the pick-up and drop-off locations. We can accomplish this by adding these new features to the features dictionary with the function add_engineered_features below.
Note that we include a call to this function when collecting our features dict and labels in the features_and_labels function below as well.
Step3: Feature columns for Wide and Deep model
For the Wide columns, we will create feature columns of crossed features. To do this, we'll create a collection of Tensorflow feature columns to pass to the tf.feature_column.crossed_column constructor. The Deep columns will consist of numeric columns and the embedding columns we want to create.
Exercise. In the cell below, create feature columns for our wide-and-deep model. You'll need to build
1. bucketized columns using tf.feature_column.bucketized_column for the pickup and dropoff latitude and longitude,
2. crossed columns using tf.feature_column.crossed_column for those bucketized columns, and
3. embedding columns using tf.feature_column.embedding_column for the crossed columns.
Step4: Gather list of feature columns
Next we gather the list of wide and deep feature columns we'll pass to our Wide & Deep model in Tensorflow. Recall, wide columns are sparse, have linear relationship with the output while continuous columns are deep, have a complex relationship with the output. We will use our previously bucketized columns to collect crossed feature columns and sparse feature columns for our wide columns, and embedding feature columns and numeric features columns for the deep columns.
Exercise. Collect the wide and deep columns into two separate lists. You'll have two lists
Step5: Build a Wide and Deep model in Keras
To build a wide-and-deep network, we connect the sparse (i.e. wide) features directly to the output node, but pass the dense (i.e. deep) features through a set of fully connected layers. Here’s that model architecture looks using the Functional API.
First, we'll create our input columns using tf.keras.layers.Input.
Step6: Then, we'll define our custom RMSE evaluation metric and build our wide and deep model.
Exercise. Complete the code in the function build_model below so that it returns a compiled Keras model. The argument dnn_hidden_units should represent the number of units in each layer of your network. Use the Functional API to build a wide-and-deep model. Use the deep_columns you created above to build the deep layers and the wide_columns to create the wide layers. Once you have the wide and deep components, you will combine them to feed to a final fully connected layer.
Step7: Next, we can call the build_model to create the model. Here we'll have two hidden layers, each with 10 neurons, for the deep part of our model. We can also use plot_model to see a diagram of the model we've created.
Step8: Next, we'll set up our training variables, create our datasets for training and validation, and train our model.
(We refer you to the blog post ML Design Pattern #3
Step9: Just as before, we can examine the history to see how the RMSE changes through training on the train set and validation set. | Python Code:
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import feature_column as fc
from tensorflow import keras
from tensorflow.keras import Model
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures, Input, concatenate
print(tf.__version__)
%matplotlib inline
Explanation: Introducing the Keras Functional API
Learning Objectives
1. Understand embeddings and how to create them with the feature column API
1. Understand Deep and Wide models and when to use them
1. Understand the Keras functional API and how to build a deep and wide model with it
Introduction
In the last notebook, we learned about the Keras Sequential API. The Keras Functional API provides an alternate way of building models which is more flexible. With the Functional API, we can build models with more complex topologies, multiple input or output layers, shared layers or non-sequential data flows (e.g. residual layers).
In this notebook we'll use what we learned about feature columns to build a Wide & Deep model. Recall, that the idea behind Wide & Deep models is to join the two methods of learning through memorization and generalization by making a wide linear model and a deep learning model to accommodate both. You can have a look at the original research paper here: Wide & Deep Learning for Recommender Systems.
<img src='assets/wide_deep.png' width='80%'>
<sup>(image: https://ai.googleblog.com/2016/06/wide-deep-learning-better-together-with.html)</sup>
The Wide part of the model is associated with the memory element. In this case, we train a linear model with a wide set of crossed features and learn the correlation of this related data with the assigned label. The Deep part of the model is associated with the generalization element where we use embedding vectors for features. The best embeddings are then learned through the training process. While both of these methods can work well alone, Wide & Deep models excel by combining these techniques together.
Start by importing the necessary libraries for this lab.
End of explanation
!ls -l ../data/*.csv
Explanation: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
End of explanation
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
# prefetch one batch so the input pipeline overlaps with training (tf.data.AUTOTUNE also works)
dataset = dataset.prefetch(1)
return dataset
Explanation: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook. For this lab we will also include some additional engineered features in our model. In particular, we will compute the difference in latitude and longitude, as well as the Euclidean distance between the pick-up and drop-off locations. We can accomplish this by adding these new features to the features dictionary with the function add_engineered_features below.
Note that we include a call to this function when collecting our features dict and labels in the features_and_labels function below as well.
End of explanation
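The add_engineered_features helper mentioned above is not shown in this cell; the sketch below is only an illustration of what it could look like. The feature names latdiff, londiff, and euclidean, and where the function gets called, are assumptions rather than the lab's reference code.
# Illustrative sketch only -- not the lab's reference implementation.
def add_engineered_features(features):
    # Differences in latitude and longitude between pickup and dropoff
    features["latdiff"] = features["pickup_latitude"] - features["dropoff_latitude"]
    features["londiff"] = features["pickup_longitude"] - features["dropoff_longitude"]
    # Euclidean distance between the two locations (in degrees)
    features["euclidean"] = tf.math.sqrt(
        features["latdiff"] ** 2 + features["londiff"] ** 2)
    return features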
# 1. Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start=38.0, stop=42.0, num=NBUCKETS).tolist()
lonbuckets = np.linspace(start=-76.0, stop=-72.0, num=NBUCKETS).tolist()
fc_bucketized_plat = # TODO: Your code goes here.
fc_bucketized_plon = # TODO: Your code goes here.
fc_bucketized_dlat = # TODO: Your code goes here.
fc_bucketized_dlon = # TODO: Your code goes here.
# 2. Cross features for locations
fc_crossed_dloc = # TODO: Your code goes here.
fc_crossed_ploc = # TODO: Your code goes here.
fc_crossed_pd_pair = # TODO: Your code goes here.
# 3. Create embedding columns for the crossed columns
fc_pd_pair = # TODO: Your code goes here.
fc_dloc = # TODO: Your code goes here.
fc_ploc = # TODO: Your code goes here.
Explanation: Feature columns for Wide and Deep model
For the Wide columns, we will create feature columns of crossed features. To do this, we'll create a collection of Tensorflow feature columns to pass to the tf.feature_column.crossed_column constructor. The Deep columns will consist of numeric columns and the embedding columns we want to create.
Exercise. In the cell below, create feature columns for our wide-and-deep model. You'll need to build
1. bucketized columns using tf.feature_column.bucketized_column for the pickup and dropoff latitude and longitude,
2. crossed columns using tf.feature_column.crossed_column for those bucketized columns, and
3. embedding columns using tf.feature_column.embedding_column for the crossed columns.
End of explanation
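One possible way to fill in the TODOs above is sketched below. It is an illustration rather than the official solution; in particular, the hash bucket sizes and the embedding dimension of 3 are assumptions.
# Sketch of one possible answer -- check against the lab's reference solution.
fc_bucketized_plat = fc.bucketized_column(
    source_column=fc.numeric_column("pickup_latitude"), boundaries=latbuckets)
fc_bucketized_plon = fc.bucketized_column(
    source_column=fc.numeric_column("pickup_longitude"), boundaries=lonbuckets)
fc_bucketized_dlat = fc.bucketized_column(
    source_column=fc.numeric_column("dropoff_latitude"), boundaries=latbuckets)
fc_bucketized_dlon = fc.bucketized_column(
    source_column=fc.numeric_column("dropoff_longitude"), boundaries=lonbuckets)
# Crossed columns for the pickup location, the dropoff location, and the pair
fc_crossed_dloc = fc.crossed_column(
    [fc_bucketized_dlat, fc_bucketized_dlon], hash_bucket_size=NBUCKETS * NBUCKETS)
fc_crossed_ploc = fc.crossed_column(
    [fc_bucketized_plat, fc_bucketized_plon], hash_bucket_size=NBUCKETS * NBUCKETS)
fc_crossed_pd_pair = fc.crossed_column(
    [fc_crossed_dloc, fc_crossed_ploc], hash_bucket_size=NBUCKETS ** 4)
# Embedding columns learned on top of the crossed columns
fc_pd_pair = fc.embedding_column(categorical_column=fc_crossed_pd_pair, dimension=3)
fc_dloc = fc.embedding_column(categorical_column=fc_crossed_dloc, dimension=3)
fc_ploc = fc.embedding_column(categorical_column=fc_crossed_ploc, dimension=3)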
# TODO 2
wide_columns = [
# One-hot encoded feature crosses
# TODO: Your code goes here.
]
deep_columns = [
# Embedding_columns
# TODO: Your code goes here.
# Numeric columns
# TODO: Your code goes here.
]
Explanation: Gather list of feature columns
Next we gather the list of wide and deep feature columns we'll pass to our Wide & Deep model in TensorFlow. Recall that wide columns are sparse and have a linear relationship with the output, while continuous (deep) columns can capture a more complex relationship with the output. We will use our previously bucketized columns to collect crossed feature columns and sparse feature columns for our wide columns, and embedding feature columns and numeric feature columns for the deep columns.
Exercise. Collect the wide and deep columns into two separate lists. You'll have two lists: One called wide_columns containing the one-hot encoded features from the crossed features and one called deep_columns which contains numeric and embedding feature columns.
End of explanation
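A sketch of one possible grouping is shown below, reusing the columns from the previous exercise; indicator columns one-hot encode the crossed features so a Keras DenseFeatures layer can consume them. Treat it as an illustration, not the canonical answer.
# Sketch only -- one way to group the wide and deep columns.
wide_columns = [
    # One-hot encoded feature crosses
    fc.indicator_column(fc_crossed_dloc),
    fc.indicator_column(fc_crossed_ploc),
    fc.indicator_column(fc_crossed_pd_pair),
]
deep_columns = [
    # Embedding columns for the crossed features
    fc_pd_pair,
    fc_dloc,
    fc_ploc,
    # Raw numeric columns
    fc.numeric_column("pickup_latitude"),
    fc.numeric_column("pickup_longitude"),
    fc.numeric_column("dropoff_latitude"),
    fc.numeric_column("dropoff_longitude"),
]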
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
inputs = {
colname: Input(name=colname, shape=(), dtype="float32")
for colname in INPUT_COLS
}
Explanation: Build a Wide and Deep model in Keras
To build a wide-and-deep network, we connect the sparse (i.e. wide) features directly to the output node, but pass the dense (i.e. deep) features through a set of fully connected layers. Here's what that model architecture looks like using the Functional API.
First, we'll create our input columns using tf.keras.layers.Input.
End of explanation
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_model(dnn_hidden_units):
# Create the deep part of model
deep = # TODO: Your code goes here.
# Create the wide part of model
wide = # TODO: Your code goes here.
# Combine deep and wide parts of the model
combined = # TODO: Your code goes here.
# Map the combined outputs into a single prediction value
output = # TODO: Your code goes here.
# Finalize the model
model = # TODO: Your code goes here.
# Compile the keras model
model.compile(
# TODO: Your code goes here.
)
return model
Explanation: Then, we'll define our custom RMSE evaluation metric and build our wide and deep model.
Exercise. Complete the code in the function build_model below so that it returns a compiled Keras model. The argument dnn_hidden_units should represent the number of units in each layer of your network. Use the Functional API to build a wide-and-deep model. Use the deep_columns you created above to build the deep layers and the wide_columns to create the wide layers. Once you have the wide and deep components, you will combine them to feed to a final fully connected layer.
End of explanation
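For reference, here is a hedged sketch of how the pieces could be wired together with the Functional API. It is named build_model_sketch to make clear it is an illustration; the choice of DenseFeatures, a linear output layer, and the Adam/MSE compile settings are assumptions consistent with the imports above, not the graded solution.
# Sketch only -- one possible way to assemble the wide-and-deep model.
def build_model_sketch(dnn_hidden_units):
    # Deep path: embeddings + numeric columns through a stack of Dense layers
    deep = DenseFeatures(deep_columns, name="deep_inputs")(inputs)
    for num_nodes in dnn_hidden_units:
        deep = Dense(num_nodes, activation="relu")(deep)
    # Wide path: sparse crossed features connected directly towards the output
    wide = DenseFeatures(wide_columns, name="wide_inputs")(inputs)
    # Combine both paths and map them to a single fare prediction
    both = concatenate(inputs=[deep, wide], name="both")
    output = Dense(1, activation="linear", name="fare")(both)
    model = Model(inputs=list(inputs.values()), outputs=output)
    model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
    return model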
HIDDEN_UNITS = [10, 10]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
tf.keras.utils.plot_model(model, show_shapes=False, rankdir="LR")
Explanation: Next, we can call the build_model to create the model. Here we'll have two hidden layers, each with 10 neurons, for the deep part of our model. We can also use plot_model to see a diagram of the model we've created.
End of explanation
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=BATCH_SIZE, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
OUTDIR = "./taxi_trained"
shutil.rmtree(path=OUTDIR, ignore_errors=True) # start fresh each time
history = model.fit(
x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(OUTDIR)],
)
Explanation: Next, we'll set up our training variables, create our datasets for training and validation, and train our model.
(We refer you to the blog post ML Design Pattern #3: Virtual Epochs for further details on why we express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.)
End of explanation
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
Explanation: Just as before, we can examine the history to see how the RMSE changes through training on the train set and validation set.
End of explanation |
1,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Proteins example recreated in Python from
https://rstudio-pubs-static.s3.amazonaws.com/33876_1d7794d9a86647ca90c4f182df93f0e8.html and http://nbviewer.jupyter.org/github/OxanaSachenkova/hclust-python/blob/master/hclust.ipynb
Step1: note numpy also has recfromcsv() and pandas can read_csv, with pandas DF.values giving a numpy array
Step2: Samples clustering using scipy
First, we'll implement the clustering using scipy modules
Step3: the fastcluster module http | Python Code:
import numpy as np
from numpy import genfromtxt
data = genfromtxt('http://www.biz.uiowa.edu/faculty/jledolter/DataMining/protein.csv',delimiter=',',names=True,dtype=float)
Explanation: Proteins example recreated in python from
https://rstudio-pubs-static.s3.amazonaws.com/33876_1d7794d9a86647ca90c4f182df93f0e8.html and http://nbviewer.jupyter.org/github/OxanaSachenkova/hclust-python/blob/master/hclust.ipynb
End of explanation
len(data)
len(data.dtype.names)
data.dtype.names
type(data)
data
data_array = data.view((float, len(data.dtype.names)))  # plain float: np.float was removed from newer NumPy releases
data_array
data_array = data_array.transpose()
print(data_array)
data_array[1:10]
Explanation: note numpy also has recfromcsv() and pandas can read_csv, with pandas DF.values giving a numpy array
End of explanation
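For comparison, a roughly equivalent load with pandas (using the same URL) might look like the sketch below; the column slicing to drop the Country field is an assumption about the file layout.
# Illustrative pandas alternative to genfromtxt
import pandas as pd
protein_df = pd.read_csv('http://www.biz.uiowa.edu/faculty/jledolter/DataMining/protein.csv')
all_values = protein_df.values                  # numpy array, first column holds country names
numeric_values = protein_df.iloc[:, 1:].values  # numeric columns only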
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram
data_dist = pdist(data_array[1:10]) # computing the distance
data_link = linkage(data_dist) # computing the linkage
dendrogram(data_link,labels=data.dtype.names)
plt.xlabel('Samples')
plt.ylabel('Distance')
plt.suptitle('Samples clustering', fontweight='bold', fontsize=14);
plt.show()
# Compute and plot first dendrogram.
fig = plt.figure(figsize=(8,8))
# x ywidth height
ax1 = fig.add_axes([0.05,0.1,0.2,0.6])
Y = linkage(data_dist, method='single')
Z1 = dendrogram(Y, orientation='right',labels=data.dtype.names) # adding/removing the axes
ax1.set_xticks([])
# Compute and plot second dendrogram.
ax2 = fig.add_axes([0.3,0.71,0.6,0.2])
Z2 = dendrogram(Y)
ax2.set_xticks([])
ax2.set_yticks([])
#Compute and plot the heatmap
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
D = squareform(data_dist)
D = D[idx1,:]
D = D[:,idx2]
im = axmatrix.matshow(D, aspect='auto', origin='lower', cmap=plt.cm.YlGnBu)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([0.91,0.1,0.02,0.6])
plt.colorbar(im, cax=axcolor)
plt.show()
Explanation: Samples clustering using scipy
First, we'll implement the clustering using scipy modules
End of explanation
! pip install fastcluster
from fastcluster import *
%timeit linkage(data_array[1:10], method='single', metric='euclidean', preserve_input=True)
data_link = linkage(data_array[1:10], method='single', metric='euclidean', preserve_input=True)  # assignments inside %timeit do not persist, so recompute once here
dendrogram(data_link,labels=data.dtype.names)
plt.xlabel('Samples')
plt.ylabel('Distance')
plt.suptitle('Samples clustering', fontweight='bold', fontsize=14);
plt.show()
Explanation: the fastcluster module http://math.stanford.edu/~muellner/fastcluster.html?section=0
End of explanation |
1,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Models
Applying a statistical model to a dataset means interpreting it as a set of realizations of a randomized experiment. This makes it possible to associate the dataset with a probability distribution, and then use the toolbox of statistical techniques to run automated analyses. In this notebook, we will use Bayes' Theorem to run several analyses on our dataset.
Objectives
By the end of this iteration, the student will be able to
Step1: Bayes' Theorem
Bayes' Theorem, well known in statistics, states that
Step2: We can check the stability of the model for different training-set sizes in a way similar to what we did in the KNN case
Step3: Maximum A Posteriori (MAP) Classifier
Although the ML classifier is quite relevant in many applications, the prior probabilities may matter in certain problems. When the result for a population is more important than the result for each individual, the Bayesian criterion is more appropriate. One possible problem is estimating the number of students who will fail a course based on their academic records
Step4: The MAP criterion minimizes the theoretical error probability of the estimator. Although this is a relevant result, it is also important to keep in mind that the estimation of the probabilities involved may not always be optimal.
Compared with the ML estimator, the MAP estimator can reach better theoretical results. At the same time, it has more parameters to be estimated, which means it needs more data to be trained properly. In short, the two estimators are suited to slightly different applications.
Comparing ML and MAP
At this point, it is important to understand how to compare two classification algorithms. We will run the Monte Carlo test procedure again for the MAP algorithm, and then show its performance curve next to the ML curve, as follows.
Step5: Note that, although MAP has a theoretical possibility of achieving a lower error than ML, its average error is quite similar. The error variance also behaves similarly, increasing as the training set grows. This variance does not come from a degradation of the model, but rather from the shrinking of the test set
Step6: We can see that the unsupervised training, because it uses the whole dataset for training/testing, shows no performance fluctuations. At the same time, we cannot say that this model generalizes to other points, since it was trained and tested on the same dataset. Non-generalization, in this case, is not a big problem, since the problem is restricted to the database we have. Here, we clustered the data in our set with the GMM model and then manually interpreted the results according to our prior knowledge.
In any case, we can show the figure again | Python Code:
# Inicializacao
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
# Abrindo conjunto de dados
import csv
with open("biometria.csv", 'rb') as f:
dados = list(csv.reader(f))
rotulos_volei = [d[0] for d in dados[1:-1] if d[0] == 'V']  # compare strings with ==, not 'is'
rotulos_futebol = [d[0] for d in dados[1:-1] if d[0] == 'F']
altura_volei = [[float(d[1])] for d in dados[1:-1] if d[0] == 'V']
altura_futebol = [[float(d[1])] for d in dados[1:-1] if d[0] == 'F']
peso_volei = [[float(d[2])] for d in dados[1:-1] if d[0] == 'V']
peso_futebol = [[float(d[2])] for d in dados[1:-1] if d[0] == 'F']
Explanation: Bayesian Models
Applying a statistical model to a dataset means interpreting it as a set of realizations of a randomized experiment. This makes it possible to associate the dataset with a probability distribution, and then use the toolbox of statistical techniques to run automated analyses. In this notebook, we will use Bayes' Theorem to run several analyses on our dataset.
Objectives
By the end of this iteration, the student will be able to:
* Understand the concept of a generative model
* Understand the concept of unsupervised learning
* Understand classification and clustering as computational intelligence problems
* Apply Gaussian mixture models in classification and clustering contexts
* Apply maximum likelihood (ML) and maximum a posteriori (MAP) models
End of explanation
from sklearn import mixture
from sklearn.cross_validation import train_test_split
def treinamento_GMM_ML(train_size=0.3, n_components=2):
# Separar dados adequadamente
dados_treino, dados_teste, rotulos_treino, rotulos_teste =\
train_test_split(altura_volei + altura_futebol, rotulos_volei + rotulos_futebol, train_size=train_size)
treino_futebol = [dados_treino[i] for i in xrange(len(dados_treino)) if rotulos_treino[i] == 'F']
treino_volei = [dados_treino[i] for i in xrange(len(dados_treino)) if rotulos_treino[i] == 'V']
# Especificar parametros da mistura
g1 = mixture.GMM(n_components=n_components)
g2 = mixture.GMM(n_components=n_components)
# Treinar modelo GMM
g1.fit(treino_futebol)
g2.fit(treino_volei)
# Executar modelos sobre conjunto de teste
p_futebol = g1.score(dados_teste)
p_volei = g2.score(dados_teste)
# Verificar qual modelo mais provavelmente gerou os dados de teste
x = []
for i in xrange(len(dados_teste)):
if p_futebol[i] > p_volei[i]:
x.append('F')
else:
x.append('V')
# Verificar quantidade de acertos
acertos = 0.0
for i in xrange(len(x)):
if x[i] == rotulos_teste[i]:
acertos += 1
acertos *= 100.0/float(len(x))
return acertos
print "Acertos:", treinamento_GMM_ML(), "%"
Explanation: Bayes' Theorem
Bayes' Theorem, well known in statistics, states that:
$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$
The derivation of the theorem is not as important as its interpretation. If $A$ is a (non-observable) label and $B$ is a set of properties of an element, then we have a very interesting case: the probability of observing label $A$ given that the element has the observed properties $B$ is determined by the probability of finding the observation $B$ given class $A$, by the probability of finding class $A$ in the universe set, and by the probability of finding the observation $B$ in the universe set. In other words:
$$\mbox{Posterior Expectation} = P(\mbox{Label | Observation}) \propto \mbox{Likelihood} \times \mbox{Prior Expectation}.$$
This means that, if we can estimate the distribution of the data $B$ within a class $A$ (that is, $P(B|A)$), we can estimate $P(A|B)$ as a measure of the strength of the hypothesis of assigning label $A$ to observation $B$. For the athletes case we have been working with, we have:
$$P(\mbox{Sport} | \mbox{Height, Weight}) \propto P(\mbox{Height, Weight}|\mbox{Sport})P(\mbox{Sport}).$$
Gaussian Mixtures
One possible way of estimating $P(\mbox{Height, Weight}|\mbox{Sport})$ is to use labeled data and a chosen distribution. A very common idea is to assume a Gaussian distribution with standard deviation $\sigma$ and mean $\mu$, such that:
$$G(x, \mu, \sigma) = \frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} (\frac{x-\mu}{\sigma})^2}.$$
A Gaussian can be an interesting model for many distributions, but not always for all of them. A more advanced model is the Gaussian mixture, which represents a distribution that is a weighted sum of $M$ Gaussian distributions:
$$S(x, \Theta) = \sum_{m=1}^M a_m G(x, \mu_m, \sigma_m),$$
where $\Theta$ represents the set of all model parameters and $\sum_{m=1}^M a_m = 1$.
The Gaussian Mixture Model (GMM) can be fit to a dataset with an algorithm called Expectation Maximization (EM). EM is an iterative algorithm that estimates new parameters $\Theta'$ from the model's current parameters and a set of unlabeled data. The new parameters are estimated so that $S(x, \Theta') > S(x, \Theta)$, that is, by maximizing the probability that the data were generated by the model.
Thus, we can estimate $P(B|A)$ by assuming a GMM-type generative model and then running the EM algorithm on training data.
Maximum Likelihood Decision
The Maximum Likelihood (ML) decision ignores the prior probability of each class. This kind of estimator is important in situations where the estimate of $P(\mbox{Label})$ is either unreliable or irrelevant. A possible example is an algorithm that checks, from data, whether a patient has a certain rare disease.
If the prior probability is taken into account, the false-negative rate will be very high, since only a very small fraction of the population is expected to have the disease. In many cases, an estimator with a lower error rate simply picks the most frequent class, regardless of other factors. However, that prior rate is not relevant for the patient in question: the probability refers to the probability that a random individual from the population has the disease, not to the probability associated with that specific individual.
The ML decision consists of estimating models of $P(\mbox{Observation | Label})$ for each class, and then taking the most likely class as the system's answer. Thus, we have:
End of explanation
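To make the formulas above concrete, here is a small, self-contained sketch (not part of the original notebook) that evaluates a single Gaussian $G(x, \mu, \sigma)$ and a two-component mixture $S(x, \Theta)$ at one point; the parameter values are arbitrary.
# Sketch: evaluating a Gaussian density and a 2-component mixture by hand
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture_pdf(x, weights, mus, sigmas):
    # The weights a_m must sum to 1
    return sum(a * gaussian_pdf(x, m, s) for a, m, s in zip(weights, mus, sigmas))

print(mixture_pdf(180.0, weights=[0.5, 0.5], mus=[175.0, 190.0], sigmas=[5.0, 5.0]))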
# Parametros para executar busca exaustiva
train_size_min = 0.35
train_size_max = 0.95
train_size_step = 0.05
# Numero de iteracoes para cada tamanho de conjunto de treino
n_iter = 100
# Listas que armazenarao os resultados
steps = []
medias = []
variancias = []
train_size_atual = train_size_min
while train_size_atual <= train_size_max: # para cada tamanho do conjunto de treino
acertos = []
for k in xrange(n_iter): # para cada iteracao do processo Monte Carlo
dados_treino, dados_teste, rotulos_treino, rotulos_teste =\
train_test_split(altura_volei + altura_futebol, rotulos_volei + rotulos_futebol, train_size=train_size_atual)
score = treinamento_GMM_ML(train_size=train_size_atual, n_components=2)
acertos.append(score)
steps.append(train_size_atual)
medias.append(np.mean(np.array(acertos)))
variancias.append(np.std(np.array(acertos)))
train_size_atual += train_size_step
plt.figure();
plt.errorbar(steps, medias, yerr=variancias);
plt.ylabel('Indice de acertos');
plt.xlabel('Tamanho do conjunto de treino');
Explanation: We can check the stability of the model for different training-set sizes in a way similar to what we did in the KNN case:
End of explanation
import math
def treinamento_GMM_MAP(train_size=0.3, n_components=2):
# Separar dados adequadamente
dados_treino, dados_teste, rotulos_treino, rotulos_teste =\
train_test_split(altura_volei + altura_futebol, rotulos_volei + rotulos_futebol, train_size=train_size)
treino_futebol = [dados_treino[i] for i in xrange(len(dados_treino)) if rotulos_treino[i] == 'F']
treino_volei = [dados_treino[i] for i in xrange(len(dados_treino)) if rotulos_treino[i] == 'V']
# Especificar parametros da mistura
g1 = mixture.GMM(n_components=n_components)
g2 = mixture.GMM(n_components=n_components)
# Treinar modelo GMM
g1.fit(treino_futebol)
g2.fit(treino_volei)
# Treino das probabilidades a priori
prior_futebol = len([rotulo for rotulo in rotulos_treino if rotulo == 'F']) / float(len(rotulos_treino))
prior_volei = len([rotulo for rotulo in rotulos_treino if rotulo == 'V']) / float(len(rotulos_treino))
# Executar modelos sobre conjunto de teste
p_futebol = g1.score(dados_teste) + math.log(prior_futebol)
p_volei = g2.score(dados_teste) + math.log(prior_volei)
# Verificar qual modelo mais provavelmente gerou os dados de teste
x = []
for i in xrange(len(dados_teste)):
if p_futebol[i] > p_volei[i]:
x.append('F')
else:
x.append('V')
# Verificar quantidade de acertos
acertos = 0.0
for i in xrange(len(x)):
if x[i] == rotulos_teste[i]:
acertos += 1
acertos *= 100.0/float(len(x))
return acertos
print "Acertos:", treinamento_GMM_MAP(), "%"
Explanation: Maximum A Posteriori (MAP) Classifier
Although the ML classifier is quite relevant in many applications, the prior probabilities may matter in certain problems. When the result for a population is more important than the result for each individual, the Bayesian criterion is more appropriate. One possible problem is estimating the number of students who will fail a course based on their academic records: in this case, the prior probability of a student failing (measured by the failure history of the course/instructor) is relevant.
In this case, we will estimate the posterior probability for each class, that is:
$$P_{\mbox{posterior}} = P(\mbox{Observation | Label}) \times P(\mbox{Label})$$
Note that the conditional probability - the likelihood - can be estimated with the EM algorithm, while the prior probability can be estimated by checking the frequency of each label in the training set.
We then choose the class with the highest posterior probability. Our estimation function can thus be modified to:
End of explanation
# Parametros para executar busca exaustiva
train_size_min = 0.35
train_size_max = 0.95
train_size_step = 0.05
# Numero de iteracoes para cada tamanho de conjunto de treino
n_iter = 100
# Listas que armazenarao os resultados
steps1 = []
medias1 = []
variancias1 = []
train_size_atual = train_size_min
while train_size_atual <= train_size_max: # para cada tamanho do conjunto de treino
acertos = []
for k in xrange(n_iter): # para cada iteracao do processo Monte Carlo
dados_treino, dados_teste, rotulos_treino, rotulos_teste =\
train_test_split(altura_volei + altura_futebol, rotulos_volei + rotulos_futebol, train_size=train_size_atual)
score = treinamento_GMM_ML(train_size=train_size_atual, n_components=2)
acertos.append(score)
steps1.append(train_size_atual)
medias1.append(np.mean(np.array(acertos)))
variancias1.append(np.std(np.array(acertos)))
train_size_atual += train_size_step
plt.figure();
plt.errorbar(steps, medias, yerr=variancias);
plt.errorbar(steps1, medias1, yerr=variancias1, color='red');
plt.ylabel('Indice de acertos');
plt.xlabel('Tamanho do conjunto de treino');
Explanation: The MAP criterion minimizes the theoretical error probability of the estimator. Although this is a relevant result, it is also important to keep in mind that the estimation of the probabilities involved may not always be optimal.
Compared with the ML estimator, the MAP estimator can reach better theoretical results. At the same time, it has more parameters to be estimated, which means it needs more data to be trained properly. In short, the two estimators are suited to slightly different applications.
Comparing ML and MAP
At this point, it is important to understand how to compare two classification algorithms. We will run the Monte Carlo test procedure again for the MAP algorithm, and then show its performance curve next to the ML curve, as follows.
End of explanation
def treinamento_GMM_nao_supervisionado():
# Especificar parametros da mistura
g = mixture.GMM(n_components=2)
# Treinar modelo GMM
g.fit(altura_volei + altura_futebol)
# Verificar qual Gaussiana corresponde a cada rótulo
if g.means_[0][0] > g.means_[1][0]:
rotulos = ('V', 'F')
else:
rotulos = ('F', 'V')
# Executar modelos sobre conjunto de teste
p = g.predict_proba(altura_volei + altura_futebol)
# Verificar qual modelo mais provavelmente gerou os dados de teste
x = []
for i in xrange(len(altura_volei + altura_futebol)):
if p[i][0] > p[i][1]:
x.append(rotulos[0])
else:
x.append(rotulos[1])
# Verificar quantidade de acertos
acertos = 0.0
for i in xrange(len(x)):
if x[i] == (rotulos_volei + rotulos_futebol)[i]:
acertos += 1
acertos *= 100.0/float(len(x))
return acertos
acertos_nao_supervisionados = treinamento_GMM_nao_supervisionado()
print "Acertos:", acertos_nao_supervisionados, "%"
Explanation: Note that, although MAP has a theoretical possibility of achieving a lower error than ML, its average error is quite similar. The error variance also behaves similarly, increasing as the training set grows. This variance does not come from a degradation of the model, but rather from the shrinking of the test set: as the number of test points decreases, the impact of the points that produce wrong classifications grows.
Clustering
In our classification algorithm, we have adopted the paradigm of training classifiers in a supervised way, that is, feeding them labeled examples. However, there are situations in which we want to give our system only unlabeled examples and let suitable groupings be chosen automatically. Labeled data can be hard to obtain or simply not available in sufficient quantity. Moreover, we may be interested precisely in the result of these groupings in order to analyze the data later.
In this case, we need a training algorithm that does not depend on labels supplied at training time. The EM algorithm, for example, works in an unsupervised way, since it tries to increase the probability that the data were generated by a model and uses no information other than the data's own feature vectors. For an unsupervised algorithm to work, its behavior -- or the model underlying the algorithm -- must somehow reflect the expected behavior of the data.
For our dataset, we could adopt a simple but promising idea. If all the points related to player height are modeled by a mixture of two Gaussians, it is likely that one Gaussian would concentrate on the volleyball players (taller) and the other on the soccer players (shorter). In that case, we could check which Gaussian most likely generated each data point and thus associate each point with a label (volleyball for the Gaussian with the higher mean, and soccer for the Gaussian with the lower mean).
End of explanation
plt.figure();
plt.errorbar(steps, medias, yerr=variancias);
plt.errorbar(steps1, medias1, yerr=variancias1, color='red');
plt.plot(steps, [acertos_nao_supervisionados] * len(steps), ':', color='green')
plt.ylabel('Indice de acertos');
plt.xlabel('Tamanho do conjunto de treino');
Explanation: We can see that the unsupervised training, because it uses the whole dataset for training/testing, shows no performance fluctuations. At the same time, we cannot say that this model generalizes to other points, since it was trained and tested on the same dataset. Non-generalization, in this case, is not a big problem, since the problem is restricted to the database we have. Here, we clustered the data in our set with the GMM model and then manually interpreted the results according to our prior knowledge.
In any case, we can show the figure again
End of explanation |
1,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian model selection and linear regression
This notebook uses Bayesian selection for linear regression with basis functions in order to (partially) answer question #2231975 in Math StackExchange. The necessary code can be found in bitbucket.
The idea is to use a fixed set of simple functions to interpolate a given (small) dataset. Non-parametric regression will yield almost perfect results but it seemed not to be an option for the OP, so this is one possibility.
We begin with the usual boilerplate importing the necessary modules. Note the manipulation of the imports path in order to access the code in the local repository.
Step1: We now load data and normalize it to have zero mean and variance 1. This is required to avoid numerical issues
Step2: Next we prepare some set of hypothesis spaces to be tested against each other. Because it's easy and already implemented in the repo, we take two polynomial and two trigonometric families of basis functions.
Step3: We now perform bayesian updates to our belief in each hypothesis space. Each data point is fed to the LinearRegression object which then performs
Step4: The winner among the hypotheses proposed is clearly the Trigonometric hypothesis ($H_3$) with $M=12$ basis functions
Step5: Note how the model comparison rejects the hypothesis Trig7 after seeing about half the dataset and leans in favor of Trig11, which becomes a better fit. This might come at a cost later, though, because Trig11 is a wildly oscillating polynomial beyond the interval considered, whereas Trig7 is a bit more tame. More data would be needed to decide and besides, you really don't want to extrapolate with your regression ;)
import sys
sys.path.append("../src/")
from Hypotheses import *
from ModelSelection import LinearRegression
from Plots import updateMAPFitPlot, updateProbabilitiesPlot
import numpy as np
from sklearn import preprocessing
import matplotlib.pyplot as pl
%matplotlib notebook
Explanation: Bayesian model selection and linear regression
This notebook uses Bayesian selection for linear regression with basis functions in order to (partially) answer question #2231975 in Math StackExchange. The necessary code can be found in bitbucket.
The idea is to use a fixed set of simple functions to interpolate a given (small) dataset. Non-parametric regression will yield almost perfect results but it seemed not to be an option for the OP, so this is one possibility.
We begin with the usual boilerplate importing the necessary modules. Note the manipulation of the imports path in order to access the code in the local repository.
End of explanation
data = np.loadtxt('data-2231875.txt', delimiter=',', skiprows=1)
data[:,1] = preprocessing.scale(data[:,1])
pl.title("The (normalized) dataset")
_ = pl.plot(data[:,0], data[:,1])
#pl.savefig('data.svg')
Explanation: We now load data and normalize it to have zero mean and variance 1. This is required to avoid numerical issues: for large values of the target values, some probabilities in the computations become zero because of the exponential function ($e^{-t}$ becomes almost zero for relatively small values of $t$).
End of explanation
var = data[:, 1].std() # Should be approx. 1 after scaling
sigma = 0.1 # Observation noise sigma
hc = HypothesisCollection()
hc.append(PolynomialHypothesis(M=5, variance=var, noiseVariance=sigma**2))
hc.append(PolynomialHypothesis(M=6, variance=var, noiseVariance=sigma**2))
hc.append(TrigonometricHypothesis(halfM=4, variance=var, noiseVariance=sigma**2))
hc.append(TrigonometricHypothesis(halfM=6, variance=var, noiseVariance=sigma**2))
lr = LinearRegression(hc, sigma)
Explanation: Next we prepare some set of hypothesis spaces to be tested against each other. Because it's easy and already implemented in the repo, we take two polynomial and two trigonometric families of basis functions.
End of explanation
%%time
ymin, ymax = min(data[:,1]), max(data[:,1])
# Looping is ugly, but it is what it is! :P
for x, y in data:
lr.update(x, y)
# MAP values for the weights w_j
wmap = [param.mean for param in lr.parameter]
fig, (ax1, ax2) = pl.subplots(2)
updateMAPFitPlot(ax1, lr.XHist, lr.hypotheses, wmap, 0.005)
ax1.plot(lr.XHist, lr.THist, 'k+', ms=4, alpha=0.5) # plot the data points
ax1.set_title("Data and MAP fits")
updateProbabilitiesPlot(ax2, lr)
ax2.set_title("Incremental model probability")
fig.subplots_adjust(hspace=0.5)
#pl.savefig('mapfits.svg')
_ = pl.show()
Explanation: We now perform bayesian updates to our belief in each hypothesis space. Each data point is fed to the LinearRegression object which then performs:
1. Estimation of the weights for each hypothesis.
2. Computation of the posterior probability of each hypothesis, given the data.
End of explanation
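The LinearRegression class lives in the author's repository, so its internals are not reproduced here; purely as an illustration of the update it describes (posterior model probability proportional to predictive likelihood times current probability), a generic sketch could look like this, with point_likelihoods standing in for the per-hypothesis predictive densities of each new observation.
# Generic sketch of the sequential model-probability update (not the repo's code)
import numpy as np

def update_model_probabilities(prob_hyp, point_likelihoods):
    # prob_hyp: current P(H_i | data so far); point_likelihoods: p(new point | H_i)
    unnormalized = prob_hyp * point_likelihoods
    return unnormalized / unnormalized.sum()

probs = np.array([0.25, 0.25, 0.25, 0.25])  # uniform prior over four hypotheses
probs = update_model_probabilities(probs, np.array([0.1, 0.3, 0.9, 0.6]))
print(probs.round(3))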
prob_hypotheses = np.array(lr.probHyp)
winner = np.argmax(prob_hypotheses[:,-1])
wmap[winner].round(2).flatten()
Explanation: The winner among the hypotheses proposed is clearly the Trigonometric hypothesis ($H_3$) with $M=12$ basis functions:
$$\phi_j (x) = \cos (\pi j x)\ \text{ for }\ j = 2 k,$$
$$\phi_j (x) = \sin (\pi j x)\ \text{ for }\ j = 2 k+1,$$
where $k \in \{0, \ldots, M/2\}$. Our best candidate is then
$$f(x) = \sum_{j=0}^{11} w_j \phi_j (x).$$
The specific values of the weights $w_j$ are taken from the a posteriori distribution computed (Gaussian, since we started with a Gaussian prior). Their MAP values are:
End of explanation
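As a quick sanity check, the winning model can be evaluated at any point by combining the MAP weights with the basis functions, using the same evaluate call that appears in the plotting code further down.
# Evaluate f(x) = sum_j w_j * phi_j(x) for the winning hypothesis at a single x
best_hypothesis = lr.hypotheses[winner]
best_weights = wmap[winner].flatten()
x0 = 0.5
print(np.dot(best_hypothesis.evaluate(x0).flatten(), best_weights))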
xx = np.linspace(-1,2,200)
for h, w, l in zip(lr.hypotheses[2:], wmap[2:], ['Trig7', 'Trig11']):
pl.plot(xx, [np.dot(h.evaluate(x).flatten(), w) for x in xx], label=l)
pl.title("Complexity in competing hypotheses")
_ = pl.legend()
#pl.savefig('complexity.svg')
Explanation: Note how the model comparison rejects the hypothesis Trig7 after seeing about half the dataset and leans in favor of Trig11, which becomes a better fit. This might come at a cost later, though, because Trig11 is a wildly oscillating polynomial beyond the interval considered, whereas Trig7 is a bit more tame. More data would be needed to decide and besides, you really don't want to extrapolate with your regression ;)
End of explanation |
1,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to defer evaluation of f-strings
It seems that one solution is to use lambdas, which are explored below.
What other solutions are there?
Imagine that one wants to format a string
selecting the format from several f-strings.
Unfortunately, the obvious straightforward way
of having the f-strings as values in a dictionary,
does not work as desired,
because the f-strings are evaluated when creating the dictionary.
This unhappy way follows.
Step1: Notice below that
the current values of year, month, and day
are ignored when evaluating date_formats['iso'].
Step2: A solution is to use lambdas in the dictionary, and call them later,
as shown below. | Python Code:
year, month, day = 'hello', -1, 0
date_formats = {
'iso': f'{year}-{month:02d}-{day:02d}',
'us': f'{month}/{day}/{year}',
'other': f'{day} {month} {year}',
}
Explanation: How to defer evaluation of f-strings
It seems that one solution is to use lambdas, which are explored below.
What other solutions are there?
Imagine that one wants to format a string
selecting the format from several f-strings.
Unfortunately, the obvious straightforward way
of having the f-strings as values in a dictionary,
does not work as desired,
because the f-strings are evaluated when creating the dictionary.
This unhappy way follows.
End of explanation
year, month, day = 2017, 3, 27
print(year, month, day)
print(date_formats['iso'])
Explanation: Notice below that
the current values of year, month, and day
are ignored when evaluating date_formats['iso'].
End of explanation
year, month, day = 'hello', -1, 0
# year, month, and day do not have to be defined when creating dictionary.
del year # Test that with one of them.
date_formats = {
'iso': (lambda: f'{year}-{month:02d}-{day:02d}'),
'us': (lambda: f'{month}/{day}/{year}'),
'other': (lambda: f'{day}.{month}.{year}'),
}
dates = (
(2017, 3, 27),
(2017, 4, 24),
(2017, 5, 22),
)
for format_name, format in date_formats.items():
print(f'{format_name}:')
for year, month, day in dates:
print(format())
Explanation: A solution is to use lambdas in the dictionary, and call them later,
as shown below.
End of explanation |
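For completeness, another way to defer evaluation is to store plain format strings and call .format() later; this gives up f-string syntax but avoids the lambdas. The sketch below is an addition, not part of the original notebook.
# Deferred formatting with str.format instead of lambdas
date_format_templates = {
    'iso': '{year}-{month:02d}-{day:02d}',
    'us': '{month}/{day}/{year}',
    'other': '{day}.{month}.{year}',
}
year, month, day = 2017, 3, 27
print(date_format_templates['iso'].format(year=year, month=month, day=day))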
1,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a model from scratch
We describe here how to generate a simple history file for computation with Noddy using the functionality of pynoddy. If possible, it is advisable to generate the history files with the Windows GUI for Noddy as this method provides, to date, a simpler and more complete interface to the entire functionality.
For completeness, pynoddy contains the functionality to generate simple models, for example to automate the model construction process, or to enable model construction for users who are not running Windows. Some simple examples are shown in the following.
Step1: Defining a stratigraphy
We start with the definition of a (base) stratigraphy for the model.
Step2: Add a fault event
As a next step, let's now add the faults to the model.
Step3: Complete Model Set-up
And here now, combining all the previous steps, the entire model set-up with base stratigraphy and two faults | Python Code:
from matplotlib import rc_params
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
sys.path.append(repo_path)
import pynoddy.history
%matplotlib inline
rcParams.update({'font.size': 20})
Explanation: Creating a model from scratch
We describe here how to generate a simple history file for computation with Noddy using the functionality of pynoddy. If possible, it is advisable to generate the history files with the Windows GUI for Noddy as this method provides, to date, a simpler and more complete interface to the entire functionality.
For completeness, pynoddy contains the functionality to generate simple models, for example to automate the model construction process, or to enable model construction for users who are not running Windows. Some simple examples are shown in the following.
End of explanation
# Combined: model generation and output vis to test:
history = "simple_model.his"
output_name = "simple_out"
import importlib
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.events)
# create pynoddy object
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3',
'layer 4', 'layer 5', 'layer 6',
'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500, 500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
nm.write_history(history)
# Compute the model
importlib.reload(pynoddy)
pynoddy.compute_model(history, output_name)
# Plot output
import pynoddy.output
importlib.reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_strati.eps")
Explanation: Defining a stratigraphy
We start with the definition of a (base) stratigraphy for the model.
End of explanation
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3', 'layer 4', 'layer 5', 'layer 6', 'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500, 500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (6000, 0, 5000),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
nm.events
nm.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
importlib.reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title = "",
savefig = False, fig_filename = "ex01_fault_E.eps")
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_1',
'pos' : (5500, 3500, 0),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
nm.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
importlib.reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1], colorbar = True)
nm1 = pynoddy.history.NoddyHistory(history)
nm1.get_extent()
Explanation: Add a fault event
As a next step, let's now add the faults to the model.
End of explanation
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3',
'layer 4', 'layer 5', 'layer 6',
'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500,
500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_W',
'pos' : (4000, 3500, 5000),
'dip_dir' : 90,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (6000, 3500, 5000),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
nm.write_history(history)
# Change cube size
nm1 = pynoddy.history.NoddyHistory(history)
nm1.change_cube_size(50)
nm1.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
importlib.reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = True, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
Explanation: Complete Model Set-up
And here now, combining all the previous steps, the entire model set-up with base stratigraphy and two faults:
End of explanation |
1,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see
minimum_norm_estimates.
Step1: Source estimation methods such as MNE require a noise estimate from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see minimum_norm_estimates.
Step2: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the
same as the end of the recording, see
Step3: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with
Step4: Note that this method also attenuates any activity in your
source estimates that resemble the baseline, if you like it or not.
Step5: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
Step6: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization_math), especially if only few samples are
available. Unfortunately it is not easy to tell the effective number of
samples, hence, to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in
Step7: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the
Step8: This plot displays both, the whitened evoked signals for each channels and
the whitened
Step9: This will plot the whitened evoked for the optimal estimator and display the | Python Code:
import os.path as op
import mne
from mne.datasets import sample
Explanation: Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see
minimum_norm_estimates.
End of explanation
data_path = sample.data_path()
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference('average', projection=True)
raw.info['bads'] += ['EEG 053'] # bads + 1 more
Explanation: Source estimation methods such as MNE require a noise estimate from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see minimum_norm_estimates.
End of explanation
raw_empty_room.info['bads'] = [
bb for bb in raw.info['bads'] if 'EEG' not in bb]
raw_empty_room.add_proj(
[pp.copy() for pp in raw.info['projs'] if 'EEG' not in pp['desc']])
noise_cov = mne.compute_raw_covariance(
raw_empty_room, tmin=0, tmax=None)
Explanation: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the
same as the end of the recording, see :func:mne.compute_raw_covariance).
Keep in mind that you want to match your empty room dataset to your
actual MEG data, processing-wise. Ensure that filters
are all the same and if you use ICA, apply it to your empty-room and subject
data equivalently. In this case we did not filter the data and
we don't use ICA. However, we do have bad channels and projections in
the MEG data, and, hence, we want to make sure they get stored in the
covariance object.
End of explanation
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
baseline=(-0.2, 0.0), decim=3, # we'll decimate for speed
verbose='error') # and ignore the warning about aliasing
Explanation: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with :func:mne.write_cov. Later you can read it back
using :func:mne.read_cov.
You can also use the pre-stimulus baseline to estimate the noise covariance.
First we have to construct the epochs. When computing the covariance, you
should use baseline correction when constructing the epochs. Otherwise the
covariance matrix will be inaccurate. In MNE this is done by default, but
just to be sure, we define it here manually.
End of explanation
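As mentioned above, the covariance object can be written to disk and read back; for example (the file name is arbitrary, but MNE expects covariance files to end in -cov.fif):
# Saving and re-loading the covariance object
mne.write_cov('ernoise-cov.fif', noise_cov)
noise_cov_from_disk = mne.read_cov('ernoise-cov.fif')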
noise_cov_baseline = mne.compute_covariance(epochs, tmax=0)
Explanation: Note that this method also attenuates any activity in your
source estimates that resemble the baseline, if you like it or not.
End of explanation
noise_cov.plot(raw_empty_room.info, proj=True)
noise_cov_baseline.plot(epochs.info, proj=True)
Explanation: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
End of explanation
noise_cov_reg = mne.compute_covariance(epochs, tmax=0., method='auto',
rank=None)
Explanation: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization_math), especially if only few samples are
available. Unfortunately it is not easy to tell the effective number of
samples, hence, to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in :footcite:EngemannGramfort2015. For this the 'auto' option
can be used. With this option cross-validation will be used to learn the
optimal regularization:
End of explanation
evoked = epochs.average()
evoked.plot_white(noise_cov_reg, time_unit='s')
Explanation: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the :term:global field
power (GFP) <GFP> is 1 (calculation of the GFP should take into account the
true degrees of freedom, e.g. ddof=3 with 2 active SSP vectors):
End of explanation
noise_covs = mne.compute_covariance(
epochs, tmax=0., method=('empirical', 'shrunk'), return_estimators=True,
rank=None)
evoked.plot_white(noise_covs, time_unit='s')
Explanation: This plot displays both, the whitened evoked signals for each channels and
the whitened :term:GFP. The numbers in the GFP panel represent the
estimated rank of the data, which amounts to the effective degrees of freedom
by which the squared sum across sensors is divided when computing the
whitened :term:GFP. The whitened :term:GFP also helps detecting spurious
late evoked components which can be the consequence of over- or
under-regularization.
Note that if data have been processed using signal space separation
(SSS) :footcite:TauluEtAl2005,
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https://goo.gl/ElWrxe>_.
For expert use cases or debugging the alternative estimators can also be
compared (see ex-evoked-whitening) and
ex-covariance-whitening-dspm):
End of explanation
evoked_meg = evoked.copy().pick('meg')
noise_cov['method'] = 'empty_room'
noise_cov_baseline['method'] = 'baseline'
evoked_meg.plot_white([noise_cov_baseline, noise_cov], time_unit='s')
Explanation: This will plot the whitened evoked for the optimal estimator and display the
:term:GFP for all estimators as separate lines in the related panel.
Finally, let's have a look at the difference between empty room and
event related covariance, hacking the "method" option so that their types
are shown in the legend of the plot.
End of explanation |
1,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro
The TV show Silicon Valley had an app called "See Food" that promised to identify food.
In this notebook, you will write code using and comparing pre-trained models to choose one as an engine for the See Food app.
You won't go too deep into Keras or TensorFlow details in this particular exercise. Don't worry. You'll go deeper into model development soon. For now, you'll make sure you know how to use pre-trained models.
Set-Up
We will run a few steps of environmental set-up before writing your own code. You don't need to understand the details of this set-up code. You can just run each code cell until you get to the exercises.
1) Create Image Paths
This workspace includes image files you will use to test your models. Run the cell below to store a few filepaths to these images in a variable img_paths.
Step1: 2) Run an Example Model
Here is the code you saw in the tutorial. It loads data, loads a pre-trained model, and makes predictions. Run this cell too.
Step2: 3) Visualize Predictions
Step3: 4) Set Up Code Checking
As a last step before writing your own code, run the following cell to enable feedback on your code.
Step4: Exercises
You will write a couple useful functions in the next exercises. Then you will put these functions together to compare the effectiveness of various pretrained models for your hot-dog detection program.
Exercise 1
We want to distinguish whether an image is a hot dog or not. But our models classify pictures into 1000 different categories. Write a function that takes the models predictions (in the same format as preds from the set-up code) and returns a list of True and False values.
Some tips
Step5: If you'd like to see a hint or the solution, uncomment the appropriate line below.
If you did not get a working solution, copy the solution code into your code cell above and run it. You will need this function for the next step.
Step6: Exercise 2
Step7: If you'd like a hint or the solution, uncomment the appropriate line below
Step8: Exercise 3
Step9: Uncomment the appropriate line below if you'd like a hint or the solution | Python Code:
import os
from os.path import join
hot_dog_image_dir = '../input/hot-dog-not-hot-dog/seefood/train/hot_dog'
hot_dog_paths = [join(hot_dog_image_dir,filename) for filename in
['1000288.jpg',
'127117.jpg']]
not_hot_dog_image_dir = '../input/hot-dog-not-hot-dog/seefood/train/not_hot_dog'
not_hot_dog_paths = [join(not_hot_dog_image_dir, filename) for filename in
['823536.jpg',
'99890.jpg']]
img_paths = hot_dog_paths + not_hot_dog_paths
Explanation: Intro
The TV show Silicon Valley had an app called "See Food" that promised to identify food.
In this notebook, you will write code using and comparing pre-trained models to choose one as an engine for the See Food app.
You won't go too deep into Keras or TensorFlow details in this particular exercise. Don't worry. You'll go deeper into model development soon. For now, you'll make sure you know how to use pre-trained models.
Set-Up
We will run a few steps of environmental set-up before writing your own code. You don't need to understand the details of this set-up code. You can just run each code cell until you get to the exercises.
1) Create Image Paths
This workspace includes image files you will use to test your models. Run the cell below to store a few filepaths to these images in a variable img_paths.
End of explanation
from IPython.display import Image, display
from learntools.deep_learning.decode_predictions import decode_predictions
import numpy as np
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import load_img, img_to_array
image_size = 224
def read_and_prep_images(img_paths, img_height=image_size, img_width=image_size):
imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]
img_array = np.array([img_to_array(img) for img in imgs])
output = preprocess_input(img_array)
return(output)
my_model = ResNet50(weights='../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels.h5')
test_data = read_and_prep_images(img_paths)
preds = my_model.predict(test_data)
most_likely_labels = decode_predictions(preds, top=3)
Explanation: 2) Run an Example Model
Here is the code you saw in the tutorial. It loads data, loads a pre-trained model, and makes predictions. Run this cell too.
End of explanation
for i, img_path in enumerate(img_paths):
display(Image(img_path))
print(most_likely_labels[i])
Explanation: 3) Visualize Predictions
End of explanation
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning.exercise_3 import *
print("Setup Complete")
Explanation: 4) Set Up Code Checking
As a last step before writing your own code, run the following cell to enable feedback on your code.
End of explanation
# Experiment with code outside the function, then move it into the function once you think it is right
# the following lines are given as a hint to get you started
decoded = decode_predictions(preds, top=1)
print(decoded)
def is_hot_dog(preds):
'''
inputs:
preds_array: array of predictions from pre-trained model
outputs:
is_hot_dog_list: a list indicating which predictions show hotdog as the most likely label
'''
pass
# Check your answer
q_1.check()
Explanation: Exercises
You will write a couple useful functions in the next exercises. Then you will put these functions together to compare the effectiveness of various pretrained models for your hot-dog detection program.
Exercise 1
We want to distinguish whether an image is a hot dog or not. But our models classify pictures into 1000 different categories. Write a function that takes the models predictions (in the same format as preds from the set-up code) and returns a list of True and False values.
Some tips:
- Work iteratively. Figure out one line at a time outside the function, and print that line's output to make sure it's right. Once you have all the code you need, move it into the function is_hot_dog. If you get an error, check that you have copied the right code and haven't left anything out.
- The raw data we loaded in img_paths had two images of hot dogs, followed by two images of other foods. So, if you run your function on preds, which represents the output of the model on these images, your function should return [True, True, False, False].
- You will want to use the decode_predictions function that was also used in the code provided above. We provided a line with this in the code cell to get you started.
End of explanation
# q_1.hint()
# q_1.solution()
#%%RM_IF(PROD)%%
def is_hot_dog(preds):
decoded = decode_predictions(preds, top=1)
# pull out predicted label, which is in d[0][1] due to how decode_predictions structures results
labels = [d[0][1] for d in decoded]
out = [l == 'hotdog' for l in labels]
return out
q_1.assert_check_passed()
Explanation: If you'd like to see a hint or the solution, uncomment the appropriate line below.
If you did not get a working solution, copy the solution code into your code cell above and run it. You will need this function for the next step.
End of explanation
def calc_accuracy(model, paths_to_hotdog_images, paths_to_other_images):
pass
# Code to call calc_accuracy. my_model, hot_dog_paths and not_hot_dog_paths were created in the setup code
my_model_accuracy = calc_accuracy(my_model, hot_dog_paths, not_hot_dog_paths)
print("Fraction correct in small test set: {}".format(my_model_accuracy))
# Check your answer
q_2.check()
Explanation: Exercise 2: Evaluate Model Accuracy
You have a model (called my_model). Is it good enough to build your app around?
Find out by writing a function that calculates a model's accuracy (fraction correct). You will try an alternative model in the next step. So we will put this logic in a reusable function that takes data and the model as arguments, and returns the accuracy.
Tips:
Use the is_hot_dog function from above to help write your function
To save you some scrolling, here is the code from above where we used a TensorFlow model to make predictions:
my_model = ResNet50(weights='../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels.h5')
test_data = read_and_prep_images(img_paths)
preds = my_model.predict(test_data)
End of explanation
#_COMMENT_IF(PROD)_
q_2.hint()
# q_2.solution()
#%%RM_IF(PROD)%%
def calc_accuracy(model, paths_to_hotdog_images, paths_to_other_images):
# We'll use the counts for denominator of accuracy calculation
num_hot_dog_images = len(paths_to_hotdog_images)
num_other_images = len(paths_to_other_images)
hotdog_image_data = read_and_prep_images(paths_to_hotdog_images)
preds_for_hotdogs = model.predict(hotdog_image_data)
# Summing list of binary variables gives a count of True values
num_correct_hotdog_preds = sum(is_hot_dog(preds_for_hotdogs))
other_image_data = read_and_prep_images(paths_to_other_images)
preds_other_images = model.predict(other_image_data)
# Number correct is the number judged not to be hot dogs
num_correct_other_preds = num_other_images - sum(is_hot_dog(preds_other_images))
total_correct = num_correct_hotdog_preds + num_correct_other_preds
total_preds = num_hot_dog_images + num_other_images
return total_correct / total_preds
q_2.assert_check_passed()
Explanation: If you'd like a hint or the solution, uncomment the appropriate line below
End of explanation
# import the model
from tensorflow.keras.applications import VGG16
vgg16_model = ____
# calculate accuracy on small dataset as a test
vgg16_accuracy = ____
print("Fraction correct in small dataset: {}".format(vgg16_accuracy))
# Check your answer
q_3.check()
Explanation: Exercise 3:
There are other models besides the ResNet model (which we have loaded). For example, an earlier winner of the ImageNet competition is the VGG16 model. Don't worry about the differences between these models yet. We'll come back to that later. For now, just focus on the mechanics of applying these models to a problem.
The code used to load a pretrained ResNet50 model was
my_model = ResNet50(weights='../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels.h5')
The weights for the model are stored at ../input/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5.
In the cell below, create a VGG16 model with the preloaded weights. Then use your calc_accuracy function to determine what fraction of images the VGG16 model correctly classifies. Is it better or worse than the pretrained ResNet model?
End of explanation
#_COMMENT_IF(PROD)_
q_3.hint()
#q_3.solution()
#%%RM_IF(PROD)%%
from tensorflow.keras.applications import VGG16
vgg16_model = VGG16(weights='../input/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5')
vgg16_accuracy = calc_accuracy(vgg16_model, hot_dog_paths, not_hot_dog_paths)
q_3.assert_check_passed()
Explanation: Uncomment the appropriate line below if you'd like a hint or the solution
End of explanation |
1,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
Compute the derivatives for the Lorentz system at yvec(t).
d1 = sigma * (yvec[1] - yvec[0])
d2 = yvec[0] * (rho - yvec[2]) - yvec[1]
d3 = yvec[0] * yvec[1] - beta * yvec[2]
return(d1, d2, d3)
#raise NotImplementedError()
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
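As a quick sanity check on the derivative function defined above (just a sketch), note that the origin is a fixed point of the Lorenz system, so all three derivatives should vanish there for any parameter values:
# The origin (0, 0, 0) is a fixed point: every derivative should evaluate to zero.
print(lorentz_derivs((0.0, 0.0, 0.0), 0.0, 10.0, 28.0, 8.0/3.0))  # expect (0.0, 0.0, 0.0)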
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
t = np.linspace(0.0, max_time, int(250 * max_time))
soln = odeint(lorentz_derivs, ic, t, (sigma, rho, beta))
return(soln, t)
#raise NotImplementedError()
res = solve_lorentz((.5, .5, .5))
soln = res[0]
soln[:,0]
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
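A quick shape check on the solver (a sketch; with the 250-points-per-time-unit convention used above, a 2-time-unit integration should yield 500 samples):
# Sketch: the solution should have one row per time point and three columns (x, y, z).
soln_check, t_check = solve_lorentz((1.0, 1.0, 1.0), max_time=2.0)
print(soln_check.shape, t_check.shape)  # expect (500, 3) and (500,)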
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
plt.plot?
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
np.random.seed(1)
colors = plt.cm.hot(np.linspace(0, 1, N))
for i in range(N):
xrand = np.random.uniform(-15.0, 15.0)
yrand = np.random.uniform(-15.0, 15.0)
zrand = np.random.uniform(-15.0, 15.0)
res, t = solve_lorentz((xrand, yrand, zrand), max_time, sigma, rho, beta)
plt.plot(res[:,0], res[:,2], color = colors[i])
#raise NotImplementedError()
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
interact(plot_lorentz, max_time = (1, 10), N = (1, 50), sigma = (0.0, 50.0), rho = (0.0, 50.0), beta = fixed(8.3))
#raise NotImplementedError()
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |