Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
4,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traverse a Square - Part 2 - Variables
In this notebook, we will introduce one of the most powerful ideas in programming: the variable.
Step1: Try changing the message in the previous code cell and re-running it. Does it behave as you expect?
You may remember from the Getting Started WIth Notebooks.ipynb notebook that if the last statement in a code cell returns a value, the value will be displayed as the output of the code cell when the cell contents have been executed.
If you place the name of a variable, or one or more comma separated variables, on the last line of a code cell, the value will be displayed.
What do you think the output of the following cell will be? Run the cell to find out.
Step2: You can assign whatever object you like to a variable.
For example, we can assign numbers to them and do sums with them
Step3: See if you can add the count of a new set of purchases to the number of items in your basket in the cell above. For example, what if you also bought 3 pears. And a bunch of bananas.
Making Use of Variables
Let's look back at our simple attempt at the square drawing program, in which we repeated blocks of instructions and set the numerical parameter values separately in each case.
Before we run the program, we need to load in the bits we need...
Step4: The original programme appears in the code cell below.
* how many changes would you have to make to it in order to change the side length?
* can you see how you might be able to simplify the act of changing the side length?
* what would you need to change if you wanted to make the turns faster? Or slower?
HINT
Step5: Using the above programme as a guide, see if you can write a programme in the code cell below that makes it easier to maintain and simplifies the act of changing the numerical parameter values.
Step6: How did you get on?
How easy is it to change the side length now? Or to find a new combination of the turn speed and turn angle to turn through ninety degrees (or thereabouts)? Try it and see...
Here's the programme I came up with | Python Code:
# Create the message variable and assign the value "Hello World" to it
message="Hello World"
# Use the variable in a print statement
# The print statement retrieves the value assigned to the variable and displays the value
print(message)
Explanation: Traverse a Square - Part 2 - Variables
In this notebook, we will introduce one of the most powerful ideas in programming: the variable.
A variable is a container that we can reference by name that is associated with a particular value. The value is assigned to the variable using the = operator, which we might read as is set to the value of.
For example, consider the following assignment statement:
python
message="Hello World"
Here, we create a named container message and put the value Hello World into it.
When we refer to the variable as part of another expression, we can then access the value it contains and use that in our expression, as the following example demonstrates:
End of explanation
message
Explanation: Try changing the message in the previous code cell and re-running it. Does it behave as you expect?
You may remember from the Getting Started WIth Notebooks.ipynb notebook that if the last statement in a code cell returns a value, the value will be displayed as the output of the code cell when the cell contents have been executed.
If you place the name of a variable, or one or more comma separated variables, on the last line of a code cell, the value will be displayed.
What do you think the output of the following cell will be? Run the cell to find out.
End of explanation
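As a quick illustration of that behaviour, here is a hypothetical extra cell (not part of the original notebook); putting two comma separated variables on the last line displays both values as a tuple:

```python
# Two variables, displayed together from the last line of the cell
greeting = "Hello"
name = "World"
# The cell output will be the tuple ('Hello', 'World')
greeting, name
```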
#Assign raw numbers to variables
apples=5
oranges=10
#Do a sum with the values represented by the variables and assign the result to a new variable
items_in_basket = apples + oranges
#Display the resulting value as the cell output
items_in_basket
Explanation: You can assign whatever object you like to a variable.
For example, we can assign numbers to them and do sums with them:
End of explanation
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
%run 'vrep_models/PioneerP3DX.ipynb'
Explanation: See if you can add the count of a new set of purchases to the number of items in your basket in the cell above. For example, what if you also bought 3 pears. And a bunch of bananas.
Making Use of Variables
Let's look back at our simple attempt at the square drawing program, in which we repeated blocks of instructions and set the numerical parameter values separately in each case.
Before we run the program, we need to load in the bits we need...
End of explanation
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
#side 1
robot.move_forward()
time.sleep(1)
#turn 1
robot.rotate_left(1.8)
time.sleep(0.45)
#side 2
robot.move_forward()
time.sleep(1)
#turn 2
robot.rotate_left(1.8)
time.sleep(0.45)
#side 3
robot.move_forward()
time.sleep(1)
#turn 3
robot.rotate_left(1.8)
time.sleep(0.45)
#side 4
robot.move_forward()
time.sleep(1)
Explanation: The original programme appears in the code cell below.
* how many changes would you have to make to it in order to change the side length?
* can you see how you might be able to simplify the act of changing the side length?
* what would you need to change if you wanted to make the turns faster? Or slower?
HINT: think variables...
End of explanation
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
#YOUR CODE HERE
Explanation: Using the above programme as a guide, see if you can write a programme in the code cell below that makes it easier to maintain and simplifies the act of changing the numerical parameter values.
End of explanation
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
side_length_time=1
turn_speed=1.8
turn_time=0.45
#side 1
robot.move_forward()
time.sleep(side_length_time)
#turn 1
robot.rotate_left(turn_speed)
time.sleep(turn_time)
#side 2
robot.move_forward()
time.sleep(side_length_time)
#turn 2
robot.rotate_left(turn_speed)
time.sleep(turn_time)
#side 3
robot.move_forward()
time.sleep(side_length_time)
#turn 3
robot.rotate_left(turn_speed)
time.sleep(turn_time)
#side 4
robot.move_forward()
time.sleep(side_length_time)
Explanation: How did you get on?
How easy is it to change the side length now? Or to find a new combination of the turn speed and turn angle to turn through ninety degrees (or thereabouts)? Try it and see...
Here's the programme I came up with: I used three variables, one for side length, one for turn time, and one for turn speed. Feel free to try running and modifying this programme too...
End of explanation |
4,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS229 Homework 1 Problem 1
In this exercise we use logistic regression to construct a decision boundary for a binary classification problem. In order to do so, we must first load the data.
Step1: Here we load the data sets. They are text files, so the numpy loadtxt function will suffice.
Step2: Next we pack a column of ones into the design matrix X so when we perform logistic regression to estimate the intercept parameter, we can pack it all into a matrix.
Step3: Here we pack the data into a DataFrame for plotting.
Step4: Now we perform regression. The logistic regression function uses the Newton-Raphson method to estimate the parameters for the decision boundary in the data set.
Step5: Exercise 1.a.
Here are the resulting parameter estimates from logistic regression
Step6: with the resulting costs per iteration of Newton-Raphson. The first term is the intercept term for the line, corresponding to the first column in the design matrix X being all ones.
Step7: So the logistic regression function appears to be converging. The cost functional is minimized on the last iteration.
Exercise 1.b.
For the final step, we plot the results. We use a color map to distinguish the classification of each datum. The color purple is used for -1, and the color yellow is used for +1.
Step8: Now we plot the results. First, create a polynomial p from the estimated parameters.
Step9: Then plot the results. | Python Code:
import numpy as np
import pandas as pd
import logistic_regression as lr
Explanation: CS229 Homework 1 Problem 1
In this exercise we use logistic regression to construct a decision boundary for a binary classification problem. In order to do so, we must first load the data.
End of explanation
X = np.loadtxt('logistic_x.txt')
y = np.loadtxt('logistic_y.txt')
Explanation: Here we load the data sets. They are text files, so the numpy loadtxt function will suffice.
End of explanation
ones = np.ones((99,1))
Xsplit = np.split(X, indices_or_sections=[1], axis=1)
# Pack the intercept coordinates into X so we can calculate the
# intercept for the logistic regression.
X = np.concatenate([ones, Xsplit[0], Xsplit[1]], axis=1)
Explanation: Next we pack a column of ones into the design matrix X so when we perform logistic regression to estimate the intercept parameter, we can pack it all into a matrix.
End of explanation
Xd = pd.DataFrame(X, columns=['x0', 'x1', 'x2'])
yd = pd.DataFrame(y, columns=['y'])
df = pd.concat((yd, Xd), axis=1)
Explanation: Here we pack the data into a DataFrame for plotting.
End of explanation
theta, cost = lr.logistic_regression(X, y, epsilon=lr.EPSILON, max_iters=lr.MAX_ITERS)
Explanation: Now we perform regression. The logistic regression function uses the Newton-Raphson method to estimate the parameters for the decision boundary in the data set.
End of explanation
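The logistic_regression helper is imported from a separate logistic_regression module that is not shown in this notebook. Purely as an illustration, a minimal Newton-Raphson implementation along these lines could look as follows; the function name and the EPSILON/MAX_ITERS constants mirror the call above, but everything else is an assumption rather than the actual module code:

```python
import numpy as np

EPSILON = 1e-6
MAX_ITERS = 20

def logistic_regression(X, y, epsilon=EPSILON, max_iters=MAX_ITERS):
    """Newton-Raphson fit of theta for labels y in {-1, +1} (illustrative sketch)."""
    n, d = X.shape
    theta = np.zeros(d)
    costs = []
    for _ in range(max_iters):
        z = y * (X @ theta)                            # margins y_i * theta^T x_i
        h = 1.0 / (1.0 + np.exp(-z))                   # sigmoid of the margins
        costs.append(-np.mean(np.log(h)))              # average logistic loss
        grad = -X.T @ (y * (1.0 - h)) / n              # gradient of the loss
        H = X.T @ (X * (h * (1.0 - h))[:, None]) / n   # Hessian of the loss
        step = np.linalg.solve(H, grad)                # Newton direction
        theta -= step
        if np.linalg.norm(step) < epsilon:             # stop when the update is tiny
            break
    return theta, costs
```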
print('theta = {}'.format(theta))
Explanation: Exercise 1.a.
Here are the resulting parameter estimates from logistic regression
End of explanation
print('cost = {}'.format(cost))
Explanation: with the resulting costs per iteration of Newton-Raphson. The first term is the intercept term for the line, corresponding to the first column in the design matrix X being all ones.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.colors as clr
colors = ['red', 'blue']
levels = [0, 1]
cmap, norm = clr.from_levels_and_colors(levels=levels, colors=colors, extend='max')
cs = np.where(df['y'] < 0, 0, 1)
cs
Explanation: So the logistic regression function appears to be converging. The cost functional is minimized on the last iteration.
Exercise 1.b.
For the final step, we plot the results. We use a color map to distinguish the classification of each datum. The color purple is used for -1, and the color yellow is used for +1.
End of explanation
p = np.poly1d([-theta[1]/theta[2], -theta[0]/theta[2]])
x = np.linspace(0, 8, 200)
p
Explanation: Now we plot the results. First, create a polynomial p from the estimated parameters.
End of explanation
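For completeness, the coefficients passed to np.poly1d come straight from the decision boundary condition: the boundary is the set of points where $\theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0$, so

$$x_2 = -\frac{\theta_1}{\theta_2}\,x_1 - \frac{\theta_0}{\theta_2},$$

which is exactly the first-degree polynomial p constructed above.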
plt.scatter(df['x1'], df['x2'], c=cs)
plt.plot(x, p(x))
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
Explanation: Then plot the results.
End of explanation |
4,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JLab ML Lunch 2 - Data Exploration
Second ML challenge hosted
On October 30th, a test dataset will be released, and predictions must be submitted within 24 hours
Let's take a look at the training data!
Step1: Training Data
This shows the state vector ($x,y,z, p_x, p_y, p_z$) for the origin and 24 detector stations
Jupyter-matplotlib widget used for handy visualizations (https://github.com/matplotlib/jupyter-matplotlib)
Step2: Now read in the example test data
Step3: One caveat on the test data
The last value of each row is actually the z-value of the next step to be predicted, not the x-position
... but this isn't the same spot for each row
Just add two commas before the last number of each row
Step4: This should be saved for later usage | Python Code:
%matplotlib widget
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import imageio
Explanation: JLab ML Lunch 2 - Data Exploration
Second ML challenge hosted
On October 30th, a test dataset will be released, and predictions must be submitted within 24 hours
Let's take a look at the training data!
End of explanation
X_train = pd.read_csv("MLchallenge2_training.csv")
# There are 150 columns. Let's just see a few
X_train[['x', 'y', 'z', 'px', 'py', 'pz',
'x1', 'y1', 'z1', 'px1', 'py1', 'pz1']].head()
def plot_quiver_track(df, track_id, elev=None,
                      azim=None, dist=None):
    # Extract the track row
    track = df.loc[track_id].values
    # Get all the values of each type of feature
    x = [track[(6*i)] for i in range(0, 25)]
    y = [track[1+(6*i)] for i in range(0, 25)]
    z = [track[2+(6*i)] for i in range(0, 25)]
    px = [track[3+(6*i)] for i in range(0, 25)]
    py = [track[4+(6*i)] for i in range(0, 25)]
    pz = [track[5+(6*i)] for i in range(0, 25)]
    # I ideally would like to link the magnitude
    # of the momentum to the color, but my results
    # were buggy...
    p_tot = np.sqrt(np.square(px) +
                    np.square(py) +
                    np.square(pz))
    # Create our 3D figure
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    ax.xaxis.set_pane_color((1,1,1,1))
    ax.yaxis.set_pane_color((1,1,1,1))
    ax.zaxis.set_pane_color((1,1,1,1))
    # Set the three 3D plot viewing attributes
    if elev is not None:
        ax.elev = elev
    if azim is not None:
        ax.azim = azim
    if dist is not None:
        ax.dist = dist
    # Create our quiver plot
    ax.quiver(z, x, y, pz, px, py, length=14)
    # Labels for clarity
    ax.set_title("Track {}".format(track_id))
    ax.set_xlabel("z", fontweight="bold")
    ax.set_ylabel("x", fontweight="bold")
    ax.set_zlabel("y", fontweight="bold")
    plt.tight_layout()
    return fig, ax
fig, ax = plot_quiver_track(X_train, 2)
fig.show()
gif_filename = "track-2-anim"
ax.elev = 50.
ax.azim = 90.
ax.dist = 9.
img_files = []
for n in range(0, 100):
    ax.elev = ax.elev-0.4
    ax.azim = ax.azim+1.5
    filename = f'images/{gif_filename}/img{str(n).zfill(3)}.png'
    img_files.append(filename)
    plt.savefig(filename, bbox_inches='tight')
images = []
for filename in img_files:
    images.append(imageio.imread(filename))
imageio.mimsave('images/track-2.gif', images)
Explanation: Training Data
This shows the state vector ($x,y,z, p_x, p_y, p_z$) for the origin and 24 detector stations
Jupyter-matplotlib widget used for handy visualizations (https://github.com/matplotlib/jupyter-matplotlib)
End of explanation
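A quick sanity check of that layout, as a hypothetical extra cell (not part of the original notebook): 25 locations (the origin plus 24 stations) times 6 state-vector components should account for all 150 columns.

```python
# 25 locations (origin + 24 stations) x 6 state-vector components = 150 columns
assert X_train.shape[1] == 6 * 25
X_train.shape
```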
X_test = pd.read_csv("test_in.csv", names=X_train.columns)
X_test[['x', 'y', 'z', 'x15', 'y15', 'z15', 'x23', 'y23', 'z23']].head()
import missingno as mno
ax = mno.matrix(X_test.head(100))
Explanation: Now read in the example test data
End of explanation
import re
from io import StringIO
with open('test_in.csv', 'r') as f:
    data_str = f.read()
data_str_io = StringIO(
    re.sub(r"([-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?\n)", r",,\1", data_str)
)
X_test = pd.read_csv(data_str_io, names=X_train.columns)
X_test.head()
Explanation: One caveat on the test data
The last value of each row is actually the z-value of the next step to be predicted, not the x-position
... but this isn't the same spot for each row
Just add two commas before the last number of each row
End of explanation
import re
from io import StringIO
def load_test_data(filename):
    with open(filename, 'r') as f:
        data_str = f.read()
    data_str_io = StringIO(
        re.sub(r"([-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?\n)", r",,\1", data_str)
    )
    X_test = pd.read_csv(data_str_io, names=X_train.columns)
    return X_test
Explanation: This should be saved for later usage
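A hypothetical later cell (assuming the helper has been saved or re-defined there) would then reuse it like this:

```python
# Re-load the example test set with the comma padding applied
X_test = load_test_data('test_in.csv')
X_test.head()
```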
End of explanation |
4,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of the collected data
Use of IPython for the analysis and display of the data collected during production. An expert controller is implemented. The data analyzed are from August 13th, 2015.
The experiment data
Step1: We plot both diameters and the puller speed on the same graph
Step2: Increasing the speed managed to lower the maximum value; however, the minimum value also decreased. For the next iteration, we will go back to the 1.5-3.4 speed range and add more rules with smaller speed increments, to avoid saturating the traction speed at both the high and the low end.
Comparison of Diametro X against Diametro Y to see the filament ratio
Step3: Data filtering
We assume the samples with $d_x >= 0.9$ or $d_y >= 0.9$ to be sensor errors, so we filter them out of the collected samples.
Step4: Plot of X/Y
Step5: We analyze the ratio data
Step6: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
# Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the csv file with the sample data
datos = pd.read_csv('ensayo2.CSV')
%pylab inline
# Store in a list the columns of the file we are going to work with
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
# Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Analysis of the collected data
Use of IPython for the analysis and display of the data collected during production. An expert controller is implemented. The data analyzed are from August 13th, 2015.
The experiment data:
* Start time: 12:06
* End time: 12:26
* Extruded filament: 314Ccm
* $T: 150ºC$
* Puller $V_{min}$: 1.5 mm/s
* Puller $V_{max}$: 5.3 mm/s
* The speed increments in the expert system rules are not all the same:
* In cases 3 and 5 an increment of +2 is kept.
* In cases 4 and 6 the increment is reduced to -1.
This experiment lasts 20 min because at a glance it is clear that it brings no improvement; in fact, it adds more instability to the system.
We choose to add more rules to the system and try to keep the traction speed from reaching the limits.
End of explanation
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: We plot both diameters and the puller speed on the same graph
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: Increasing the speed managed to lower the maximum value; however, the minimum value also decreased. For the next iteration, we will go back to the 1.5-3.4 speed range and add more rules with smaller speed increments, to avoid saturating the traction speed at both the high and the low end.
Comparison of Diametro X against Diametro Y to see the filament ratio
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Data filtering
We assume the samples with $d_x >= 0.9$ or $d_y >= 0.9$ to be sensor errors, so we filter them out of the collected samples.
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: Plot of X/Y
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: We analyze the ratio data
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
Explanation: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
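A quick way to get the actual count, as a hypothetical follow-up cell reusing the data_violations frame defined above:

```python
# Number of samples falling outside the quality limits
n_violations = len(data_violations)
n_violations
```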
End of explanation |
4,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
N2 - Eurocode 8, CEN (2005)
This simplified nonlinear procedure for the estimation of the seismic response of structures uses capacity curves and inelastic spectra. This method has been developed to be used in combination with code-based response spectra, but it is also possible to employ it for the assessment of structural response subject to ground motion records. It also has the distinct aspect of assuming an elastic-perfectly plastic force-displacement relationship in the construction of the bilinear curve. This method is part of the recommendations of Eurocode 8 (CEN, 2005) for the seismic design of new structures, and the capacity curves are usually simplified by an elasto-perfectly plastic relationship.
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are
Step4: Obtain the damage probability matrix
The parameter damping_ratio needs to be defined in the cell below in order to calculate the damage probability matrix.
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Plot vulnerability function
Step10: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above | Python Code:
import N2Method
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: N2 - Eurocode 8, CEN (2005)
This simplified nonlinear procedure for the estimation of the seismic response of structures uses capacity curves and inelastic spectra. This method has been developed to be used in combination with code-based response spectra, but it is also possible to employ it for the assessment of structural response subject to ground motion records. It also has the distinct aspect of assuming an elastic-perfectly plastic force-displacement relationship in the construction of the bilinear curve. This method is part of recommendations of the Eurocode 8 (CEN, 2005) for the seismic design of new structures, and the capacity curves are usually simplified by a elasto-perfectly plastic relationship.
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = "../../../../../../rmtk_data/GMRs"
minT, maxT = 0.1, 2.0
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../../rmtk_data/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.
End of explanation
damping_ratio = 0.05
PDM, Sds = N2Method.calculate_fragility(capacity_curves, gmrs, damage_model, damping_ratio)
Explanation: Obtain the damage probability matrix
The parameter damping_ratio needs to be defined in the cell below in order to calculate the damage probability matrix.
End of explanation
IMT = "Sa"
period = 0.3
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
2. period: This parameter defines the time period of the fundamental mode of vibration of the structure.
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
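For reference, the fitted curves follow the standard lognormal CDF form (stated here only as a reminder; the exact parameterisation used internally by the toolkit is not shown in this notebook):

$$P(ds \geq DS_i \mid IM = x) = \Phi\!\left(\frac{\ln x - \ln \theta_i}{\beta_i}\right),$$

where $\theta_i$ is the median intensity and $\beta_i$ the logarithmic standard deviation associated with damage state $DS_i$.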
minIML, maxIML = 0.01, 3.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
# utils.plot_fragility_stats(fragility_statistics,minIML,maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 3.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
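The combination described above can be written compactly as (notation introduced here only for illustration):

$$E[LR \mid im] = \sum_i P(ds_i \mid im)\, DR_i,$$

where $DR_i$ is the damage ratio that the consequence model assigns to damage state $ds_i$.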
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
4,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quality Controlling Saildrone T/S
Objective
Step1: First, learn about the data
Load the data
Step2: Let's learn about this dataset, starting from the attributes.
Step3: Great, it follows the CF and ACDD conventions, so we don't need to wander around, but we know what to expect and where to find the information that we will need. For instance, does it conform with some Simple Geometry? If so, which one?
Step4: OK, this is a trajectory, so we expect that each measurement will have a time and position.
What are the available variables? We are interested in the temperature and the salinity of the seawater.
Step5: It looks like we are interested in TEMP_CTD_MEAN and SAL_MEAN. Let's confirm that. We can learn a lot by inspecting the Attributes.
Step6: Yes, we can see in the attributes of both variables the standard_name and long_name. We found what we need. Let's simplify our dataset and extract only what we need - temperature and salinity - and call it "tsg".
Step7: Notice that there is a trajectory dimension. Since this is a single trajectory, CF does not require keeping this dimension, but it is good practice. If we wanted to merge this dataset with another trajectory - say, another Saildrone from another year - the two would merge seamlessly into a dataset with two trajectories.
To simplify, let's remove the trajectory dimension by choosing only the first (and only one) trajectory.
Step8: Now, if we look at the temperature, it will have only the dimension obs.
Step9: Actuall QC
So far, we have been learning about this dataset and subsampling.
If you were familiar with this dataset, you could have skipped all that and started here.
Now, let's QC this data, the easiest part (if using CoTeDe).
Step10: Great! You just finished to QC the temperature of the whole Saildrone Antarctic mission. It's probably not the best approach to use the gradient test only, but good enough for this example.
What are the flags available?
Step11: Yes, it seems right. We asked to inspect all variables that were the type
Step12: Let's improve this. Let's evaluate temperature and salinity at the same time, but now let's add another test, the rate of change.
Step13: Nice, you can choose which tests to apply on each variable, and that includes which parameters to use on each test.
You also can choose between defining a test for the type of measurement (sea_water_temperature) or the variable specifically (SAL_MEAN). That is convenient when you have a platform equipped with several sensors, like Saildrone.
Finally, let's check what we got! | Python Code:
import xarray as xr
from cotede.qc import ProfileQC
Explanation: Quality Controlling Saildrone T/S
Objective:
This notebook shows how to use CoTeDe to evaluate temperature and salinity measured along-track from a Saildrone.
The nature of this dataset is similar to a Thermosalinograph (TSG) on vessels of opportunity. As the vessel sails, it pumps water from near the surface, which is measured by a CTD. Thus, it is a time-series with a nearly constant depth, and each measurement is associated with time, latitude, and longitude.
Data:
For this tutorial, let's use the Saildrone Antarctic Circumnavigation mission (https://www.saildrone.com/antarctica). I don't want to bypass their data distribution, so I'll let you download it yourself. Please place it in the same directory (folder) as this notebook.
Let's use the 24hrs resolution just for demonstration purposes since this is the public version. We will probably get better results by quality controlling on the high-resolution measurements and only then, if convenient for our scientific questions, sub-sample for lower resolution.
The data is available at https://data.saildrone.com/data/sets/antarctica-circumnavigation-2019/access
Let's import xarray, which we'll use to load the data from the netCDF. We could use netCDF4 or scipy, but it is probably more intuitive with xarray.
Let's also import ProfileQC from CoTeDe. Yes, I know, Saildrone does not measure profiles but don't worry about the name of this class; it will work with the same principle. Maybe one day, I'll create another class to deal with the along-track type of measurements.
End of explanation
ds = xr.open_dataset('saildrone-antarctica.nc')
Explanation: First, learn about the data
Load the data
End of explanation
ds.attrs['Conventions']
Explanation: Let's learn about this dataset, starting from the attributes.
End of explanation
ds.attrs['featureType']
Explanation: Great, it follows the CF and ACDD conventions, so we don't need to wander around, but we know what to expect and where to find the information that we will need. For instance, does it conform with some Simple Geometry? If so, which one?
End of explanation
list(ds.keys())
Explanation: OK, this is a trajectory, so we expect that each measurement will have a time and position.
What are the available variables? We are interested in the temperature and the salinity of the seawater.
End of explanation
print(ds["SAL_MEAN"])
print("====")
print(ds["TEMP_CTD_MEAN"])
Explanation: It looks like we are interested in TEMP_CTD_MEAN and SAL_MEAN. Let's confirm that. We can learn a lot by inspecting the Attributes.
End of explanation
tsg = ds[['TEMP_CTD_MEAN', 'SAL_MEAN']]
tsg
Explanation: Yes, we can see in the attributes of both variables the standard_name and long_name. We found what we need. Let's simplify our dataset and extract only what we need - temperature and salinity - and call it "tsg".
End of explanation
tsg = tsg.isel(trajectory=0)
Explanation: Notice that there is a trajectory dimension. Since this is a single trajectory, CF does not require keeping this dimension, but it is good practice. If we wanted to merge this dataset with another trajectory - say, another Saildrone from another year - the two would merge seamlessly into a dataset with two trajectories.
To simplify, let's remove the trajectory dimension by choosing only the first (and only one) trajectory.
End of explanation
tsg['TEMP_CTD_MEAN']
tsg['SAL_MEAN'].attrs
tsg['TEMP_CTD_MEAN'][:10]
Explanation: Now, if we look at the temperature, it will have only the dimension obs.
End of explanation
pqc = ProfileQC(tsg, {'sea_water_temperature':{'gradient': {'threshold': 5}}})
Explanation: Actual QC
So far, we have been learning about this dataset and subsampling.
If you were familiar with this dataset, you could have skipped all that and started here.
Now, let's QC this data, the easiest part (if using CoTeDe).
End of explanation
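For context, the gradient test is essentially the classic GTSPP-style check: each value is compared with the average of its two neighbours, roughly

$$g_i = \left| V_i - \frac{V_{i+1} + V_{i-1}}{2} \right|,$$

and the measurement is flagged when $g_i$ exceeds the chosen threshold (5 here). The exact implementation details live inside CoTeDe, so treat this formula as a sketch rather than the definitive definition.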
pqc.flags.keys()
Explanation: Great! You have just finished QC'ing the temperature of the whole Saildrone Antarctic mission. It's probably not the best approach to use the gradient test only, but it is good enough for this example.
What are the flags available?
End of explanation
pqc.flags['TEMP_CTD_MEAN']['gradient']
Explanation: Yes, it seems right. We asked to inspect all variables that were the type: seawater temperature.
What was the result, i.e. what are the flags assigned?
End of explanation
cfg = {
'sea_water_temperature':{
'gradient': {'threshold': 5},
'rate_of_change': {'threshold': 5}},
'SAL_MEAN': {
'rate_of_change': {'threshold': 2}}
}
pqc = ProfileQC(tsg, cfg)
pqc.flags
Explanation: Let's improve this. Let's evaluate temperature and salinity at the same time, but now let's add another test, the rate of change.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(14,4))
idx = pqc.flags['TEMP_CTD_MEAN']['overall'] <= 2
plt.plot(pqc['time'][idx], pqc['TEMP_CTD_MEAN'][idx], '.')
plt.title('Temperature [$^\circ$C]')
plt.figure(figsize=(14,4))
idx = pqc.flags['SAL_MEAN']['overall'] <= 2
plt.plot(pqc['time'][idx], pqc['SAL_MEAN'][idx], '.')
plt.title('Salinity')
Explanation: Nice, you can choose which tests to apply on each variable, and that includes which parameters to use on each test.
You also can choose between defining a test for the type of measurement (sea_water_temperature) or the variable specifically (SAL_MEAN). That is convenient when you have a platform equipped with several sensors, like Saildrone.
Finally, let's check what we got!
End of explanation |
4,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hyperparameter optimization using pyGPGO
by José Jiménez (Oct 18, 2017)
In this tutorial, we will learn the basics of the Bayesian optimization (BO) framework through a step-by-step example in the context of optimizing the hyperparameters of a binary classifier. But first of all, where is Bayesian optimization useful?
There are a lot of case scenarios; one would typically use the BO framework in situations like
Step1: Before going any further, let's visualize it!
Step2: Let's say that we want to use a Support Vector Machine (SVM) with the radial basis function kernel classifier on this data, which has two usual parameters to optimize, $C$ and $\gamma$. We need to first define a target function that takes these two hyperparameters as input and spits out an error (e.g, using some form of cross validation). Define also a dictionary, specifying parameters and input spaces for each.
Step3: Now comes the fun part, where we specify our BO framework using pyGPGO. We are going to use a Gaussian Process (GP) model to approximate our true objective function, and a covariance function that measures similarity among training examples. An excellent introduction to Gaussian Process regression can be found in [@Rassmussen-Williams2004]. We are going to use the squared exponential kernel for this example, that takes the form
Step4: We specify now an acquisition function, that will determine the behaviour of the BO procedure when selecting a new point. For instance, it is very common to use the Expected Improvement (EI) acquisition, that will both take into account the probability of improvement of a point and its magnitude
Step5: We're almost done! Finally call the GPGO class and put everything together. We'll run the procedure for 20 epochs.
Step6: Finally retrieve your result! | Python Code:
import numpy as np
from sklearn.datasets import make_moons
np.random.seed(20)
X, y = make_moons(n_samples = 200, noise = 0.3) # Data and target
Explanation: Hyperparameter optimization using pyGPGO
by José Jiménez (Oct 18, 2017)
In this tutorial, we will learn the basics of the Bayesian optimization (BO) framework through a step-by-step example in the context of optimizing the hyperparameters of a binary classifier. But first of all, where is Bayesian optimization useful?
There are a lot of case scenarios; one would typically use the BO framework in situations like:
* The objective function has no closed-form
* No gradient information is available
* In presence of noise
The BO framework uses a surrogate model to approximate the objective function and chooses to optimize it instead according to a chosen criterion. For an in-depth introduction to the topic, we whole-heartedly recommend reading [@Snoek2012, @Jimenez2017].
Let's start by creating some synthetic data that we will use later for classification.
End of explanation
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cm_bright = ListedColormap(['#fc4349', '#6dbcdb'])
fig = plt.figure()
plt.scatter(X[:, 0], X[:, 1], c = y, cmap = cm_bright)
plt.show()
Explanation: Before going any further, let's visualize it!
End of explanation
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
def evaluateModel(C, gamma):
    clf = SVC(C=10**C, gamma=10**gamma)
    return np.average(cross_val_score(clf, X, y))
params = {'C': ('cont', (-4, 5)),
'gamma': ('cont', (-4, 5))
}
Explanation: Let's say that we want to use a Support Vector Machine (SVM) with the radial basis function kernel classifier on this data, which has two usual parameters to optimize, $C$ and $\gamma$. We need to first define a target function that takes these two hyperparameters as input and spits out an error (e.g, using some form of cross validation). Define also a dictionary, specifying parameters and input spaces for each.
End of explanation
from pyGPGO.surrogates.GaussianProcess import GaussianProcess
from pyGPGO.covfunc import squaredExponential
sexp = squaredExponential()
gp = GaussianProcess(sexp)
Explanation: Now comes the fun part, where we specify our BO framework using pyGPGO. We are going to use a Gaussian Process (GP) model to approximate our true objective function, and a covariance function that measures similarity among training examples. An excellent introduction to Gaussian Process regression can be found in [@Rassmussen-Williams2004]. We are going to use the squared exponential kernel for this example, that takes the form:
$$k(r) = \exp\left(-\dfrac{r^2}{2l^2} \right)$$,
where $r = |x - x'|$ is the distance between two examples $x$ and $x'$.
End of explanation
from pyGPGO.acquisition import Acquisition
acq = Acquisition(mode = 'ExpectedImprovement')
Explanation: We specify now an acquisition function, that will determine the behaviour of the BO procedure when selecting a new point. For instance, it is very common to use the Expected Improvement (EI) acquisition, that will both take into account the probability of improvement of a point and its magnitude:
End of explanation
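For reference, the usual (maximization) form of Expected Improvement is

$$EI(x) = \left(\mu(x) - f(x^+)\right)\Phi(z) + \sigma(x)\,\phi(z), \qquad z = \frac{\mu(x) - f(x^+)}{\sigma(x)},$$

where $\mu$ and $\sigma$ are the GP posterior mean and standard deviation and $f(x^+)$ is the best value observed so far; this is the textbook expression rather than a statement about pyGPGO's internal implementation.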
from pyGPGO.GPGO import GPGO
gpgo = GPGO(gp, acq, evaluateModel, params)
gpgo.run(max_iter = 20)
Explanation: We're almost done! Finally call the GPGO class and put everything together. We'll run the procedure for 20 epochs.
End of explanation
gpgo.getResult()
Explanation: Finally retrieve your result!
End of explanation |
4,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The SourceEstimate (stc) data structure
Step1: Load and inspect example data
This data set contains source estimation data from an audio visual task. It
has been mapped onto the inflated cortical surface representation obtained
from
FreeSurfer <sphx_glr_auto_tutorials_plot_background_freesurfer.py>
using the dSPM method. It highlights a noticeable peak in the auditory
cortices.
Let's see what it looks like.
Step2: SourceEstimate (stc)
A source estimate contains the time series of activations
at spatial locations defined by the source space.
In the context of FreeSurfer surfaces - which consist of 3D triangulations
- we could call each data point on the inflated brain
representation a vertex. If every vertex represents the spatial location
of a time series, the time series and spatial location can be written into a
matrix, where a value is assigned to each vertex (rows) at multiple time
points (columns). This value is the strength of our signal at a given point in
space and time. Exactly this matrix is stored in stc.data.
Let's have a look at the shape
Step3: We see that stc carries 7498 time series of 25 samples length. Those time
series belong to 7498 vertices, which in turn represent locations
on the cortical surface. So where do those vertex values come from?
FreeSurfer separates both hemispheres and creates a surface
representation for each of the left and right hemispheres. Indices to surface locations
are stored in stc.vertices. This is a list with two arrays of integers,
that index a particular vertex of the FreeSurfer mesh. A value of 42 would
hence map to the x,y,z coordinates of the mesh with index 42.
See next section on how to get access to the positions in a mne.SourceSpaces object.
Step4: Since we did not change the time representation, only the selected subset of
vertices and hence only the row size of the matrix changed. We can check if
the rows of stc.lh_data and stc.rh_data sum up to the value we had
before.
Step5: Indeed and as the mindful reader already suspected, the same can be said
about vertices. stc.lh_vertno thereby maps to the left and
stc.rh_vertno to the right inflated surface representation of
FreeSurfer.
Relationship to SourceSpaces (src)
As mentioned above, src carries the mapping from stc to the surface.
Step6: The first value thereby indicates which vertex and the second which time
point index from within stc.lh_vertno or stc.lh_data is used. We can
use the respective information to get the index of the surface vertex
resembling the peak and its value.
Step7: Let's visualize this as well, using the same surfer_kwargs as in the
beginning. | Python Code:
import os
from mne import read_source_estimate
from mne.datasets import sample
print(__doc__)
# Paths to example data
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
Explanation: The :class:SourceEstimate <mne.SourceEstimate> data structure
Source estimates, commonly referred to as STC (Source Time Courses),
are obtained from source localization methods.
Source localization methods solve the so-called 'inverse problem'.
MNE provides different methods for solving it:
dSPM, sLORETA, LCMV, MxNE etc.
Source localization consists in projecting the EEG/MEG sensor data into
a 3-dimensional 'source space' positioned in the individual subject's brain
anatomy. Hence the data is transformed such that the recorded time series at
each sensor location maps to time series at each spatial location of the
'source space' where our source estimates are defined.
An STC object contains the amplitudes of the sources over time.
It only stores the amplitudes of activations but
not the locations of the sources. To get access to the locations
you need to have the :class:source space <mne.SourceSpaces>
(often abbreviated src) used to compute the
:class:forward operator <mne.Forward> (often abbreviated fwd).
See tut_forward for more details on forward modeling, and
sphx_glr_auto_tutorials_plot_mne_dspm_source_localization.py
for an example of source localization with dSPM, sLORETA or eLORETA.
Source estimates come in different forms:
- :class:`mne.SourceEstimate`: For cortically constrained source spaces.
- :class:`mne.VolSourceEstimate`: For volumetric source spaces
- :class:`mne.VectorSourceEstimate`: For cortically constrained source
spaces with vector-valued source activations (strength and orientation)
- :class:`mne.MixedSourceEstimate`: For source spaces formed of a
combination of cortically constrained and volumetric sources.
Note: :class:`(Vector) <mne.VectorSourceEstimate>` :class:`SourceEstimate <mne.SourceEstimate>` objects are surface representations mostly used together with `FreeSurfer <sphx_glr_auto_tutorials_plot_background_freesurfer.py>` surface representations.
Let's get ourselves an idea of what a :class:mne.SourceEstimate really
is. We first set up the environment and load some data:
End of explanation
stc = read_source_estimate(fname_stc, subject='sample')
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# Plot surface
brain = stc.plot(**surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'SourceEstimate', 'title', font_size=16)
Explanation: Load and inspect example data
This data set contains source estimation data from an audio visual task. It
has been mapped onto the inflated cortical surface representation obtained
from
FreeSurfer <sphx_glr_auto_tutorials_plot_background_freesurfer.py>
using the dSPM method. It highlights a noticeable peak in the auditory
cortices.
Let's see what it looks like.
End of explanation
shape = stc.data.shape
print('The data has %s vertex locations with %s sample points each.' % shape)
Explanation: SourceEstimate (stc)
A source estimate contains the time series of activations
at spatial locations defined by the source space.
In the context of FreeSurfer surfaces - which consist of 3D triangulations
- we could call each data point on the inflated brain
representation a vertex. If every vertex represents the spatial location
of a time series, the time series and spatial location can be written into a
matrix, where a value is assigned to each vertex (rows) at multiple time
points (columns). This value is the strength of our signal at a given point in
space and time. Exactly this matrix is stored in stc.data.
Let's have a look at the shape
End of explanation
shape_lh = stc.lh_data.shape
print('The left hemisphere has %s vertex locations with %s sample points each.'
% shape_lh)
Explanation: We see that stc carries 7498 time series of 25 samples length. Those time
series belong to 7498 vertices, which in turn represent locations
on the cortical surface. So where do those vertex values come from?
FreeSurfer separates both hemispheres and creates surfaces
representation for left and right hemisphere. Indices to surface locations
are stored in stc.vertices. This is a list with two arrays of integers,
that index a particular vertex of the FreeSurfer mesh. A value of 42 would
hence map to the x,y,z coordinates of the mesh with index 42.
See next section on how to get access to the positions in a
:class:mne.SourceSpaces object.
Since both hemispheres are always represented separately, both attributes
introduced above, can also be obtained by selecting the respective
hemisphere. This is done by adding the correct prefix (lh or rh).
End of explanation
is_equal = stc.lh_data.shape[0] + stc.rh_data.shape[0] == stc.data.shape[0]
print('The number of vertices in stc.lh_data and stc.rh_data do ' +
('not ' if not is_equal else '') +
'sum up to the number of rows in stc.data')
Explanation: Since we did not change the time representation, only the selected subset of
vertices and hence only the row size of the matrix changed. We can check if
the rows of stc.lh_data and stc.rh_data sum up to the value we had
before.
End of explanation
peak_vertex, peak_time = stc.get_peak(hemi='lh', vert_as_index=True,
time_as_index=True)
Explanation: Indeed and as the mindful reader already suspected, the same can be said
about vertices. stc.lh_vertno thereby maps to the left and
stc.rh_vertno to the right inflated surface representation of
FreeSurfer.
Relationship to SourceSpaces (src)
As mentioned above, :class:src <mne.SourceSpaces> carries the mapping from
stc to the surface. The surface is built up from a
triangulated mesh <https://en.wikipedia.org/wiki/Surface_triangulation>_
for each hemisphere. Each triangle building up a face consists of 3 vertices.
Since src is a list of two source spaces (left and right hemisphere), we can
access the respective data by selecting the source space first. Faces
building up the left hemisphere can be accessed via src[0]['tris'], where
the index $0$ stands for the left and $1$ for the right
hemisphere.
The values in src[0]['tris'] refer to row indices in src[0]['rr'].
Here we find the actual coordinates of the surface mesh. Hence every index
value for vertices will select a coordinate from here. Furthermore
src[0]['vertno'] stores the same data as stc.lh_vertno,
except when working with sparse solvers such as
:func:mne.inverse_sparse.mixed_norm, as then only a fraction of
vertices actually have non-zero activations.
In other words stc.lh_vertno equals src[0]['vertno'], whereas
stc.rh_vertno equals src[1]['vertno']. Thus the Nth time series in
stc.lh_data corresponds to the Nth value in stc.lh_vertno and
src[0]['vertno'] respectively, which in turn map the time series to a
specific location on the surface, represented as the set of cartesian
coordinates stc.lh_vertno[N] in src[0]['rr'].
Let's obtain the peak amplitude of the data as vertex and time point index
End of explanation
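The source space itself is never loaded in this notebook, but the mapping described above can be made concrete with a short hypothetical snippet; the file path below is an assumption based on the usual layout of the MNE sample dataset:

```python
from mne import read_source_spaces

# Assumed location of the oct-6 source space shipped with the sample dataset
fname_src = os.path.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')
src = read_source_spaces(fname_src)

# Cartesian coordinates (in m) of the left-hemisphere vertices used by stc
lh_coordinates = src[0]['rr'][stc.lh_vertno]
print(lh_coordinates[:3])
```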
peak_vertex_surf = stc.lh_vertno[peak_vertex]
peak_value = stc.lh_data[peak_vertex, peak_time]
Explanation: The first value thereby indicates which vertex and the second which time
point index from within stc.lh_vertno or stc.lh_data is used. We can
use the respective information to get the index of the surface vertex
resembling the peak and its value.
End of explanation
brain = stc.plot(**surfer_kwargs)
# We add the new peak coordinate (as vertex index) as an annotation dot
brain.add_foci(peak_vertex_surf, coords_as_verts=True, hemi='lh', color='blue')
# We add a title as well, stating the amplitude at this time and location
brain.add_text(0.1, 0.9, 'Peak coordinate', 'title', font_size=14)
Explanation: Let's visualize this as well, using the same surfer_kwargs as in the
beginning.
End of explanation |
4,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p>
<img src="http
Step1: defs
Step2: testing
Step3: $\mathfrak{p}={1^{j+1}0^{j}}$
Step4: $\mathfrak{p}={0^{j+1}1^{j}}$
Step5: $\mathfrak{p}={1^{j}0^{j}}$
Step6: $\mathfrak{p}={(10)}^{j}1$
Step7: $\mathfrak{p}={(01)}^{j}0$ | Python Code:
from sympy import *
from IPython.display import Markdown, Latex
from oeis import oeis_search
init_printing()
%run ~/Developer/working-copies/programming-contests/competitive-programming/python-libs/oeis.py
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Donatella Merlini<br>
Massimo Nocentini<br>
<small>
<br>October 27, 2016: fix fetching error, refactoring
<br>October 20, 2016: some Riordan patterns
</small>
</div>
</p>
<br>
<div align="center">
<b>Abstract</b><br>
A notebook to support an ongoing work <i>Algebraic generating functions for languages
avoiding Riordan patterns</i>.
</div>
End of explanation
j = symbols('j', positive=True)
t = symbols('t')
j_range = range(1, 10)
def make_expander(gf, t, terms_in_expansion = 15):
def worker(j_index):
term = Subs(gf, j, j_index)
return Eq(j, j_index), Eq(term, term.doit().series(t, n=terms_in_expansion))
return worker
def coeffs(res, t, limit):
return [res.rhs.coeff(t, n) for n in range(limit)]
def match_table(results, gf_col_width="12cm"):
rows = []
for j_eq, res in results:
j, j_index = j_eq.lhs, j_eq.rhs
terms_in_expansion = res.rhs.getn()
searchable = oeis_search(seq=coeffs(res, t, limit=terms_in_expansion),
only_possible_matchings=True, progress_indicator=None)
row_src = searchable(term_src=latex(res.lhs))
rows.append(row_src)
header_row = r'<tr><th style="width:{width};" >gf</th><th>matches</th></tr>'.format(width=gf_col_width)
return Markdown('<table style="width:100%">\n {header}\n {rows}\n </table>'.format(
header=header_row, rows='\n'.join(rows)))
def coeffs_table(results):
def coeffs_table_rows():
rows = []
for j_eq, res in results:
j, j_index = j_eq.lhs, j_eq.rhs
terms_in_expansion = res.rhs.getn()
coefficients = coeffs(res, t, limit=terms_in_expansion)
rows.append(latex(j_eq) + ' & ' + latex(coefficients))
return rows
return Latex(r'\begin{{array}}{{r|l}} {rows} \end{{array}}'.format(rows=r'\\'.join(coeffs_table_rows())))
Explanation: defs
End of explanation
def fib(t):
return t/(1-t-t**2)
f = fib(t)
f
s = f.series(t, n=10)
s
s.getn()
results = list(map(make_expander(fib(t), t), [1]))
results
coeffs_table(results)
match_table(results)
Explanation: testing
End of explanation
def S(t):
radix_term = sqrt(1-4*t+4*t**(j+1))
return 2/(radix_term * (1 + radix_term))
S(t)
results = list(map(make_expander(S(t), t), j_range))
# results
coeffs_table(results)
match_table(results)
Explanation: $\mathfrak{p}={1^{j+1}0^{j}}$
End of explanation
def S(t):
radix_term = sqrt(1-4*t+4*t**(j+1))
return 2*(1-t**j)/(radix_term * (1 + radix_term))
S(t)
results = list(map(make_expander(S(t), t), j_range))
coeffs_table(results)
match_table(results)
Explanation: $\mathfrak{p}={0^{j+1}1^{j}}$
End of explanation
def S(t):
radix_term = sqrt(1-4*t+2*t**j+t**(2*j))
return 2/(radix_term * (1 -t**j + radix_term))
S(t)
results = list(map(make_expander(S(t), t), j_range))
coeffs_table(results)
match_table(results)
Explanation: $\mathfrak{p}={1^{j}0^{j}}$
End of explanation
def S(t):
radix_term = sqrt(1-4*t+2*t**(j+1)+4*t**(j+2)-3*t**(2*j+2))
return 2*(1-t**j)/(1-4*t**j+3*t**(j+1)+radix_term)
S(t)
results = list(map(make_expander(S(t), t), j_range))
coeffs_table(results)
match_table(results)
Explanation: $\mathfrak{p}={(10)}^{j}1$
End of explanation
def S(t):
radix_term = sqrt(1-4*t+2*t**(j+1)+4*t**(j+2)-3*t**(2*j+2))
return 2*(1-t**j-t**(j+1)+t**(2*j+1))/(radix_term * (1 -2*t**j +t**(j+1) + radix_term))
S(t)
results = list(map(make_expander(S(t), t), j_range))
coeffs_table(results)
match_table(results, gf_col_width="18cm")
Explanation: $\mathfrak{p}={(01)}^{j}0$
End of explanation |
4,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cliff Walking Problem solved with TD(0) Algorithms
Step1: The OpenAI Gym toolkit includes the below environment for the "Cliff-Walking" problem
Step2: Load the Cliff-Walking environment
Step3: This environment has to do about gridworld shown below, where the traveller initial position (x) and the target to achieve (reach T) has been flagged appropriately. In addition in a one of the edge of this gridwordld example there is a "Cliff" denoted with C. Reward is $-1$ on all transitions except those into the cliff region. Steppping into this region incurs a reward of $-100$ and sends the agent instantly back to the start.
Once the environment is initialized you get the situation below. This is an episodic (undiscounted) task with start at traveller's starting point, and it is completed either when the goal is achieved, that is the traveller manage to reach the target location, T, or she may happen to step into the cliff. In this case the environment is reseted in each initial state.
Step4: Possible traveller's actions are of course her movements in this grid
Step5: 2. RL-Algorithms based on Temporal Difference TD(0)
2a. Load the "Temporal Difference" Python class
Load the Python class PlotUtils() which provides various plotting utilities and start a new instance.
Step6: Load the Temporal Difference Python class, TemporalDifferenceUtils()
Step7: Instantiate the class for the environment of interest
Step8: 2b. SARSA
Step9: 2c. Q-Learning
Step10: 2d. On-Policy Expected SARSA
Step11: 3. Double Learning
Step12: 3b. Double Q-Learning
Step13: 3c. Double Expected SARSA
Step14: 4. Comparison of SARSA, Q-Learning and Expected SARSA best models
After an initial transient, Q-learning learns values for the optimal policy, the ones that travels right along the edge of the cliff. Unfortunately, this results the traveller fall-off the cliff ocasionally, because of the $\varepsilon$-greedy selection. SARSA, on the other hand, takes the action selection into account and learns the longer but safer path, through the upper part of the grid. Although, Q-learning actually learns the values of the optimal policy, its online performance is worse than that of SARSA, which learns the roundabout policy.
Step15: Note
Step16: 5. Learned Policies
SARSA on-Policy TD(0) Control
Step17: Double SARSA on-Policy TD(0) Control
Step18: Q-Learning off-Policy TD(0) Control
Step19: Double Q-Learning off-Policy TD(0) Control
Step20: Expected SARSA on-Policy TD(0) Control
Step21: Double Expected SARSA on-Policy TD(0) Control | Python Code:
import gym
import random
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from collections import OrderedDict
Explanation: Cliff Walking Problem solved with TD(0) Algorithms: Implementation & Comparisons
1. Load Libraries & Define Environment
End of explanation
print('OpenAI Gym environments for Cliff Walking Problem:')
[k for k in gym.envs.registry.env_specs.keys() if 'Cliff' in k]
Explanation: The OpenAI Gym toolkit includes the below environment for the "Cliff-Walking" problem:
End of explanation
env = gym.make('CliffWalking-v0')
Explanation: Load the Cliff-Walking environment:
End of explanation
env.render()
Explanation: This environment describes the gridworld shown below, where the traveller's initial position (x) and the target to reach (T) are flagged appropriately. In addition, along one edge of this gridworld there is a "Cliff", denoted by C. Reward is $-1$ on all transitions except those into the cliff region. Stepping into this region incurs a reward of $-100$ and sends the agent instantly back to the start.
Once the environment is initialized you get the situation below. This is an episodic (undiscounted) task that starts at the traveller's starting point and is completed either when the goal is achieved, that is, the traveller manages to reach the target location T, or when she happens to step into the cliff, in which case the environment is reset to its initial state.
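As a quick sanity check, the state and action spaces can be inspected directly (a small sketch, assuming the standard toy-text implementation of this environment):
print(env.observation_space)   # Discrete(48): one state per cell of the 4 x 12 grid
print(env.action_space)        # Discrete(4): UP, RIGHT, DOWN, LEFT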
End of explanation
help(env)
Explanation: The traveller's possible actions are, of course, her movements in this grid:
- "UP": denoted by 0
- "RIGHT": denoted by 1
- "DOWN": denoted by 2
- "LEFT": denoted by 3
To get the new state at each step of an episode, you pass the current action into the .step() method of the environment. The environment will then return a tuple (observation, reward, done, info), whose elements are explained below:
- observation (object): agent's observation of the current environment
- reward (float): amount of reward returned after previous action
- done (bool): whether the episode has ended, in which case further step() calls will return undefined results
- info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
Note: At the termination of each episode, the programmer is responsible for resetting the environment.
For further details concerning the CliffWalking-v0 environment of the OpenAI Gym toolkit, consult the docstring below.
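As a minimal illustration of the interaction loop (a sketch only; the action encoding and the returned 4-tuple are exactly as listed above):
state = env.reset()                              # back to the start cell
next_state, reward, done, info = env.step(1)     # action 1 = "RIGHT"
print(next_state, reward, done, info)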
End of explanation
%run ../PlotUtils.py
plotutls = PlotUtils()
Explanation: 2. RL-Algorithms based on Temporal Difference TD(0)
2a. Load the "Temporal Difference" Python class
Load the Python class PlotUtils() which provides various plotting utilities and start a new instance.
End of explanation
%run ../TD0_Utils.py
Explanation: Load the Temporal Difference Python class, TemporalDifferenceUtils():
End of explanation
TD0 = TemporalDifferenceUtils(env)
Explanation: Instantiate the class for the environment of interest:
End of explanation
# Define Number of Episodes
n_episodes = 3e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.array([0.1, 0.13, 0.16])
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array([0.4])
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
epsilons = epsilons.flatten()
step_sizes = step_sizes.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.1,
'step_size': 0.5, 'discount': 1}}
for n, trial in enumerate(list(zip(epsilons, step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_SARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
# Apply SARSA [on-policy TD(0) Control]
q_values, tot_rewards = TD0.sarsa_on_policy_control(env,
n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_SARSA[trial] = tot_rewards
q_values_per_trial_SARSA[trial] = q_values
title = 'Efficiency of the RL Method\n[SARSA on-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_SARSA, title=title, lower_reward_ratio=-100)
RL_trials
Explanation: 2b. SARSA: On-Policy TD(0) Control
End of explanation
# Define Number of Episodes
# n_episodes = 2e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.array([0.1, 0.13, 0.16])
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array([0.4])
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
epsilons = epsilons.flatten()
step_sizes = step_sizes.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.1,
'step_size': 0.5, 'discount': 1}}
for n, trial in enumerate(list(zip(epsilons, step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_QL = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_QL = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
# Apply Q-Learning [off-policy TD(0) Control]
q_values, tot_rewards = TD0.q_learning_off_policy(env,
n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_QL[trial] = tot_rewards
q_values_per_trial_QL[trial] = q_values
title = 'Efficiency of the RL Method\n[Q-Learning off-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_QL, title=title)
RL_trials
Explanation: 2c. Q-Learning: Off-Policy TD(0) Control
End of explanation
# Define Number of Episodes
# n_episodes = 2e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.array([0.1, 0.13, 0.16])
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array([0.4])
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
epsilons = epsilons.flatten()
step_sizes = step_sizes.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.1,
'step_size': 0.5, 'discount': 1}}
for n, trial in enumerate(list(zip(epsilons, step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_ExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
# Apply Expected SARSA [on-policy TD(0) Control]
q_values, tot_rewards = TD0.expected_sarsa_on_policy(env,
n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_ExpSARSA[trial] = tot_rewards
q_values_per_trial_ExpSARSA[trial] = q_values
title = 'Efficiency of the RL Method\n[Expected SARSA on-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_ExpSARSA, title=title)
RL_trials
Explanation: 2d. On-Policy Expected SARSA
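For reference, Expected SARSA replaces the sampled next-action value used by SARSA with its expectation under the current $\varepsilon$-greedy policy, i.e. the update target becomes $R + \gamma \sum_a \pi(a \mid S')\, Q(S', a)$.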
End of explanation
# Define Number of Episodes
# n_episodes = 2e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.array([0.1, 0.13, 0.16])
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array([0.4])
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
epsilons = epsilons.flatten()
step_sizes = step_sizes.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.1,
'step_size': 0.5, 'discount': 1}}
for n, trial in enumerate(list(zip(epsilons, step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_DSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_DSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
# Apply Double SARSA [on-policy TD(0) Control]
q_values_1, tot_rewards = TD0.sarsa_on_policy_control(env,
n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon,
double_learning=True)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_DSARSA[trial] = tot_rewards
q_values_per_trial_DSARSA[trial] = q_values_1
title = 'Efficiency of the RL Method\n[Double SARSA: on-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_DSARSA, title=title)
RL_trials
Explanation: 3. Double Learning: a method to mitigate maximization bias
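For reference, double learning keeps two independent estimates and, with probability $0.5$, updates one of them using the other's evaluation of its own greedy action, e.g. $Q_1(S,A) \leftarrow Q_1(S,A) + \alpha \big[R + \gamma\, Q_2\big(S', \arg\max_a Q_1(S',a)\big) - Q_1(S,A)\big]$; decoupling action selection from action evaluation in this way removes the maximization bias of a single estimate.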
3a. Double SARSA: On-Policy TD(0) Control
End of explanation
# Define Number of Episodes
# n_episodes = 2e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.array([0.1, 0.13, 0.16])
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array([0.4])
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
epsilons = epsilons.flatten()
step_sizes = step_sizes.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.1,
'step_size': 0.5, 'discount': 1}}
for n, trial in enumerate(list(zip(epsilons, step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_DQL = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_DQL = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
# Apply Double Q-Learning [off-policy TD(0) Control]
q_values, tot_rewards = TD0.q_learning_off_policy(env,
n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon,
double_learning=True)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_DQL[trial] = tot_rewards
q_values_per_trial_DQL[trial] = q_values
title = 'Efficiency of the RL Method\n[Double Q-Learning: off-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_DQL, title=title, lower_reward_ratio=-100)
RL_trials
Explanation: 3b. Double Q-Learning: Off-Policy TD(0) Control
End of explanation
# Define Number of Episodes
# n_episodes = 2e+3
# e-greedy parameters to investigate
print('Determine the epsilon parameters for the epsilon-greedy policy...\n')
epsilons = np.array([0.1, 0.13, 0.16])
print('epsilons: {}'.format(epsilons), '\n')
# various step-sizes (alpha) to try
print('Determine the step-sizes parameters (alphas) for the TD(0)...\n')
step_sizes = np.array([0.4])
print('step_sizes: {}'.format(step_sizes), '\n')
# Fixed discount
discount_fixed = 1
# Create a mesh-grid of trials
print('Create a dictionary of the RL-models of interest...\n')
epsilons, step_sizes = np.meshgrid(epsilons, step_sizes)
epsilons = epsilons.flatten()
step_sizes = step_sizes.flatten()
# Create a dictionary of the RL-trials of interest
RL_trials = {"baseline":
{'epsilon': 0.1,
'step_size': 0.5, 'discount': 1}}
for n, trial in enumerate(list(zip(epsilons, step_sizes))):
key = 'trial_' + str(n+1)
RL_trials[key] = {'epsilon': trial[0],
'step_size': trial[1], 'discount': discount_fixed}
print('Number of RL-models to try: {}\n'.format(len(RL_trials)))
print('Let all RL-models to be trained for {0:,} episodes...\n'.format(int(n_episodes)))
rewards_per_trial_DExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
q_values_per_trial_DExpSARSA = OrderedDict((label, np.array([])) for label, _ in RL_trials.items())
for trial, params_dict in RL_trials.items():
# Read out parameters from "params_dict"
epsilon = params_dict['epsilon']
step_size = params_dict['step_size']
discount = params_dict['discount']
# Apply Double Expected SARSA [on-policy TD(0) Control]
q_values, tot_rewards = TD0.expected_sarsa_on_policy(env,
n_episodes=n_episodes,
step_size=step_size, discount=discount, epsilon=epsilon,
double_learning=True)
# Update "rewards_per_trial" and "q_values_per_trial" OrderedDicts
rewards_per_trial_DExpSARSA[trial] = tot_rewards
q_values_per_trial_DExpSARSA[trial] = q_values
title = 'Efficiency of the RL Method\n[Double Expected SARSA: on-policy TD(0) Control]'
plotutls.plot_learning_curve(rewards_per_trial_DExpSARSA, title=title, lower_reward_ratio=-50)
RL_trials
Explanation: 3c. Double Expected SARSA: On-Policy TD(0) Control
End of explanation
winning_trial = 'baseline'
rewards_per_trial_best_models = OrderedDict([('Model_SARSA', np.array([])),
('Model_DSARSA', np.array([])),
('Model_QL', np.array([])),
('Model_DQL', np.array([])),
('Model_ExpSARSA', np.array([])),
('Model_DExpSARSA', np.array([]))])
rewards_per_trial_best_models['Model_SARSA'] = rewards_per_trial_SARSA[winning_trial]
rewards_per_trial_best_models['Model_DSARSA'] = rewards_per_trial_DSARSA[winning_trial]
rewards_per_trial_best_models['Model_QL'] = rewards_per_trial_QL[winning_trial]
rewards_per_trial_best_models['Model_DQL'] = rewards_per_trial_DQL[winning_trial]
rewards_per_trial_best_models['Model_ExpSARSA'] = rewards_per_trial_ExpSARSA[winning_trial]
rewards_per_trial_best_models['Model_DExpSARSA'] = rewards_per_trial_DExpSARSA[winning_trial]
title = 'Efficiency of the RL Method\n[SARSA vs Q-Learning and Expected SARSA Winning Models]'
plotutls.plot_learning_curve(rewards_per_trial_best_models, title=title, lower_reward_ratio=-100)
title = 'Efficiency of the RL Method\n[SARSA vs Q-Learning and Expected SARSA Winning Models]'
plotutls.plot_learning_curve(rewards_per_trial_best_models, title=title, lower_reward_ratio=-35)
Explanation: 4. Comparison of SARSA, Q-Learning and Expected SARSA best models
After an initial transient, Q-learning learns values for the optimal policy, the one that travels right along the edge of the cliff. Unfortunately, this results in the traveller occasionally falling off the cliff because of the $\varepsilon$-greedy action selection. SARSA, on the other hand, takes the action selection into account and learns the longer but safer path through the upper part of the grid. Although Q-learning actually learns the values of the optimal policy, its online performance is worse than that of SARSA, which learns the roundabout policy.
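For reference, the two methods differ only in their backup target: SARSA bootstraps from the action actually taken, $R + \gamma Q(S',A')$, whereas Q-learning bootstraps from the greedy value, $R + \gamma \max_a Q(S',a)$, which is why Q-learning estimates the values of the optimal (cliff-edge) path while SARSA's estimates account for the exploratory steps.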
End of explanation
# print optimal policy
def print_optimal_policy(q_values, grid_height=4, grid_width=12):
# Define a helper dictionary of actions
actions_dict = {}
actions = ['UP', 'RIGHT', 'DOWN', 'LEFT']
for k, v in zip(actions, range(0, len(actions))):
actions_dict[k] = v
# Define the position of target dstination
GOAL = [3, 11]
# Reshape the "q_values" table to follow grid-world dimensionality
q_values = q_values.reshape((grid_height, grid_width, len(actions)))
optimal_policy = []
for i in range(0, grid_height):
optimal_policy.append([])
for j in range(0, grid_width):
if [i, j] == GOAL:
optimal_policy[-1].append('G')
continue
bestAction = np.argmax(q_values[i, j, :])
if bestAction == actions_dict['UP']:
optimal_policy[-1].append('\U00002191')
elif bestAction == actions_dict['RIGHT']:
optimal_policy[-1].append('\U00002192')
elif bestAction == actions_dict['DOWN']:
optimal_policy[-1].append('\U00002193')
elif bestAction == actions_dict['LEFT']:
optimal_policy[-1].append('\U00002190')
for row in optimal_policy:
print(*row)
Explanation: Note: In "CliffWalking-v0" environment the traveler can choose one of the below actions as she navigates through the grid:
- "UP": denoted by 0
- "RIGHT": denoted by 1
- "DOWN": denoted by 2
- "LEFT": denoted by 3
End of explanation
winning_trial = 'baseline'
print_optimal_policy(q_values_per_trial_SARSA[winning_trial], grid_height=4, grid_width=12)
Explanation: 5. Learned Policies
SARSA on-Policy TD(0) Control:
Winning trial:
End of explanation
print_optimal_policy(q_values_per_trial_DSARSA[winning_trial], grid_height=4, grid_width=12)
Explanation: Double SARSA on-Policy TD(0) Control:
End of explanation
print_optimal_policy(q_values_per_trial_QL[winning_trial], grid_height=4, grid_width=12)
Explanation: Q-Learning off-Policy TD(0) Control:
End of explanation
print_optimal_policy(q_values_per_trial_DQL[winning_trial], grid_height=4, grid_width=12)
Explanation: Double Q-Learning off-Policy TD(0) Control:
End of explanation
print_optimal_policy(q_values_per_trial_ExpSARSA[winning_trial], grid_height=4, grid_width=12)
Explanation: Expected SARSA on-Policy TD(0) Control:
End of explanation
print_optimal_policy(q_values_per_trial_DExpSARSA[winning_trial], grid_height=4, grid_width=12)
Explanation: Double Expected SARSA on-Policy TD(0) Control:
End of explanation |
4,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Dear professor Denoyer...
Warning
This is an early version of our entry for the Kaggle challenge
It's still very messy and we send it because we forgot that we had to submit our progress step by step...
To summarize our goal, we plan to use a RNN to take advantage of the sequential data
Step1: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Reduced to
10.000
5.000
Step2: Get rid of Nan value for now
Step3: Forums indicate that a higher than 1m rainfall is probably an error. Which is quite understandable. We filter that out
Step5: Memento (mauri)
Step6: Submit
Step9: RNN | Python Code:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
# Sk cheats
from sklearn.cross_validation import cross_val_score # cross val
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import Imputer # get rid of nan
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
Dear professor Denoyer...
Warning
This is an early version of our entry for the Kaggle challenge
It's still very messy and we send it because we forgot that we had to submit our progress step by step...
To summarize our goal, we plan to use an RNN to take advantage of the sequential data
End of explanation
filename = "data/reduced_train_1000000.csv"
train = pd.read_csv(filename)
train = train.set_index('Id')
train = train.dropna()
train.head()
train["Expected"].describe()
Explanation: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
Reduced to 10.000 (train) and 5.000 (test) lines for this notebook
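The reduced files are simply smaller samples of the original Kaggle files; a minimal sketch of how such files could be produced (the original train.csv/test.csv paths are assumptions here):
pd.read_csv('train.csv', nrows=10000).to_csv('data/reduced_train_10000.csv', index=False)
pd.read_csv('test.csv', nrows=5000).to_csv('data/reduced_test_5000.csv', index=False)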
End of explanation
train_clean = train[[not i for i in np.isnan(train["Ref_5x5_10th"])]]
Explanation: Get rid of NaN values for now
End of explanation
train = train[train['Expected'] < 1000]
train_clean.head()
train_clean.describe()
train_clean['Expected'].describe()
Explanation: Forums indicate that a rainfall higher than 1m is probably an error, which is quite understandable. We filter that out
End of explanation
RandomForestRegressor()
etreg = ExtraTreesRegressor(n_estimators=100, max_depth=None, min_samples_split=1, random_state=0)
columns = train_clean.columns
columns = ["minutes_past","radardist_km","Ref","Ref_5x5_10th", "Ref_5x5_50th"]
columns = [u'Id', u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th', u'Expected']
columns = [u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th']
labels = train_clean["Expected"].values
features = train_clean[list(columns)].values
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(features)
features_trans = imp.transform(features)
ftrain = features_trans[:3000]
ltrain = labels[:3000]
ftest = features_trans[3000:]
ltest = labels[3000:]
%%time
etreg.fit(ftrain,ltrain)
def scorer(estimator, X, y):
return (estimator.predict(X[0])-y)**2
%%time
et_score = cross_val_score(etreg, features_trans, labels, cv=5)
print("Features: %s\nScore: %s\tMean: %.03f"%(columns, et_score,et_score.mean()))
r = random.randrange(len(ltrain))
print(r)
print(etreg.predict(ftrain[r]))
print(ltrain[r])
r = random.randrange(len(ltest))
print(r)
print(etreg.predict(ftest[r]))
print(ltest[r])
err = (etreg.predict(ftest)-ltest)**2
err.sum()/len(err)
Explanation: Memento (mauri)
End of explanation
filename = "data/reduced_test_5000.csv"
test = pd.read_csv(filename)
columns = [u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th']
features = test[list(columns)].values
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(features)
features_trans = imp.transform(features)
fall = test[test.columns].values
fall[20]
features_trans[0]
i = 1
pred = 0
while fall[i][0] == 1:
#print(fall[i])
pred+=etreg.predict(features_trans[i])[0]
#print(etreg.predict(features_trans[i])[0])
i+=1
print(i)
fall[-1][0]
%%time
res=[]
i=0
while i<len(fall) and i < 10000:
pred = 0
lenn = 0
curr=fall[i][0]
while i<len(fall) and fall[i][0] == curr:
#print(fall[i])
pred+=etreg.predict(features_trans[i])[0]
#print(etreg.predict(features_trans[i])[0])
i+=1
lenn += 1
res.append((curr,pred/lenn))
#i+=1
#print(i)
len(res)
res[:10]
def myfunc(hour):
#rowid = hour['Id'].iloc[0]
# sort hour by minutes_past
hour = hour.sort('minutes_past', ascending=True)
#est = (hour['Id'],random.random())
est = random.random()
return est
def marshall_palmer(ref, minutes_past):
#print("Estimating rainfall from {0} observations".format(len(minutes_past)))
# how long is each observation valid?
valid_time = np.zeros_like(minutes_past)
valid_time[0] = minutes_past.iloc[0]
for n in xrange(1, len(minutes_past)):
valid_time[n] = minutes_past.iloc[n] - minutes_past.iloc[n-1]
valid_time[-1] = valid_time[-1] + 60 - np.sum(valid_time)
valid_time = valid_time / 60.0
# sum up rainrate * validtime
sum = 0
for dbz, hours in zip(ref, valid_time):
# See: https://en.wikipedia.org/wiki/DBZ_(meteorology)
if np.isfinite(dbz):
mmperhr = pow(pow(10, dbz/10)/200, 0.625)
sum = sum + mmperhr * hours
return sum
def simplesum(ref,hour):
hour.sum()
# each unique Id is an hour of data at some gauge
def myfunc(hour):
#rowid = hour['Id'].iloc[0]
# sort hour by minutes_past
hour = hour.sort('minutes_past', ascending=True)
est = marshall_palmer(hour['Ref'], hour['minutes_past'])
return est
estimates = train.groupby(train.index).apply(myfunc)
estimates.head(20)
train["Expected"].head(20)
res=[]
i=0
while i<len(fall):
    pred = 0
    curr=fall[i][0]
    while i<len(fall) and fall[i][0] == curr:
        #print(fall[i])
        pred+=etreg.predict(features_trans[i])[0]
        #print(etreg.predict(features_trans[i])[0])
        i+=1
    res.append((curr,pred))
print(i)
etreg.predict(features_trans[0])
def marshall_palmer(data):
res=[]
for n in data:
res.append(etreg.predict(n)[0])
return np.array(res).mean()
def simplesum(ref,hour):
hour.sum()
def myfunc(hour):
hour = hour.sort('minutes_past', ascending=True)
est = marshall_palmer(hour[train.columns])
return est
estimates = train_clean.groupby(train_clean.index).apply(myfunc)
estimates.head(20)
Explanation: Submit
End of explanation
import pandas as pd
from random import random
flow = (list(range(1,10,1)) + list(range(10,1,-1)))*1000
pdata = pd.DataFrame({"a":flow, "b":flow})
pdata.b = pdata.b.shift(9)
data = pdata.iloc[10:] * random() # some noise
#columns = [u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
# u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
# u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
# u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
# u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
# u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
# u'Kdp_5x5_50th', u'Kdp_5x5_90th']
columns = [u'radardist_km', u'Ref', u'Ref_5x5_10th']
nb_features = len(columns)
data = train[list(columns)]
data.head(10)
data.iloc[0].as_matrix()
train.head(5)
train.loc[11]
train.loc[11][:1]["Expected"].as_matrix
#train.index.unique()
def _load_data(data, n_prev = 100):
    """data should be pd.DataFrame()"""
docX, docY = [], []
for i in range(len(data)-n_prev):
docX.append(data.iloc[i:i+n_prev].as_matrix())
docY.append(data.iloc[i+n_prev].as_matrix())
alsX = np.array(docX)
alsY = np.array(docY)
return alsX, alsY
def train_test_split(df, test_size=0.1):
ntrn = round(len(df) * (1 - test_size))
X_train, y_train = _load_data(df.iloc[0:ntrn])
X_test, y_test = _load_data(df.iloc[ntrn:])
return (X_train, y_train), (X_test, y_test)
(X_train, y_train), (X_test, y_test) = train_test_split(data)
np.shape(X_train)
t = np.array([2,1])
t.shape = (1,2)
t.tolist()[0]
np.shape(t)
X_train[:2,:2]
XX[:2,:2]
XX[:2][:2]
np.shape(XX)
for i in XX:
print(np.shape(i))
np.shape(XX[0])
z = np.zeros([297,9,23])
np.shape(z)
np.shape(np.reshape(XX,(297,1)))
tl = train.loc[2][:1]["Expected"]
tl.as_blocks()
tl.as_matrix()
data.iloc[2:4].as_matrix()
train.loc[2].as_matrix()
m = data.loc[10].as_matrix()
pad = np.pad(m, ((0, max_padding -len(m) ),(0,0)), 'constant')
pad
train.index.unique()
max_padding = 20
%%time
docX, docY = [], []
for i in train.index.unique():
if isinstance(train.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
pad = np.pad(m, ((0, max_padding -len(m) ),(0,0)), 'constant')
docX.append(pad)
docY.append(float(train.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
pad = np.pad(m, ((0, max_padding -len(m) ),(0,0)), 'constant')
docX.append(pad)
docY.append(float(train.loc[i][:1]["Expected"]))
#docY.append(train.loc[i][:1]["Expected"].as_matrix)
XX = np.array(docX)
yy = np.array(docY)
np.shape(XX)
def _load_data(data):
    """data should be pd.DataFrame()"""
docX, docY = [], []
for i in data.index.unique():
#np.pad(tmp, ((0, max_padding -len(tmp) ),(0,0)), 'constant')
m = data.loc[i].as_matrix()
pad = np.pad(m, ((0, max_padding -len(m) ),(0,0)), 'constant')
docX.append(pad)
if isinstance(train.loc[i],pd.core.series.Series):
docY.append(float(train.loc[i]["Expected"]))
else:
docY.append(float(train.loc[i][:1]["Expected"]))
alsX = np.array(docX)
alsY = np.array(docY)
return alsX, alsY
def train_test_split(df, test_size=0.1):
ntrn = round(len(df) * (1 - test_size))
X_train, y_train = _load_data(df.iloc[0:ntrn])
X_test, y_test = _load_data(df.iloc[ntrn:])
return (X_train, y_train), (X_test, y_test)
(X_train, y_train), (X_test, y_test) = train_test_split(train)
len(X_train[0])
train.head()
X_train[0][:10]
yt = []
for i in y_train:
yt.append([i[0]])
yt[0]
X_train.shape
len(fea[0])
len(X_train[0][0])
f = np.array(fea)
f.shape
#(X_train, y_train), (X_test, y_test) = train_test_split(data) # retrieve data
# and now train the model
# batch_size should be appropriate to your memory size
# number of epochs should be higher for real world problems
model.fit(X_train, yt, batch_size=450, nb_epoch=2, validation_split=0.05)
predicted = model.predict(X_test)
rmse = np.sqrt(((predicted - y_test) ** 2).mean(axis=0))
# and maybe plot it
pd.DataFrame(predicted[:100]).plot()
pd.DataFrame(y_test[:100]).plot()
filename = "data/reduced_train_10000.csv"
train = pd.read_csv(filename)
train = train.dropna()
train = train.set_index('Id')
train.head(10)
columns = [u'Id', u'minutes_past', u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th']
labels = train["Expected"].values
features = train[list(columns)].values
np.shape(features)
#max_padding = np.array([len(i) for i in fea]).max()
max_padding = 14
fea=[]
lab=[]
init=features[0][0]
tmp=[]
for idx,i in enumerate(features):
if i[0]==init:
tmp.append(i[1:])
else:
fea.append(np.pad(tmp, ((0, max_padding -len(tmp) ),(0,0)), 'constant').tolist())
lab.append(labels[idx])
tmp=[]
init=i[0]
tmp.append(i[1:])
fea.append(np.array(tmp))
lab.append(labels[idx])
f = np.array(fea)
y = np.array(lab)
type(X_train[0][0])
type(f[0][0])
fea[0]
t = np.array([[1,2,3],
[3,4,4]])
np.pad(t, ((0, max_padding -len(t) ),(0,0)), 'constant')
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
%%time
input_dim = max_padding
out_dim = 1
hidden_dim = 200
model = Sequential()
#Embedding(input_dim, hidden_dim, mask_zero=True)
#model.add(LSTM(hidden_dim, hidden_dim, return_sequences=False))
model.add(LSTM(nb_features, hidden_dim, return_sequences=False))
model.add(Dense(hidden_dim, out_dim))
model.add(Activation("linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
model.fit(XX, yy, batch_size=50, nb_epoch=20, validation_split=0.05)
test = random.randint(0,len(XX))
print(model.predict(XX[test:test+1])[0][0])
print(yy[test])
Explanation: RNN
End of explanation |
4,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
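# e.g. (hypothetical values, for illustration only): DOC.set_author("Jane Doe", "[email protected]")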
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is radiative forcing from aerosol-cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
4,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Yahoo API Example
This notebook is an example of using yahoo api to get fantasy sports data.
Step1: Prerequisite
First we need to create a Yahoo APP at https
Step2: Step 1
Step3: Step 2
Step4: Step 3
Step5: Example to get user Info
Step6: Example to query nba teams of logged user.
Step7: Example to get nba leagues of logged user
Step8: Example to get league metadata
Step9: Get all teams of a league
Step10: Example to get team stats of week 2
Step11: Example to get team stats of whole season
Step12: Example to get game stat categories | Python Code:
from rauth import OAuth2Service
import webbrowser
import json
Explanation: Yahoo API Example
This notebook is an example of using yahoo api to get fantasy sports data.
End of explanation
clientId= "dj0yJmk9M3gzSWJZYzFmTWZtJmQ9WVdrOU9YcGxTMHB4TXpnbWNHbzlNQS0tJnM9Y29uc3VtZXJzZWNyZXQmeD1kZg--"
clinetSecrect="dbd101e179b3d129668965de65d05c02df42333d"
Explanation: Prerequisite
First we need to create a Yahoo APP at https://developer.yahoo.com/apps/, and select Fantasy Sports - Read for API Permissions. Then we can get the Client ID (Consumer Key) and Client Secret (Consumer Secret)
End of explanation
oauth = OAuth2Service(client_id = clientId,
client_secret = clinetSecrect,
name = "yahoo",
access_token_url = "https://api.login.yahoo.com/oauth2/get_token",
authorize_url = "https://api.login.yahoo.com/oauth2/request_auth",
base_url = "http://fantasysports.yahooapis.com/fantasy/v2/")
Explanation: Step 1: Create an OAuth object
End of explanation
params = {
'response_type': 'code',
'redirect_uri': 'oob'
}
authorize_url = oauth.get_authorize_url(**params)
webbrowser.open(authorize_url)
code = input('Enter code: ')
Explanation: Step 2: Generate authorize url, and then get the verify code
For this script, the redirect_uri is set to 'oob', and a page is opened in the browser to obtain the verification code.
For a web app server, we can instead set the redirect URI to the callback domain registered during Yahoo APP creation.
End of explanation
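A hypothetical sketch of the same step for a web app flow (the callback URL below is a made-up placeholder and must match the callback registered with the Yahoo APP; everything else reuses the objects defined above):
web_params = {
    'response_type': 'code',
    'redirect_uri': 'https://example.com/yahoo/callback'  # hypothetical registered callback
}
web_authorize_url = oauth.get_authorize_url(**web_params)
# In that flow Yahoo redirects the user back to the callback URL, and the
# authorization code arrives as a query parameter instead of being typed in.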
data = {
'code': code,
'grant_type': 'authorization_code',
'redirect_uri': 'oob'
}
oauth_session = oauth.get_auth_session(data=data,
decoder= lambda payload : json.loads(payload.decode('utf-8')))
Explanation: Step 3: Get session with the code
End of explanation
user_url='https://fantasysports.yahooapis.com/fantasy/v2/users;use_login=1'
resp = oauth_session.get(user_url, params={'format': 'json'})
resp.json()
user_guid=resp.json()['fantasy_content']['users']['0']['user'][0]['guid']
user_guid
Explanation: Example to get user Info
End of explanation
team_url = 'https://fantasysports.yahooapis.com/fantasy/v2/users;use_login=1/games;game_keys=nba/teams'
resp = oauth_session.get(team_url, params={'format': 'json'})
teams = resp.json()['fantasy_content']['users']['0']['user'][1]['games']['0']['game'][1]['teams']
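# Note on the chained lookups above and below: the Yahoo Fantasy JSON nests
# collections under numeric-string keys ('0', '1', ...) alongside positional
# lists, which is why the expressions mix [1], ['0'] and [0].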
teams
team_count = int(teams['count'])
team_count
for idx in range(0,team_count):
team = teams[str(idx)]['team'][0][19]['managers']
print(team, '\n')
Explanation: Example to query nba teams of logged user.
End of explanation
league_url = 'https://fantasysports.yahooapis.com/fantasy/v2/users;use_login=1/games;game_keys=nba/leagues'
resp = oauth_session.get(league_url, params={'format': 'json'})
leagues = resp.json()['fantasy_content']['users']['0']['user'][1]['games']['0']['game'][1]['leagues']
leagues
league_count = int(leagues['count'])
league_count
for idx in range(0,league_count):
league = leagues[str(idx)]['league'][0]
print(league, '\n')
Explanation: Example to get nba leagues of logged user
End of explanation
settings_url = 'https://fantasysports.yahooapis.com/fantasy/v2/game/nba/leagues;league_keys=375.l.1039/settings'
resp = oauth_session.get(settings_url, params={'format': 'json'})
settings = resp.json()['fantasy_content']['game'][1]['leagues']['0']['league'][1]['settings'][0]
settings
stat_categories = settings['stat_categories']['stats']
for category in stat_categories:
print(category['stat'], '\n')
Explanation: Example to get league metadata
End of explanation
teams_url = 'https://fantasysports.yahooapis.com/fantasy/v2/league/375.l.573/teams'
resp = oauth_session.get(teams_url, params={'format': 'json'})
league_teams = resp.json()['fantasy_content']['league'][1]['teams']
league_teams
league_team_count = int(league_teams['count'])
league_team_count
for idx in range(0,league_team_count):
league_team = league_teams[str(idx)]['team'][0]
print(league_team, '\n')
team_logo = league_team[5]['team_logos'][0]['team_logo']['url']
# print('team_log', team_logo)
Explanation: Get all teams of a league
End of explanation
stat_url = 'https://fantasysports.yahooapis.com/fantasy/v2/team/375.l.1039.t.17/stats;type=week;week=2'
resp = oauth_session.get(stat_url, params={'format': 'json'})
team_stats = resp.json()['fantasy_content']['team'][1]['team_stats']['stats']
team_stats
Explanation: Example to get team stats of week 2
End of explanation
stat_url = 'https://fantasysports.yahooapis.com/fantasy/v2/team/375.l.1039.t.17/stats'
resp = oauth_session.get(stat_url, params={'format': 'json'})
team_stats = resp.json()['fantasy_content']['team'][1]['team_stats']['stats']
team_stats
Explanation: Example to get team stats of whole season
End of explanation
stat_url = 'https://fantasysports.yahooapis.com/fantasy/v2/game/nba/stat_categories'
resp = oauth_session.get(stat_url, params={'format': 'json'})
stat_categories = resp.json()['fantasy_content']['game'][1]['stat_categories']['stats']
stat_categories
Explanation: Example to get game stat categories
End of explanation |
4,113 | Given the following text description, write Python code to implement the functionality described.
Description:
Find if given matrix is Toeplitz or not
Python3 program to check whether given matrix is a Toeplitz matrix or not ; Function to check if all elements present in descending diagonal starting from position ( i , j ) in the matrix are all same or not ; mismatch found ; we only reach here when all elements in given diagonal are same ; Function to check whether given matrix is a Toeplitz matrix or not ; do for each element in first row ; check descending diagonal starting from position ( 0 , j ) in the matrix ; do for each element in first column ; check descending diagonal starting from position ( i , 0 ) in the matrix ; we only reach here when each descending diagonal from left to right is same ; Driver Code ; Function call
| Python Code:
N = 5
M = 4
# Check whether every element on the descending diagonal starting at (i, j)
# matches the first element of that diagonal.
def checkDiagonal(mat, i, j):
    res = mat[i][j]
    i += 1
    j += 1
    while i < N and j < M:
        if mat[i][j] != res:
            return False
        i += 1
        j += 1
    return True
# A matrix is Toeplitz when every descending diagonal holds a single value,
# so it is enough to check the diagonals starting in the first row and column.
def isToeplitz(mat):
    for j in range(M):
        if not checkDiagonal(mat, 0, j):
            return False
    for i in range(1, N):
        if not checkDiagonal(mat, i, 0):
            return False
    return True
if __name__ == "__main__":
    mat = [[6, 7, 8, 9], [4, 6, 7, 8], [1, 4, 6, 7], [0, 1, 4, 6], [2, 0, 1, 4]]
    if isToeplitz(mat):
        print("Matrix is a Toeplitz matrix")
    else:
        print("Matrix is not a Toeplitz matrix")
|
4,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
01
Step1: Set all graphics from matplotlib to display inline
Step2: Read the csv in (it should be UTF-8 already so you don't have to worry about encoding), save it with the proper boring name
Step3: Display the names of the columns in the csv
Step4: Display the first 3 animals.
Step5: Sort the animals to see the 3 longest animals.
Step6: What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
Step7: Only select the dogs.
Step8: Display all of the animals that are greater than 40 cm.
Step9: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
Step10: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
Step11: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
Step12: What's the mean length of a cat?
Step13: What's the mean length of a dog?
Step14: Use groupby to accomplish both of the above tasks at once.
Step15: Make a histogram of the length of dogs. I apologize that it is so boring.
Step16: Change your graphing style to be something else (anything else!)
Step17: Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
Step18: Make a sorted horizontal bar graph of the cats, with the larger cats on top.
Step19: 02
Step20: My Second Try, Which Used a Spreadsheet of Populations
Step21: My First Try, Where I Looked Up the Populations by Hand
Step22: Who are the top 10 richest billionaires?
Step23: What's the average wealth of a billionaire? Male? Female?
Step24: Who is the poorest billionaire? Who are the top 10 poorest billionaires?
The 'Top 10' Poorest
Step25: But There Are Many More People Who Make Just As Little Money
Step26: 'What is relationship to company'? And what are the most common relationships?
According to the PDF, relationship to company "describes the billionaire's relationship to the company primarily responsible for their wealth, such as founder, executive, relation, or shareholder"
Step27: Most common source of wealth? Male vs. female?
Step28: Given the richest person in a country, what % of the GDP is their wealth?
Step29: Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India
Step30: What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
Step31: How many self made billionaires vs. others?
Step32: How old are billionaires? How old are billionaires self made vs. non self made? or different industries?
Billionaire Ages
Step33: Self-Made Billionaire Ages
Step34: The Ages of People Who Have Inherited Billions
Step35: The Ages of Billionaires in Different Industries
Step36: Who are the youngest billionaires? The oldest? Age distribution - maybe make a graph about it?
The Youngest Billionaires
Step37: The Oldest Billionaires
Step38: The Age Distribution of Billionaires
Step39: Maybe just make a graph about how wealthy they are in general?
Step40: Maybe plot their net worth vs age (scatterplot)
Step41: Make a bar graph of the top 10 or 20 richest
Step42: 03 | Python Code:
import pandas as pd
Explanation: 01: Building a pandas Cheat Sheet, Part 1
Use the csv I've attached to answer the following questions:
Import pandas with the right name
End of explanation
%matplotlib inline
Explanation: Set all graphics from matplotlib to display inline
End of explanation
df = pd.read_csv('07-hw-animals.csv')
Explanation: Read the csv in (it should be UTF-8 already so you don't have to worry about encoding), save it with the proper boring name
End of explanation
df.columns
Explanation: Display the names of the columns in the csv
End of explanation
df['animal'].head(3)
Explanation: Display the first 3 animals.
End of explanation
df.sort_values('length', ascending = False).head(3)
# or
df.sort_values('length').tail(3)
Explanation: Sort the animals to see the 3 longest animals.
End of explanation
df['animal'].value_counts()
Explanation: What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
End of explanation
df[df['animal'] == 'dog']
Explanation: Only select the dogs.
End of explanation
df[df['length'] > 40]
Explanation: Display all of the animals that are greater than 40 cm.
End of explanation
inches = df['length'] * 0.393701
df['inches'] = inches
Explanation: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
End of explanation
cats = df[df['animal'] == 'cat']
dogs = df[df['animal'] == 'dog']
Explanation: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
End of explanation
cats[cats['inches'] > 12]
df[(df['animal'] == 'cat') & (df['length'] > 12)]
Explanation: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
End of explanation
cats['inches'].describe()
Explanation: What's the mean length of a cat?
End of explanation
dogs['inches'].describe()
Explanation: What's the mean length of a dog?
End of explanation
df.groupby('animal').describe()
Explanation: Use groupby to accomplish both of the above tasks at once.
End of explanation
dogs['inches'].hist()
Explanation: Make a histogram of the length of dogs. I apologize that it is so boring.
End of explanation
import matplotlib.pyplot as plt
plt.style.use('seaborn-deep')
dogs['inches'].hist()
Explanation: Change your graphing style to be something else (anything else!)
End of explanation
df.plot(kind = 'barh', x = 'name', y = 'inches', legend = False)
Explanation: Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
End of explanation
cats.sort_values('inches').plot(kind = 'barh', x = 'name', y = 'inches', legend=False)
Explanation: Make a sorted horizontal bar graph of the cats, with the larger cats on top.
End of explanation
df = pd.read_excel('richpeople.xlsx')
df = df[df['year'] == 2014]
Explanation: 02: Doing some research
Answer your own selection out of the following questions, or any other questions you might be able to think of. Write the question down first in a markdown cell (use a # to make the question a nice header), THEN try to get an answer to it. A lot of these are remarkably similar, and some you'll need to do manual work for - the GDP ones, for example.
If you are trying to figure out some other question that we didn't cover in class and it does not have to do with joining to another data set, we're happy to help you figure it out during lab!
Take a peek at the billionaires notebook I uploaded into Slack; it should be helpful for the graphs (I added a few other styles and options, too). You'll probably also want to look at the "sum()" line I added.
What country are most billionaires from? For the top ones, how many billionaires per billion people?
End of explanation
cp = pd.read_excel('API_SP_POP_TOTL_DS2_en_excel_v2_toprowsremoved.xls')
pop_df = pd.merge(df, cp[['Country Code', '2014']], how = 'left', left_on = 'countrycode', right_on = 'Country Code')
dict_freq_countries = pop_df['citizenship'].value_counts().head(10).to_dict()
dict_freq_countries
for x in dict_freq_countries:
country_pop = pop_df[pop_df['citizenship'] == x].head(1).to_dict()
for key in country_pop['2014'].keys():
print(x, 'has', dict_freq_countries[x] / (country_pop['2014'][key] / 1000000000), 'billionaires per billion people.')
if country_pop['2014'][key] / 1000000000 < 1:
print('Of course, this is a nonsense figure for a country with less than a billion people.')
print('')
Explanation: My Second Try, Which Used a Spreadsheet of Populations
End of explanation
df['citizenship'].value_counts().head(10)
populations = [
{'country': 'United States', 'pop': 0.3214},
{'country': 'Germany', 'pop': 0.0809},
{'country': 'China' , 'pop': 1.3675},
{'country': 'Russia', 'pop': 0.1424},
{'country': 'Japan', 'pop': 0.1269},
{'country': 'Brazil' , 'pop': 0.2043},
{'country': 'Hong Kong' , 'pop': 0.0071},
{'country': 'France', 'pop': 0.0666},
{'country': 'United Kingdom', 'pop': 0.0641},
{'country': 'India', 'pop': 1.2517}, ]
for item in range(10):  # loop over all ten entries in the populations list
print(populations[item]['country'], 'has', df['citizenship'].value_counts()[item] / populations[item]['pop'], 'billionaires per billion people.')
if populations[item]['pop'] < 1:
print('Of course, this is a nonsense figure for a country with less than a billion people.')
print('')
#pop are in billions and based off of the CIA Factbook 2015 estimate
Explanation: My First Try, Where I Looked Up the Populations by Hand
End of explanation
df[['name', 'rank', 'networthusbillion']].sort_values('networthusbillion', ascending = False).head(10)
Explanation: Who are the top 10 richest billionaires?
End of explanation
df[['gender', 'networthusbillion']].groupby('gender').describe()
Explanation: What's the average wealth of a billionaire? Male? Female?
End of explanation
df[['name', 'rank', 'networthusbillion']].sort_values('networthusbillion').head(10)
Explanation: Who is the poorest billionaire? Who are the top 10 poorest billionaires?
The 'Top 10' Poorest
End of explanation
poorest_billionaires = df[(df['networthusbillion']) == (df['networthusbillion'].sort_values().head(1).values[0])]
print('But there are', poorest_billionaires['name'].count(), 'billionaires making just as little money:')
print('')
print(poorest_billionaires[['name', 'rank', 'networthusbillion']])
Explanation: But There Are Many More People Who Make Just As Little Money
End of explanation
df['relationshiptocompany'].value_counts()
Explanation: 'What is relationship to company'? And what are the most common relationships?
According to the PDF, relationship to company "describes the billionaire's relationship to the company primarily responsible for their wealth, such as founder, executive, relation, or shareholder"
End of explanation
print('Most common source of wealth:')
df['sourceofwealth'].value_counts().head(1)
print('The most common source of wealth for females and males:')
df[['gender', 'sourceofwealth']].groupby('gender').describe()
Explanation: Most common source of wealth? Male vs. female?
End of explanation
gdp = pd.read_excel('API_NY_GDP_MKTP_CD_DS2_en_excel_v2_rowsremoved.xls')
gdp.columns
gdp_df = pd.merge(df, gdp[['Country Code', '2014']], how = 'left', left_on = 'countrycode', right_on = 'Country Code')
gdp_df.head(1)
gdp_df[['name', 'citizenship', 'networthusbillion', '2014']].groupby('citizenship').max() #gives the max for each country
gdp_dict = gdp_df[['name', 'citizenship', 'networthusbillion', '2014']].groupby('citizenship').max().to_dict()
for country in gdp_dict['2014']:
print(country)
gdp_bill = gdp_dict['2014'][country] / 1000000000
print('gdp in billions:', gdp_bill)
print('richest billionaire:', gdp_dict['name'][country])
print('how many billions:', gdp_dict['networthusbillion'][country])
print('percent of gdp:', gdp_dict['networthusbillion'][country] / gdp_bill * 100)
print('')
Explanation: Given the richest person in a country, what % of the GDP is their wealth?
End of explanation
gdp_df[['citizenship', 'networthusbillion', '2014']].groupby('citizenship').sum() #gives the sum for each country
bill_df = gdp_df[['citizenship', 'networthusbillion', '2014']].groupby('citizenship').sum().to_dict()
for country in bill_df['2014']:
print(country)
gdp_bill = bill_df['2014'][country] / 1000000000
print('gdp in billions:', gdp_bill)
print('how many billions the billionaires there make:', bill_df['networthusbillion'][country])
print('percent of gdp:', bill_df['networthusbillion'][country] / gdp_bill * 100)
print('')
for country in bill_df['2014']:
if country == 'United States':
country1 = country
print(country)
gdp_bill1 = bill_df['2014'][country] / 1000000000
print('gdp in billions:', gdp_bill1)
billions1 = bill_df['networthusbillion'][country]
print('how many billions:', billions1)
percent1 = bill_df['networthusbillion'][country] / gdp_bill1 * 100
print('percent of gdp:', percent1)
print('')
elif country == 'India':
country2 = country
print(country)
gdp_bill2 = bill_df['2014'][country] / 1000000000
print('gdp in billions:', gdp_bill2)
billions2 = bill_df['networthusbillion'][country]
print('how many billions:', billions2)
percent2 = bill_df['networthusbillion'][country] / gdp_bill2 * 100
print('percent of gdp:', percent2)
print('')
print(country1 + "'s GDP is", gdp_bill1 / gdp_bill2, 'times that of', country2)
print(country1, 'billionaires make', billions1 / billions2, 'times the money those in', country2, 'do')
print(country1, 'billionaires\' share of their country\'s GDP is', percent1 / percent2, 'times that of those living in', country2)
Explanation: Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India
End of explanation
# df.columns
# df[['networthusbillion', 'industry', 'sector']].head()
print('The most common industries for billionaires to come from:')
df['industry'].value_counts().head()
print('The total amount of billionaire money in each industry:')
df[['industry', 'networthusbillion']].groupby('industry').sum().sort_values('networthusbillion', ascending = False)
Explanation: What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
End of explanation
df['selfmade'].value_counts()
Explanation: How many self made billionaires vs. others?
End of explanation
df['age'].hist()
df['age'].describe()
Explanation: How old are billionaires? How old are billionaires self made vs. non self made? or different industries?
Billionaire Ages
End of explanation
df[df['selfmade'] == 'self-made']['age'].hist()
df[df['selfmade'] == 'self-made']['age'].describe()
Explanation: Self-Made Billionaire Ages
End of explanation
df[df['selfmade'] == 'inherited']['age'].hist()
df[df['selfmade'] == 'inherited']['age'].describe()
Explanation: The Ages of People Who Have Inherited Billions
End of explanation
df[['age', 'industry']].groupby('industry').mean().sort_values('age')
Explanation: The Ages of Billionaires in Different Industries
End of explanation
df[['name', 'age']].sort_values('age').head()
Explanation: Who are the youngest billionaires? The oldest? Age distribution - maybe make a graph about it?
The Youngest Billionaires
End of explanation
df[['name', 'age']].sort_values('age', ascending=False).head()
Explanation: The Oldest Billionaires
End of explanation
df['age'].hist()
Explanation: The Age Distribution of Billionaires
End of explanation
df['networthusbillion'].hist()
# df['networthusbillion'].sort_values(ascending = False).head(10)
Explanation: Maybe just make a graph about how wealthy they are in general?
End of explanation
df[['networthusbillion', 'age']].plot(kind = 'scatter', x = 'networthusbillion', y = 'age')
Explanation: Maybe plot their net worth vs age (scatterplot)
End of explanation
df[['name', 'networthusbillion']].sort_values('networthusbillion', ascending = False).head(10).plot(kind = 'bar', x = 'name', y = 'networthusbillion')
Explanation: Make a bar graph of the top 10 or 20 richest
End of explanation
df = pd.read_json('https://data.sfgov.org/api/views/gxxq-x39z/rows.json')
# Can't get this to work! And I can't save the source code for some reason.
df.head()
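# A hypothetical fix, not verified against this dataset: Socrata rows.json
# endpoints usually wrap the records in a 'meta'/'data' envelope that plain
# read_json() does not unpack, so pulling the pieces out manually may work:
import requests
raw = requests.get('https://data.sfgov.org/api/views/gxxq-x39z/rows.json').json()
column_names = [c['name'] for c in raw['meta']['view']['columns']]
df = pd.DataFrame(raw['data'], columns=column_names)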
Explanation: 03: Finding your own dataset
On Thursday, bring a dataset with you that's a csv/tsv/whatever. Try to open it in pandas, and do df.head() to make sure it displays OK.
End of explanation |
4,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Demand forecasting with BigQuery and TensorFlow</h1>
In this notebook, we will develop a machine learning model to predict the demand for taxi cabs in New York.
To develop the model, we will need to get historical data of taxicab usage. This data exists in BigQuery. Let's start by looking at the schema.
Set up
Step1: Restart the kernel after installation.
Step3: Explore table
Step5: <h2> Analyzing taxicab demand </h2>
Let's pull the number of trips for each day in the 2015 dataset using Standard SQL.
Step7: <h3> Modular queries and Pandas dataframe </h3>
Let's use the total number of trips as our proxy for taxicab demand (other reasonable alternatives are total trip_distance or total fare_amount). It is possible to predict multiple variables using Tensorflow, but for simplicity, we will stick to just predicting the number of trips.
We will give our query a name 'taxiquery' and have it use an input variable '$YEAR'. We can then invoke the 'taxiquery' by giving it a YEAR. The to_dataframe() converts the BigQuery result into a <a href='http
Step8: <h3> Benchmark </h3>
Often, a reasonable estimate of something is its historical average. We can therefore benchmark our machine learning model against the historical average.
Step10: The mean here is about 400,000 and the root-mean-square-error (RMSE) in this case is about 52,000. In other words, if we were to estimate that there are 400,000 taxi trips on any given day, that estimate will be off on average by about 52,000 in either direction.
Let's see if we can do better than this -- our goal is to make predictions of taxicab demand whose RMSE is lower than 52,000.
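To make the benchmark concrete, here is a minimal sketch of the calculation being described; the numbers below are toy values for illustration only, not real trip counts:
import numpy as np
daily_trips = np.array([380000, 420000, 400000, 450000, 350000])  # toy numbers
avg = np.mean(daily_trips)                                        # historical average
rmse = np.sqrt(np.mean((daily_trips - avg) ** 2))
print('Just using average={0} has RMSE of {1}'.format(avg, rmse))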
What kinds of things affect people's use of taxicabs?
<h2> Weather data </h2>
We suspect that weather influences how often people use a taxi. Perhaps someone who'd normally walk to work would take a taxi if it is very cold or rainy.
One of the advantages of using a global data warehouse like BigQuery is that you get to mash up unrelated datasets quite easily.
Step12: <h3> Variables </h3>
Let's pull out the minimum and maximum daily temperature (in Fahrenheit) as well as the amount of rain (in inches) for La Guardia airport.
Step13: <h3> Merge datasets </h3>
Let's use Pandas to merge (combine) the taxi cab and weather datasets day-by-day.
Step14: <h3> Exploratory analysis </h3>
Is there a relationship between maximum temperature and the number of trips?
Step15: The scatterplot above doesn't look very promising. There appears to be a weak downward trend, but it's also quite noisy.
Is there a relationship between the day of the week and the number of trips?
Step16: Hurrah, we seem to have found a predictor. It appears that people use taxis more later in the week. Perhaps New Yorkers make weekly resolutions to walk more and then lose their determination later in the week, or maybe it reflects tourism dynamics in New York City.
Perhaps if we took out the <em>confounding</em> effect of the day of the week, maximum temperature will start to have an effect. Let's see if that's the case
Step17: Removing the confounding factor does seem to reflect an underlying trend around temperature. But ... the data are a little sparse, don't you think? This is something that you have to keep in mind -- the more predictors you start to consider (here we are using two
Step18: The data do seem a bit more robust. If we had even more data, it would be better of course. But in this case, we only have 2014-2016 data for taxi trips, so that's what we will go with.
<h2> Machine Learning with Tensorflow </h2>
We'll use 80% of our dataset for training and 20% of the data for testing the model we have trained. Let's shuffle the rows of the Pandas dataframe so that this division is random. The predictor (or input) columns will be every column in the database other than the number-of-trips (which is our target, or what we want to predict).
The machine learning models that we will use -- linear regression and neural networks -- both require that the input variables are numeric in nature.
The day of the week, however, is a categorical variable (i.e. Tuesday is not really greater than Monday). So, we should create separate columns for whether it is a Monday (with values 0 or 1), Tuesday, etc.
Against that, we do have limited data (remember
Step19: Let's update our benchmark based on the 80-20 split and the larger dataset.
Step20: <h2> Linear regression with tf.contrib.learn </h2>
We scale the number of taxicab rides by 600,000 so that the model can keep its predicted values in the [0-1] range. The optimization goes a lot faster when the weights are small numbers. We save the weights into ./trained_model_linear and display the root mean square error on the test dataset.
Step21: The RMSE here (57K) is lower than the benchmark (62K), which indicates that we are doing about 10% better with the machine learning model than we would be if we were to just use the historical average (our benchmark).
<h2> Neural network with tf.contrib.learn </h2>
Let's make a more complex model with a few hidden nodes.
Step22: Using a neural network results in similar performance to the linear model when I ran it -- it might be because there isn't enough data for the NN to do much better. (NN training is a non-convex optimization, and you will get different results each time you run the above code).
<h2> Running a trained model </h2>
So, we have trained a model, and saved it to a file. Let's use this model to predict taxicab demand given the expected weather for three days.
Here we make a Dataframe out of those inputs, load up the saved model (note that we have to know the model equation -- it's not saved in the model file) and use it to predict the taxicab demand. | Python Code:
!sudo pip install --user pandas-gbq
!pip install --user pandas_gbq
!pip install tensorflow==1.15.3
Explanation: <h1>Demand forecasting with BigQuery and TensorFlow</h1>
In this notebook, we will develop a machine learning model to predict the demand for taxi cabs in New York.
To develop the model, we will need to get historical data of taxicab usage. This data exists in BigQuery. Let's start by looking at the schema.
Set up
End of explanation
PROJECT = 'cloud-training-demos' # CHANGE this to your GCP project
BUCKET = PROJECT + '-ml'
REGION = 'us-central1' # CHANGE this to the region you want to use
import os
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project ${PROJECT}
gcloud config set compute/region ${REGION}
def query_to_dataframe(query):
import pandas as pd
return pd.read_gbq(query, dialect='standard', project_id=PROJECT)
Explanation: Restart the kernel after installation.
End of explanation
import pandas as pd
import numpy as np
import shutil
query_to_dataframe("""
SELECT * FROM `bigquery-public-data.new_york.tlc_yellow_trips_2015` LIMIT 10
""")
Explanation: Explore table
End of explanation
query_to_dataframe("""
SELECT
  EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber
FROM `bigquery-public-data.new_york.tlc_yellow_trips_2015`
LIMIT 5
""")
Explanation: <h2> Analyzing taxicab demand </h2>
Let's pull the number of trips for each day in the 2015 dataset using Standard SQL.
End of explanation
def taxiquery(year):
  return """
WITH trips AS (
  SELECT EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber
  FROM `bigquery-public-data.new_york.tlc_yellow_trips_*`
  where _TABLE_SUFFIX = '{}'
)
SELECT daynumber, COUNT(1) AS numtrips FROM trips
GROUP BY daynumber ORDER BY daynumber
""".format(year)
trips = query_to_dataframe(taxiquery(2015))
trips[:5]
Explanation: <h3> Modular queries and Pandas dataframe </h3>
Let's use the total number of trips as our proxy for taxicab demand (other reasonable alternatives are total trip_distance or total fare_amount). It is possible to predict multiple variables using Tensorflow, but for simplicity, we will stick to just predicting the number of trips.
We will wrap the query in a function named 'taxiquery' that takes a YEAR argument and substitutes it into the query string. We can then invoke 'taxiquery' by giving it a YEAR, and the query_to_dataframe() helper converts the BigQuery result into a <a href='http://pandas.pydata.org/'>Pandas</a> dataframe.
End of explanation
avg = np.mean(trips['numtrips'])
print('Just using average={0} has RMSE of {1}'.format(avg, np.sqrt(np.mean((trips['numtrips'] - avg)**2))))
Explanation: <h3> Benchmark </h3>
Often, a reasonable estimate of something is its historical average. We can therefore benchmark our machine learning model against the historical average.
End of explanation
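As a small convenience (an addition, not part of the original notebook), here is a helper for the error metric used throughout -- the root-mean-square error between actual and predicted trip counts:
def rmse(actual, predicted):
  # square the errors, average them, then take the square root
  return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted))**2))
# Reproduces the benchmark figure above: predict the historical mean every day.
rmse(trips['numtrips'], np.mean(trips['numtrips']))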
query_to_dataframe("""
SELECT * FROM `bigquery-public-data.noaa_gsod.stations`
WHERE state = 'NY' AND wban != '99999' AND name LIKE '%LA GUARDIA%'
""")
Explanation: The mean here is about 400,000 and the root-mean-square-error (RMSE) in this case is about 52,000. In other words, if we were to estimate that there are 400,000 taxi trips on any given day, that estimate will be off on average by about 52,000 in either direction.
Let's see if we can do better than this -- our goal is to make predictions of taxicab demand whose RMSE is lower than 52,000.
What kinds of things affect people's use of taxicabs?
<h2> Weather data </h2>
We suspect that weather influences how often people use a taxi. Perhaps someone who'd normally walk to work would take a taxi if it is very cold or rainy.
One of the advantages of using a global data warehouse like BigQuery is that you get to mash up unrelated datasets quite easily.
End of explanation
def wxquery(year):
  return """
SELECT EXTRACT (DAYOFYEAR FROM CAST(CONCAT('{0}','-',mo,'-',da) AS TIMESTAMP)) AS daynumber,
       MIN(EXTRACT (DAYOFWEEK FROM CAST(CONCAT('{0}','-',mo,'-',da) AS TIMESTAMP))) dayofweek,
       MIN(min) mintemp, MAX(max) maxtemp, MAX(IF(prcp=99.99,0,prcp)) rain
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE stn='725030' AND _TABLE_SUFFIX = '{0}'
GROUP BY 1 ORDER BY daynumber DESC
""".format(year)
weather = query_to_dataframe(wxquery(2015))
weather[:5]
Explanation: <h3> Variables </h3>
Let's pull out the minimum and maximum daily temperature (in Fahrenheit) as well as the amount of rain (in inches) for La Guardia airport.
End of explanation
data = pd.merge(weather, trips, on='daynumber')
data[:5]
Explanation: <h3> Merge datasets </h3>
Let's use Pandas to merge (combine) the taxi cab and weather datasets day-by-day.
End of explanation
j = data.plot(kind='scatter', x='maxtemp', y='numtrips')
Explanation: <h3> Exploratory analysis </h3>
Is there a relationship between maximum temperature and the number of trips?
End of explanation
j = data.plot(kind='scatter', x='dayofweek', y='numtrips')
Explanation: The scatterplot above doesn't look very promising. There appears to be a weak downward trend, but it's also quite noisy.
Is there a relationship between the day of the week and the number of trips?
End of explanation
j = data[data['dayofweek'] == 7].plot(kind='scatter', x='maxtemp', y='numtrips')
Explanation: Hurrah, we seem to have found a predictor. It appears that people use taxis more later in the week. Perhaps New Yorkers make weekly resolutions to walk more and then lose their determination later in the week, or maybe it reflects tourism dynamics in New York City.
Perhaps if we took out the <em>confounding</em> effect of the day of the week, maximum temperature will start to have an effect. Let's see if that's the case:
End of explanation
data2 = data # 2015 data
for year in [2014, 2016]:
weather = query_to_dataframe(wxquery(year))
trips = query_to_dataframe(taxiquery(year))
data_for_year = pd.merge(weather, trips, on='daynumber')
data2 = pd.concat([data2, data_for_year])
data2.describe()
j = data2[data2['dayofweek'] == 7].plot(kind='scatter', x='maxtemp', y='numtrips')
Explanation: Removing the confounding factor does seem to reflect an underlying trend around temperature. But ... the data are a little sparse, don't you think? This is something that you have to keep in mind -- the more predictors you start to consider (here we are using two: day of week and maximum temperature), the more rows you will need so as to avoid <em> overfitting </em> the model.
<h3> Adding 2014 and 2016 data </h3>
Let's add in 2014 and 2016 data to the Pandas dataframe. Note how useful it was for us to modularize our queries around the YEAR.
End of explanation
import tensorflow as tf
shuffled = data2.sample(frac=1, random_state=13)
# It would be a good idea, if we had more data, to treat the days as categorical variables
# with the small amount of data, we have though, the model tends to overfit
#predictors = shuffled.iloc[:,2:5]
#for day in range(1,8):
# matching = shuffled['dayofweek'] == day
# key = 'day_' + str(day)
# predictors[key] = pd.Series(matching, index=predictors.index, dtype=float)
predictors = shuffled.iloc[:,1:5]
predictors[:5]
shuffled[:5]
targets = shuffled.iloc[:,5]
targets[:5]
Explanation: The data do seem a bit more robust. If we had even more data, it would be better of course. But in this case, we only have 2014-2016 data for taxi trips, so that's what we will go with.
<h2> Machine Learning with Tensorflow </h2>
We'll use 80% of our dataset for training and 20% of the data for testing the model we have trained. Let's shuffle the rows of the Pandas dataframe so that this division is random. The predictor (or input) columns will be every column in the database other than the number-of-trips (which is our target, or what we want to predict).
The machine learning models that we will use -- linear regression and neural networks -- both require that the input variables are numeric in nature.
The day of the week, however, is a categorical variable (i.e. Tuesday is not really greater than Monday). So, we should create separate columns for whether it is a Monday (with values 0 or 1), Tuesday, etc.
Against that, we do have limited data (remember: the more columns you use as input features, the more rows you need to have in your training dataset), and it appears that there is a clear linear trend by day of the week. So, we will opt for simplicity here and use the data as-is. Try uncommenting the code that creates separate columns for the days of the week and re-run the notebook if you are curious about the impact of this simplification.
End of explanation
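As an aside, here is a brief sketch (not used by the rest of the notebook) of the one-hot encoding discussed above, done with pandas; it assumes the `shuffled` dataframe created in the previous cell:
day_dummies = pd.get_dummies(shuffled['dayofweek'], prefix='day')   # one 0/1 column per day of week
predictors_onehot = pd.concat([shuffled.iloc[:, 2:5], day_dummies], axis=1)
predictors_onehot[:5]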
trainsize = int(len(shuffled['numtrips']) * 0.8)
avg = np.mean(shuffled['numtrips'][:trainsize])
rmse = np.sqrt(np.mean((targets[trainsize:] - avg)**2))
print('Just using average={0} has RMSE of {1}'.format(avg, rmse))
Explanation: Let's update our benchmark based on the 80-20 split and the larger dataset.
End of explanation
SCALE_NUM_TRIPS = 600000.0
trainsize = int(len(shuffled['numtrips']) * 0.8)
testsize = len(shuffled['numtrips']) - trainsize
npredictors = len(predictors.columns)
noutputs = 1
tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...
shutil.rmtree('./trained_model_linear', ignore_errors=True) # so that we don't load weights from previous runs
estimator = tf.contrib.learn.LinearRegressor(model_dir='./trained_model_linear',
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))
print("starting to train ... this will take a while ... use verbosity=INFO to get more verbose output")
def input_fn(features, targets):
return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)
estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)
pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )
rmse = np.sqrt(np.mean(np.power((targets[trainsize:].values - pred), 2)))
print('LinearRegression has RMSE of {0}'.format(rmse))
Explanation: <h2> Linear regression with tf.contrib.learn </h2>
We scale the number of taxicab rides by 600,000 so that the model can keep its predicted values in the [0-1] range. The optimization goes a lot faster when the weights are small numbers. We save the weights into ./trained_model_linear and display the root mean square error on the test dataset.
End of explanation
SCALE_NUM_TRIPS = 600000.0
trainsize = int(len(shuffled['numtrips']) * 0.8)
testsize = len(shuffled['numtrips']) - trainsize
npredictors = len(predictors.columns)
noutputs = 1
tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...
shutil.rmtree('./trained_model', ignore_errors=True) # so that we don't load weights from previous runs
estimator = tf.contrib.learn.DNNRegressor(model_dir='./trained_model',
hidden_units=[5, 5],
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))
print("starting to train ... this will take a while ... use verbosity=INFO to get more verbose output")
def input_fn(features, targets):
return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)
estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)
pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )
rmse = np.sqrt(np.mean((targets[trainsize:].values - pred)**2))
print('Neural Network Regression has RMSE of {0}'.format(rmse))
Explanation: The RMSE here (57K) is lower than the benchmark (62K), which indicates that we are doing about 10% better with the machine learning model than we would be if we were to just use the historical average (our benchmark).
<h2> Neural network with tf.contrib.learn </h2>
Let's make a more complex model with a few hidden nodes.
End of explanation
input = pd.DataFrame.from_dict(data =
{'dayofweek' : [4, 5, 6],
'mintemp' : [60, 40, 50],
'maxtemp' : [70, 90, 60],
'rain' : [0, 0.5, 0]})
# read trained model from ./trained_model
estimator = tf.contrib.learn.LinearRegressor(model_dir='./trained_model_linear',
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(input.values))
pred = np.multiply(list(estimator.predict(input.values)), SCALE_NUM_TRIPS )
print(pred)
Explanation: Using a neural network results in similar performance to the linear model when I ran it -- it might be because there isn't enough data for the NN to do much better. (NN training is a non-convex optimization, and you will get different results each time you run the above code).
<h2> Running a trained model </h2>
So, we have trained a model, and saved it to a file. Let's use this model to predict taxicab demand given the expected weather for three days.
Here we make a Dataframe out of those inputs, load up the saved model (note that we have to know the model equation -- it's not saved in the model file) and use it to predict the taxicab demand.
End of explanation |
4,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Introduction to gradients and automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Computing gradients
To differentiate automatically, TensorFlow needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients.
Gradient tapes
TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables.
TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape to compute the gradients of a "recorded" computation using reverse mode differentiation.
Here is a simple example
Step3: Once you've recorded some operations, use GradientTape.gradient(target, sources) to calculate the gradient of some target (often a loss) relative to some source (often the model's variables)
Step4: The above example uses scalars, but tf.GradientTape works as easily on any tensor
Step5: To get the gradient of loss with respect to both variables, you can pass both as sources to the gradient method. The tape is flexible about how sources are passed and will accept any nested combination of lists or dictionaries and return the gradient structured the same way (see tf.nest).
Step6: The gradient with respect to each source has the shape of the source
Step7: Here is the gradient calculation again, this time passing a dictionary of variables
Step8: Gradients with respect to a model
It's common to collect tf.Variables into a tf.Module or one of its subclasses (layers.Layer, keras.Model) for checkpointing and exporting.
In most cases, you will want to calculate gradients with respect to a model's trainable variables. Since all subclasses of tf.Module aggregate their variables in the Module.trainable_variables property, you can calculate these gradients in a few lines of code
Step9: <a id="watches"></a>
Controlling what the tape watches
The default behavior is to record all operations after accessing a trainable tf.Variable. The reasons for this are
Step10: You can list the variables being watched by the tape using the GradientTape.watched_variables method
Step11: tf.GradientTape provides hooks that give the user control over what is or is not watched.
To record gradients with respect to a tf.Tensor, you need to call GradientTape.watch(x)
Step12: Conversely, to disable the default behavior of watching all tf.Variables, set watch_accessed_variables=False when creating the gradient tape. This calculation uses two variables, but only connects the gradient for one of the variables
Step13: Since GradientTape.watch was not called on x0, no gradient is computed with respect to it
Step14: Intermediate results
You can also request gradients of the output with respect to intermediate values computed inside the tf.GradientTape context.
Step15: By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient method is called. To compute multiple gradients over the same computation, create a gradient tape with persistent=True. This allows multiple calls to the gradient method as resources are released when the tape object is garbage collected. For example
Step16: Notes on performance
There is a tiny overhead associated with doing operations inside a gradient tape context. For most eager execution this will not be a noticeable cost, but you should still use the tape context only around the areas where it is required.
Gradient tapes use memory to store intermediate results, including inputs and outputs, for use during the backwards pass.
For efficiency, some ops (like ReLU) don't need to keep their intermediate results and they are pruned during the forward pass. However, if you use persistent=True on your tape, nothing is discarded and your peak memory usage will be higher.
Gradients of non-scalar targets
A gradient is fundamentally an operation on a scalar.
Step17: Thus, if you ask for the gradient of multiple targets, the result for each source is
Step18: Similarly, if the target(s) are not scalar the gradient of the sum is calculated
Step19: This makes it simple to take the gradient of the sum of a collection of losses, or the gradient of the sum of an element-wise loss calculation.
If you need a separate gradient for each item, refer to Jacobians.
In some cases you can skip the Jacobian. For an element-wise calculation, the gradient of the sum gives the derivative of each element with respect to its input-element, since each element is independent
Step20: Control flow
Because a gradient tape records operations as they are executed, Python control flow is naturally handled (for example, if and while statements).
Here a different variable is used on each branch of an if. The gradient only connects to the variable that was used
Step21: Just remember that the control statements themselves are not differentiable, so they are invisible to gradient-based optimizers.
Depending on the value of x in the above example, the tape either records result = v0 or result = v1**2. The gradient with respect to x is always None.
Step22: Getting a gradient of None
When a target is not connected to a source you will get a gradient of None.
Step23: Here z is obviously not connected to x, but there are several less-obvious ways that a gradient can be disconnected.
1. Replaced a variable with a tensor
In the section on "controlling what the tape watches" you saw that the tape will automatically watch a tf.Variable but not a tf.Tensor.
One common error is to inadvertently replace a tf.Variable with a tf.Tensor, instead of using Variable.assign to update the tf.Variable. Here is an example
Step24: 2. Did calculations outside of TensorFlow
The tape can't record the gradient path if the calculation exits TensorFlow.
For example
Step25: 3. Took gradients through an integer or string
Integers and strings are not differentiable. If a calculation path uses these data types there will be no gradient.
Nobody expects strings to be differentiable, but it's easy to accidentally create an int constant or variable if you don't specify the dtype.
Step26: TensorFlow doesn't automatically cast between types, so, in practice, you'll often get a type error instead of a missing gradient.
4. Took gradients through a stateful object
State stops gradients. When you read from a stateful object, the tape can only observe the current state, not the history that led to it.
A tf.Tensor is immutable. You can't change a tensor once it's created. It has a value, but no state. All the operations discussed so far are also stateless
Step27: Similarly, tf.data.Dataset iterators and tf.queues are stateful, and will stop all gradients on tensors that pass through them.
No gradient registered
Some tf.Operations are registered as being non-differentiable and will return None. Others have no gradient registered.
The tf.raw_ops page shows which low-level ops have gradients registered.
If you attempt to take a gradient through a float op that has no gradient registered the tape will throw an error instead of silently returning None. This way you know something has gone wrong.
For example, the tf.image.adjust_contrast function wraps raw_ops.AdjustContrastv2, which could have a gradient but the gradient is not implemented
Step28: If you need to differentiate through this op, you'll either need to implement the gradient and register it (using tf.RegisterGradient) or re-implement the function using other ops.
Zeros instead of None
In some cases it would be convenient to get 0 instead of None for unconnected gradients. You can decide what to return when you have unconnected gradients using the unconnected_gradients argument | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
Explanation: Introduction to gradients and automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/autodiff"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/autodiff.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/autodiff.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/autodiff.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Automatic Differentiation and Gradients
Automatic differentiation
is useful for implementing machine learning algorithms such as
backpropagation for training
neural networks.
In this guide, you will explore ways to compute gradients with TensorFlow, especially in eager execution.
Setup
End of explanation
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
y = x**2
Explanation: Computing gradients
To differentiate automatically, TensorFlow needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients.
Gradient tapes
TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables.
TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape to compute the gradients of a "recorded" computation using reverse mode differentiation.
Here is a simple example:
End of explanation
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
dy_dx.numpy()
Explanation: Once you've recorded some operations, use GradientTape.gradient(target, sources) to calculate the gradient of some target (often a loss) relative to some source (often the model's variables):
End of explanation
w = tf.Variable(tf.random.normal((3, 2)), name='w')
b = tf.Variable(tf.zeros(2, dtype=tf.float32), name='b')
x = [[1., 2., 3.]]
with tf.GradientTape(persistent=True) as tape:
y = x @ w + b
loss = tf.reduce_mean(y**2)
Explanation: The above example uses scalars, but tf.GradientTape works as easily on any tensor:
End of explanation
[dl_dw, dl_db] = tape.gradient(loss, [w, b])
Explanation: To get the gradient of loss with respect to both variables, you can pass both as sources to the gradient method. The tape is flexible about how sources are passed and will accept any nested combination of lists or dictionaries and return the gradient structured the same way (see tf.nest).
End of explanation
print(w.shape)
print(dl_dw.shape)
Explanation: The gradient with respect to each source has the shape of the source:
End of explanation
my_vars = {
'w': w,
'b': b
}
grad = tape.gradient(loss, my_vars)
grad['b']
Explanation: Here is the gradient calculation again, this time passing a dictionary of variables:
End of explanation
layer = tf.keras.layers.Dense(2, activation='relu')
x = tf.constant([[1., 2., 3.]])
with tf.GradientTape() as tape:
# Forward pass
y = layer(x)
loss = tf.reduce_mean(y**2)
# Calculate gradients with respect to every trainable variable
grad = tape.gradient(loss, layer.trainable_variables)
for var, g in zip(layer.trainable_variables, grad):
print(f'{var.name}, shape: {g.shape}')
Explanation: Gradients with respect to a model
It's common to collect tf.Variables into a tf.Module or one of its subclasses (layers.Layer, keras.Model) for checkpointing and exporting.
In most cases, you will want to calculate gradients with respect to a model's trainable variables. Since all subclasses of tf.Module aggregate their variables in the Module.trainable_variables property, you can calculate these gradients in a few lines of code:
End of explanation
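A natural next step (shown here only as a sketch; the optimizer and learning rate are arbitrary choices, not part of the original guide) is to hand these gradients to an optimizer so the layer's variables are actually updated:
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
optimizer.apply_gradients(zip(grad, layer.trainable_variables))  # pairs each gradient with its variable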
# A trainable variable
x0 = tf.Variable(3.0, name='x0')
# Not trainable
x1 = tf.Variable(3.0, name='x1', trainable=False)
# Not a Variable: A variable + tensor returns a tensor.
x2 = tf.Variable(2.0, name='x2') + 1.0
# Not a variable
x3 = tf.constant(3.0, name='x3')
with tf.GradientTape() as tape:
y = (x0**2) + (x1**2) + (x2**2)
grad = tape.gradient(y, [x0, x1, x2, x3])
for g in grad:
print(g)
Explanation: <a id="watches"></a>
Controlling what the tape watches
The default behavior is to record all operations after accessing a trainable tf.Variable. The reasons for this are:
The tape needs to know which operations to record in the forward pass to calculate the gradients in the backwards pass.
The tape holds references to intermediate outputs, so you don't want to record unnecessary operations.
The most common use case involves calculating the gradient of a loss with respect to all a model's trainable variables.
For example, the following fails to calculate a gradient because the tf.Tensor is not "watched" by default, and the tf.Variable is not trainable:
End of explanation
[var.name for var in tape.watched_variables()]
Explanation: You can list the variables being watched by the tape using the GradientTape.watched_variables method:
End of explanation
x = tf.constant(3.0)
with tf.GradientTape() as tape:
tape.watch(x)
y = x**2
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())
Explanation: tf.GradientTape provides hooks that give the user control over what is or is not watched.
To record gradients with respect to a tf.Tensor, you need to call GradientTape.watch(x):
End of explanation
x0 = tf.Variable(0.0)
x1 = tf.Variable(10.0)
with tf.GradientTape(watch_accessed_variables=False) as tape:
tape.watch(x1)
y0 = tf.math.sin(x0)
y1 = tf.nn.softplus(x1)
y = y0 + y1
ys = tf.reduce_sum(y)
Explanation: Conversely, to disable the default behavior of watching all tf.Variables, set watch_accessed_variables=False when creating the gradient tape. This calculation uses two variables, but only connects the gradient for one of the variables:
End of explanation
# dys/dx1 = exp(x1) / (1 + exp(x1)) = sigmoid(x1)
grad = tape.gradient(ys, {'x0': x0, 'x1': x1})
print('dy/dx0:', grad['x0'])
print('dy/dx1:', grad['x1'].numpy())
Explanation: Since GradientTape.watch was not called on x0, no gradient is computed with respect to it:
End of explanation
x = tf.constant(3.0)
with tf.GradientTape() as tape:
tape.watch(x)
y = x * x
z = y * y
# Use the tape to compute the gradient of z with respect to the
# intermediate value y.
# dz_dy = 2 * y and y = x ** 2 = 9
print(tape.gradient(z, y).numpy())
Explanation: Intermediate results
You can also request gradients of the output with respect to intermediate values computed inside the tf.GradientTape context.
End of explanation
x = tf.constant([1, 3.0])
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
y = x * x
z = y * y
print(tape.gradient(z, x).numpy()) # [4.0, 108.0] (4 * x**3 at x = [1.0, 3.0])
print(tape.gradient(y, x).numpy()) # [2.0, 6.0] (2 * x at x = [1.0, 3.0])
del tape # Drop the reference to the tape
Explanation: By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient method is called. To compute multiple gradients over the same computation, create a gradient tape with persistent=True. This allows multiple calls to the gradient method as resources are released when the tape object is garbage collected. For example:
End of explanation
x = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
y0 = x**2
y1 = 1 / x
print(tape.gradient(y0, x).numpy())
print(tape.gradient(y1, x).numpy())
Explanation: Notes on performance
There is a tiny overhead associated with doing operations inside a gradient tape context. For most eager execution this will not be a noticeable cost, but you should still use the tape context only around the areas where it is required.
Gradient tapes use memory to store intermediate results, including inputs and outputs, for use during the backwards pass.
For efficiency, some ops (like ReLU) don't need to keep their intermediate results and they are pruned during the forward pass. However, if you use persistent=True on your tape, nothing is discarded and your peak memory usage will be higher.
Gradients of non-scalar targets
A gradient is fundamentally an operation on a scalar.
End of explanation
x = tf.Variable(2.0)
with tf.GradientTape() as tape:
y0 = x**2
y1 = 1 / x
print(tape.gradient({'y0': y0, 'y1': y1}, x).numpy())
Explanation: Thus, if you ask for the gradient of multiple targets, the result for each source is:
The gradient of the sum of the targets, or equivalently
The sum of the gradients of each target.
End of explanation
x = tf.Variable(2.)
with tf.GradientTape() as tape:
y = x * [3., 4.]
print(tape.gradient(y, x).numpy())
Explanation: Similarly, if the target(s) are not scalar the gradient of the sum is calculated:
End of explanation
x = tf.linspace(-10.0, 10.0, 200+1)
with tf.GradientTape() as tape:
tape.watch(x)
y = tf.nn.sigmoid(x)
dy_dx = tape.gradient(y, x)
plt.plot(x, y, label='y')
plt.plot(x, dy_dx, label='dy/dx')
plt.legend()
_ = plt.xlabel('x')
Explanation: This makes it simple to take the gradient of the sum of a collection of losses, or the gradient of the sum of an element-wise loss calculation.
If you need a separate gradient for each item, refer to Jacobians.
In some cases you can skip the Jacobian. For an element-wise calculation, the gradient of the sum gives the derivative of each element with respect to its input-element, since each element is independent:
End of explanation
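For reference, a minimal sketch of the per-item alternative mentioned above: GradientTape.jacobian returns one gradient per output element instead of summing them.
x = tf.Variable([1.0, 2.0, 3.0])
with tf.GradientTape() as tape:
  y = x * x
# Shape (3, 3): entry [i, j] is dy_i/dx_j, i.e. a diagonal matrix with 2*x on the diagonal.
print(tape.jacobian(y, x).numpy())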
x = tf.constant(1.0)
v0 = tf.Variable(2.0)
v1 = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
if x > 0.0:
result = v0
else:
result = v1**2
dv0, dv1 = tape.gradient(result, [v0, v1])
print(dv0)
print(dv1)
Explanation: Control flow
Because a gradient tape records operations as they are executed, Python control flow is naturally handled (for example, if and while statements).
Here a different variable is used on each branch of an if. The gradient only connects to the variable that was used:
End of explanation
dx = tape.gradient(result, x)
print(dx)
Explanation: Just remember that the control statements themselves are not differentiable, so they are invisible to gradient-based optimizers.
Depending on the value of x in the above example, the tape either records result = v0 or result = v1**2. The gradient with respect to x is always None.
End of explanation
x = tf.Variable(2.)
y = tf.Variable(3.)
with tf.GradientTape() as tape:
z = y * y
print(tape.gradient(z, x))
Explanation: Getting a gradient of None
When a target is not connected to a source you will get a gradient of None.
End of explanation
x = tf.Variable(2.0)
for epoch in range(2):
with tf.GradientTape() as tape:
y = x+1
print(type(x).__name__, ":", tape.gradient(y, x))
x = x + 1 # This should be `x.assign_add(1)`
Explanation: Here z is obviously not connected to x, but there are several less-obvious ways that a gradient can be disconnected.
1. Replaced a variable with a tensor
In the section on "controlling what the tape watches" you saw that the tape will automatically watch a tf.Variable but not a tf.Tensor.
One common error is to inadvertently replace a tf.Variable with a tf.Tensor, instead of using Variable.assign to update the tf.Variable. Here is an example:
End of explanation
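For contrast, here is a sketch of the same loop written with Variable.assign_add, so x stays a tf.Variable and the gradient is connected on every iteration:
x = tf.Variable(2.0)
for epoch in range(2):
  with tf.GradientTape() as tape:
    y = x + 1
  print(type(x).__name__, ":", tape.gradient(y, x))
  x.assign_add(1.0)  # in-place update keeps x a tf.Variable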
x = tf.Variable([[1.0, 2.0],
[3.0, 4.0]], dtype=tf.float32)
with tf.GradientTape() as tape:
x2 = x**2
# This step is calculated with NumPy
y = np.mean(x2, axis=0)
# Like most ops, reduce_mean will cast the NumPy array to a constant tensor
# using `tf.convert_to_tensor`.
y = tf.reduce_mean(y, axis=0)
print(tape.gradient(y, x))
Explanation: 2. Did calculations outside of TensorFlow
The tape can't record the gradient path if the calculation exits TensorFlow.
For example:
End of explanation
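A sketch of the same calculation kept entirely inside TensorFlow: with tf.reduce_mean in place of np.mean, the tape can record the whole path and the gradient is defined.
x = tf.Variable([[1.0, 2.0],
                 [3.0, 4.0]], dtype=tf.float32)
with tf.GradientTape() as tape:
  x2 = x**2
  y = tf.reduce_mean(x2, axis=0)
  y = tf.reduce_mean(y, axis=0)
print(tape.gradient(y, x))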
x = tf.constant(10)
with tf.GradientTape() as g:
g.watch(x)
y = x * x
print(g.gradient(y, x))
Explanation: 3. Took gradients through an integer or string
Integers and strings are not differentiable. If a calculation path uses these data types there will be no gradient.
Nobody expects strings to be differentiable, but it's easy to accidentally create an int constant or variable if you don't specify the dtype.
End of explanation
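Giving the constant a float dtype (a small fix to the example above) is enough to restore the gradient:
x = tf.constant(10.0)
with tf.GradientTape() as g:
  g.watch(x)
  y = x * x
print(g.gradient(y, x))  # tf.Tensor(20.0, shape=(), dtype=float32)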
x0 = tf.Variable(3.0)
x1 = tf.Variable(0.0)
with tf.GradientTape() as tape:
# Update x1 = x1 + x0.
x1.assign_add(x0)
# The tape starts recording from x1.
y = x1**2 # y = (x1 + x0)**2
# This doesn't work.
print(tape.gradient(y, x0)) #dy/dx0 = 2*(x1 + x0)
Explanation: TensorFlow doesn't automatically cast between types, so, in practice, you'll often get a type error instead of a missing gradient.
4. Took gradients through a stateful object
State stops gradients. When you read from a stateful object, the tape can only observe the current state, not the history that lead to it.
A tf.Tensor is immutable. You can't change a tensor once it's created. It has a value, but no state. All the operations discussed so far are also stateless: the output of a tf.matmul only depends on its inputs.
A tf.Variable has internal state—its value. When you use the variable, the state is read. It's normal to calculate a gradient with respect to a variable, but the variable's state blocks gradient calculations from going farther back. For example:
End of explanation
image = tf.Variable([[[0.5, 0.0, 0.0]]])
delta = tf.Variable(0.1)
with tf.GradientTape() as tape:
new_image = tf.image.adjust_contrast(image, delta)
try:
print(tape.gradient(new_image, [image, delta]))
assert False # This should not happen.
except LookupError as e:
print(f'{type(e).__name__}: {e}')
Explanation: Similarly, tf.data.Dataset iterators and tf.queues are stateful, and will stop all gradients on tensors that pass through them.
No gradient registered
Some tf.Operations are registered as being non-differentiable and will return None. Others have no gradient registered.
The tf.raw_ops page shows which low-level ops have gradients registered.
If you attempt to take a gradient through a float op that has no gradient registered the tape will throw an error instead of silently returning None. This way you know something has gone wrong.
For example, the tf.image.adjust_contrast function wraps raw_ops.AdjustContrastv2, which could have a gradient but the gradient is not implemented:
End of explanation
x = tf.Variable([2., 2.])
y = tf.Variable(3.)
with tf.GradientTape() as tape:
z = y**2
print(tape.gradient(z, x, unconnected_gradients=tf.UnconnectedGradients.ZERO))
Explanation: If you need to differentiate through this op, you'll either need to implement the gradient and register it (using tf.RegisterGradient) or re-implement the function using other ops.
Zeros instead of None
In some cases it would be convenient to get 0 instead of None for unconnected gradients. You can decide what to return when you have unconnected gradients using the unconnected_gradients argument:
End of explanation |
4,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Twitter + Watson Tone Analyzer Sample Notebook
In this sample notebook, we show how to load and analyze data from the Twitter + Watson Tone Analyzer Spark sample application (code can be found here https
Step1: Load the data
In this section, we load the data from a parquet file that has been saved from a scala notebook (see tutorial here...) and create a SparkSQL DataFrame that contains all the data.
Step2: Compute the distribution of tweets by sentiments > 60%
In this section, we demonstrate how to use SparkSQL queries to compute, for each tone, the number of tweets whose score is greater than 60%
Step3: Breakdown of the top 5 hashtags by sentiment scores
In this section, we demonstrate how to build a more complex analytic that decomposes the top 5 hashtags by sentiment scores. The code below computes the mean of all the sentiment scores and visualizes them in a multi-series bar chart
# Import SQLContext and data types
from pyspark.sql import SQLContext
from pyspark.sql.types import *
Explanation: Twitter + Watson Tone Analyzer Sample Notebook
In this sample notebook, we show how to load and analyze data from the Twitter + Watson Tone Analyzer Spark sample application (code can be found here https://github.com/ibm-cds-labs/spark.samples/tree/master/streaming-twitter). The tweets data has been enriched with scores for various sentiment tones (e.g. Anger, Cheerfulness, etc.).
End of explanation
parquetFile = sqlContext.read.parquet("swift://notebooks.spark/tweetsFull.parquet")
print parquetFile
parquetFile.registerTempTable("tweets");
sqlContext.cacheTable("tweets")
tweets = sqlContext.sql("SELECT * FROM tweets")
print tweets.count()
tweets.cache()
Explanation: Load the data
In this section, we load the data from a parquet file that has been saved from a scala notebook (see tutorial here...) and create a SparkSQL DataFrame that contains all the data.
End of explanation
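A quick way (an added aside, not in the original notebook) to inspect the columns -- including the 13 tone score columns used below -- is to print the dataframe schema:
tweets.printSchema()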
#create an array that will hold the count for each sentiment
sentimentDistribution=[0] * 13
#For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60%
#Store the data in the array
for i, sentiment in enumerate(tweets.columns[-13:]):
sentimentDistribution[i]=sqlContext.sql("SELECT count(*) as sentCount FROM tweets where " + sentiment + " > 60")\
.collect()[0].sentCount
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
ind=np.arange(13)
width = 0.35
bar = plt.bar(ind, sentimentDistribution, width, color='g', label = "distributions")
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )
plt.ylabel('Tweet count')
plt.xlabel('Tone')
plt.title('Distribution of tweets by sentiments > 60%')
plt.xticks(ind+width, tweets.columns[-13:])
plt.legend()
plt.show()
from operator import add
import re
tagsRDD = tweets.flatMap( lambda t: re.split("\s", t.text))\
.filter( lambda word: word.startswith("#") )\
.map( lambda word : (word, 1 ))\
.reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a))
top10tags = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
print(top10tags)
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2, plSize[1]*2) )
labels = [i[0] for i in top10tags]
sizes = [int(i[1]) for i in top10tags]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', "beige", "paleturquoise", "pink", "lightyellow", "coral"]
plt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90)
plt.axis('equal')
plt.show()
Explanation: Compute the distribution of tweets by sentiments > 60%
In this section, we demonstrate how to use SparkSQL queries to compute, for each tone, the number of tweets whose score is greater than 60%
End of explanation
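As an aside (a sketch, not part of the original notebook), the same per-tone count can also be computed with the DataFrame API instead of a SQL string; tweets.columns[-13:] are the tone columns used above:
tone = tweets.columns[-13]   # any one of the 13 tone columns
print(tweets.filter(tweets[tone] > 60).count())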
cols = tweets.columns[-13:]
def expand( t ):
ret = []
for s in [i[0] for i in top10tags]:
if ( s in t.text ):
for tone in cols:
ret += [s.replace(':','').replace('-','') + u"-" + unicode(tone) + ":" + unicode(getattr(t, tone))]
return ret
def makeList(l):
return l if isinstance(l, list) else [l]
#Create RDD from tweets dataframe
tagsRDD = tweets.map(lambda t: t )
#Filter to only keep the entries that are in top10tags
tagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) )
#Create a flatMap using the expand function defined above, this will be used to collect all the scores
#for a particular tag with the following format: Tag-Tone-ToneScore
tagsRDD = tagsRDD.flatMap( expand )
#Create a map indexed by Tag-Tone keys
tagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(":")[0], float( fullTag.split(":")[1]) ))
#Call combineByKey to format the data as follow
#Key=Tag-Tone
#Value=(count, sum_of_all_score_for_this_tone)
tagsRDD = tagsRDD.combineByKey((lambda x: (x,1)),
(lambda x, y: (x[0] + y, x[1] + 1)),
(lambda x, y: (x[0] + y[0], x[1] + y[1])))
#ReIndex the map to have the key be the Tag and value be (Tone, Average_score) tuple
#Key=Tag
#Value=(Tone, average_score)
tagsRDD = tagsRDD.map(lambda (key, ab): (key.split("-")[0], (key.split("-")[1], round(ab[0]/ab[1], 2))))
#Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples
tagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) )
#Sort the (Tone,average_score) tuples alphabetically by Tone
tagsRDD = tagsRDD.mapValues( lambda x : sorted(x) )
#Format the data as expected by the plotting code in the next cell.
#map the Values to a tuple as follow: ([list of tone], [list of average score])
#e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0])
tagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) )
#Use custom sort function to sort the entries by order of appearance in top10tags
def customCompare( key ):
for (k,v) in top10tags:
if k == key:
return v
return 0
tagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare)
#Take the mean tone scores for the top 10 tags
top10tagsMeanScores = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*3, plSize[1]*2) )
top5tagsMeanScores = top10tagsMeanScores[:5]
width = 0
ind=np.arange(13)
(a,b) = top5tagsMeanScores[0]
labels=b[0]
colors = ["beige", "paleturquoise", "pink", "lightyellow", "coral", "lightgreen", "gainsboro", "aquamarine","c"]
idx=0
for key, value in top5tagsMeanScores:
plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key)
width += 0.15
idx += 1
plt.xticks(ind+0.3, labels)
plt.ylabel('AVERAGE SCORE')
plt.xlabel('TONES')
plt.title('Breakdown of top hashtags by sentiment tones')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode="expand", borderaxespad=0.)
plt.show()
Explanation: Breakdown of the top 5 hashtags by sentiment scores
In this section, we demonstrate how to build a more complex analytic that decomposes the top 5 hashtags by sentiment scores. The code below computes the mean of all the sentiment scores and visualizes them in a multi-series bar chart
End of explanation |
4,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2H
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
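As a sketch only, a completed author cell might look like the line below; the name and e-mail address are hypothetical placeholders, not actual document authors.
# Hypothetical placeholder values - replace with the real document author(s)
DOC.set_author("Jane Researcher", "jane.researcher@example.org")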
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
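Once every required property has been entered, the status flag described in the cell comment can be switched from 0 to 1; the line below sketches that final step.
# Mark the completed document for publication (1 = publish, per the comment above)
DOC.set_publication_status(1)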
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
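A filled-in STRING property follows the pattern sketched below; the model name shown is a made-up placeholder rather than the actual MIROC-ES2H land scheme.
# Hypothetical example of a STRING property (cardinality 1.1) - value is a placeholder
DOC.set_value("ExampleLandScheme v1.0")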
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
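For ENUM properties with cardinality 0.N or 1.N, the cell comments suggest one DOC.set_value call per selected choice; the sketch below assumes repeated calls accumulate values, and the particular selections are illustrative, not a statement about the model.
# Illustrative multi-valued ENUM fill - one call per chosen item (assumed to accumulate)
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")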
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
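BOOLEAN properties take the Python literals listed under Valid Choices; the value in the sketch below is an arbitrary illustration, not the documented model behaviour.
# Hypothetical example of a BOOLEAN property
DOC.set_value(True)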
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
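INTEGER properties are set with a plain number; the 3600-second value below is a hypothetical illustration, not the actual MIROC-ES2H time step.
# Hypothetical example of an INTEGER property (time step in seconds)
DOC.set_value(3600)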
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
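Single-valued ENUM properties take exactly one string copied verbatim from the Valid Choices list; the choice in the sketch below is illustrative only, not the model's actual hydrology scheme.
# Hypothetical example of a single-valued ENUM property - string must match a Valid Choice
DOC.set_value("Explicit diffusion")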
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculation*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
4,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-layer Perceptron (MLP) Neural Network Implementation in Padasip - Basic Examples
This tutorial explains how to use MLP through several examples.
Let's start by importing Padasip. In the following examples we will also use numpy and matplotlib.
Step1: Classification According to a Truth Table
This task is rather artificial, because if you know the full truth table of a function you do not need a classifier at all. However, it is a good, simple example for understanding how the MLP can be used.
Let us consider a discrete function described by the following table
<table style="width
Step2: where
Step3: Note that after only 200 epochs (200 times 16 iterations) we obtained a pretty impressive result!
Time series prediction
A discrete-time Mackey-Glass chaotic time series is generated according to the following equation
Step4: And now the full working example
Step5: The prediction output is pretty good, considering that the time series is produced by a chaotic system. It is possible to achieve an even better result if you increase the number of epochs used for training.
MLP as a Real-time Predictor
It is possible and simple to use the MLP sample by sample to track the output of a system with changing dynamics. The problem is that the learning speed of an MLP is low in comparison with adaptive filters. For that reason you can really struggle to train the MLP, or even to follow the changes in the process you want to predict.
In this tutorial we will use a really simple example. Let us consider a system described as follows
$d(k) = a_1 x_1(k) + a_2 x_2(k) + a_3 x_3(k)$
where
$a_i$ is unknown parameter of the system
$x_i$ is input of the system (random variable with zero mean and 0.5 standard deviation)
In this example we can measure all three $x_n$, and we need to find the weights of MLP to replace the system. Here is how to get it done | Python Code:
import numpy as np
import matplotlib.pylab as plt
import padasip as pa
%matplotlib inline
plt.style.use('ggplot') # nicer plots
np.random.seed(52102) # always use the same random seed to make results comparable
Explanation: Multi-layer Perceptron (MLP) Neural Network Implementation in Padasip - Basic Examples
This tutorial explains how to use MLP through several examples.
Let's start by importing Padasip. In the following examples we will also use numpy and matplotlib.
End of explanation
nn = pa.ann.NetworkMLP([5,6], 5, outputs=1, activation="tanh")
Explanation: Classification According to a Truth Table
This task is rather artificial, because if you know the full truth table of a function you do not need a classifier at all. However, it is a good, simple example for understanding how the MLP can be used.
Let us consider a discrete function described by the following table
<table style="width:80%">
<tr>
<td># of input combination</td>
<td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td>
<td>8</td><td>9</td><td>10</td><td>11</td><td>12</td><td>13</td><td>14</td><td>15</td>
</tr>
<tr>
<td>Input $x_1$</td>
<td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td>
<td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td>
</tr>
<tr>
<td>Input $x_2$</td>
<td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td>
<td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td>
</tr>
<tr>
<td>Input $x_3$</td>
<td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>1</td>
<td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>1</td>
</tr>
<tr>
<td>Input $x_4$</td>
<td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td>
<td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td>
</tr>
<tr>
<td>Output - Target $d$</td>
<td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td>
<td>1</td><td>0</td><td>1</td><td>0</td><td>1</td><td>1</td><td>1</td><td>0</td>
</tr>
</table>
The task is to train the MLP so that it produces the correct value $\tilde y(k) = d(k)$ every time we pass another input vector $\textbf{x}(k) = [x_1(k), x_2(k), x_3(k), x_4(k)]$ to the network.
Now, here is how to create the MLP neural network with Padasip:
End of explanation
# data creation
x = np.array([
[0,0,0,0], [1,0,0,0], [0,1,0,0], [1,1,0,0],
[0,0,1,0], [1,0,1,0], [0,1,1,0], [1,1,1,0],
[0,0,0,1], [1,0,0,1], [0,1,0,1], [1,1,0,1],
[0,0,1,1], [1,0,1,1], [0,1,1,1], [1,1,1,1]
])
d = np.array([0,1,1,0,0,1,0,0,1,0,1,0,1,1,1,0])
N = len(d)
n = 4
# creation of neural network (again)
nn = pa.ann.NetworkMLP([5,6], n, outputs=1, activation="tanh")
# training
e, mse = nn.train(x, d, epochs=200)
# see how it works (validation)
y = nn.run(x[-1000:])
# display of the result
plt.figure(figsize=(13,12))
plt.subplot(311)
plt.plot(e)
plt.title("Error during training"); plt.ylabel("Error"); plt.xlabel("Number of iteration")
plt.subplot(312)
plt.plot(10*np.log10(mse))
plt.title("10 times logarithm of mean-square-error (MSE) during training");
plt.ylabel("MSE [dB]"); plt.xlabel("Number of epoch")
plt.subplot(313)
plt.plot(d, label="Target")
plt.plot(y, label="MLP output")
plt.title("The final result"); plt.ylabel("Value"); plt.xlabel("# of input combination")
plt.legend(); plt.tight_layout(); plt.show()
Explanation: where:
the first argument (the value [5, 6]) gives the number of nodes in the hidden layers - the first layer has 5 nodes and the second layer has 6 nodes. If you used [3, 10, 3] instead, you would get three hidden layers with 3, 10 and 3 nodes.
the second argument (the value 5) is the number of inputs (features).
the kwarg outputs=1 says that we want just one output node (it is possible to have more).
and the kwarg activation="tanh" selects the activation function we want to use - in this case the hyperbolic tangent.
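For instance, a deeper network with three hidden layers and a sigmoid activation could be created in the same way (an illustrative variant, not part of the original example):
nn_deep = pa.ann.NetworkMLP([3,10,3], 4, outputs=1, activation="sigmoid")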
And the full working example:
End of explanation
N = 3000
p1 = 0.2; p2 = 0.8; p3 = 0.9; p4 = 20; p5 = 10.0
d = np.zeros(N)
d[0] = 0.1
for k in range(0,N-1):
d[k+1] = (p3*d[k]) + ( (p1*d[k-p4]) / (p2 + ( d[k-p4]**p5)) )
plt.figure(figsize=(13,5))
plt.plot(range(N-2000), d[:-2000], label="Not used")
plt.plot(range(N-2000, N-1000), d[-2000:-1000], label="Training")
plt.plot(range(N-1000, N), d[-1000:], label="Validation (no adapt)")
plt.legend(); plt.tight_layout(); plt.show()
Explanation: Note that after only 200 epochs (200 times 16 iterations) we obtained a pretty impressive result!
Time series prediction
A discrete-time Mackey-Glass chaotic time series is generated according to the following equation:
$d(k+1) = p_3 \cdot d(k) + \frac{p_1 \cdot d(k-p_4)}{p_2 + d^{\,p_5}(k-p_4)}$
Part of the generated data we will use for training (in multiple epochs), and another part for validation - one run with no MLP update. See the following code and figure.
End of explanation
# data creation
N = 3000
p1 = 0.2; p2 = 0.8; p3 = 0.9; p4 = 20; p5 = 10.0
d = np.zeros(N)
d[0] = 0.1
for k in range(0,N-1):
d[k+1] = (p3*d[k]) + ( (p1*d[k-p4]) / (p2 + ( d[k-p4]**p5)) )
# data normalization
d = (d - d.mean()) / d.std()
# input forming from historic values
n = 30
x = pa.input_from_history(d, n)[:-1]
d = d[n:]
N = len(d)
# creation of new neural network
nn = pa.ann.NetworkMLP([10,20,10], n, outputs=1, activation="sigmoid")
# training
e, mse = nn.train(x[1000:2000], d[1000:2000], epochs=300)
# see how it works (validation)
y = nn.run(x[-1000:])
# result display
plt.figure(figsize=(13,6))
plt.subplot(211)
plt.plot(10*np.log10(mse))
plt.title("10 times logarithm of mean-square-error (MSE) during training");
plt.ylabel("MSE [dB]"); plt.xlabel("Number of epoch")
plt.subplot(212)
plt.plot(d[-1000:], label="Target")
plt.plot(y, label="MLP output")
plt.title("The final result"); plt.ylabel("Value"); plt.xlabel("# of input combination")
plt.legend(); plt.tight_layout(); plt.show()
Explanation: And now the full working example
End of explanation
def measure_x():
# this is your measurement of the process inputs (3 values)
x = np.random.normal(0, 0.5, 3)
return x
def measure_d(x):
# this is your measurement of the system output - your target
d = 0.8*x[0] + 0.2*x[1] - 1.*x[2]
return d
# creation of new neural network
nn = pa.ann.NetworkMLP([20,20], 3, outputs=1, activation="sigmoid")
# run for N samples
N = 1000
e = np.zeros(N)
for k in range(N):
x = measure_x()
y = nn.predict(x)
# do the stuff with predicted value
# ...
# when possible, measure what was the real value of output and update MLP
d = measure_d(x)
e[k] = nn.update(d)
plt.figure(figsize=(13,5))
plt.plot(e)
plt.title("Error of prediction"); plt.ylabel("Error"); plt.xlabel("Number of iteration")
plt.tight_layout(); plt.show()
Explanation: The prediction output is pretty good, considering that the time series is produced by a chaotic system. It is possible to achieve an even better result if you increase the number of epochs used for training.
MLP as a Real-time Predictor
It is possible and simple to use the MLP sample by sample to track the output of a system with changing dynamics. The problem is that the learning speed of an MLP is low in comparison with adaptive filters. For that reason you can really struggle to train the MLP, or even to follow the changes in the process you want to predict.
In this tutorial we will use a really simple example. Let us consider a system described as follows
$d(k) = a_1 x_1(k) + a_2 x_2(k) + a_3 x_3(k)$
where
$a_i$ is unknown parameter of the system
$x_i$ is input of the system (random variable with zero mean and 0.5 standard deviation)
In this example we can measure all three $x_n$, and we need to find the weights of MLP to replace the system. Here is how to get it done:
End of explanation |
4,120 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am struggling with the basic task of constructing a DataFrame of counts by value from a tuple produced by np.unique(arr, return_counts=True), such as: | Problem:
import numpy as np
import pandas as pd
np.random.seed(123)
birds = np.random.choice(['African Swallow', 'Dead Parrot', 'Exploding Penguin'], size=int(5e4))
someTuple = np.unique(birds, return_counts=True)
def g(someTuple):
return pd.DataFrame(np.column_stack(someTuple),columns=['birdType','birdCount'])
result = g(someTuple) |
4,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step3: Fast Style Transfer for Arbitrary Styles
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step4: Let's get as well some images to play with.
Step5: Import TF Hub module
Step6: The signature of this hub module for image stylization is
Step7: Let's try it on more images | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import functools
import os
from matplotlib import gridspec
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("TF Version: ", tf.__version__)
print("TF Hub version: ", hub.__version__)
print("Eager mode enabled: ", tf.executing_eagerly())
print("GPU available: ", tf.config.list_physical_devices('GPU'))
# @title Define image loading and visualization functions { display-mode: "form" }
def crop_center(image):
Returns a cropped square image.
shape = image.shape
new_shape = min(shape[1], shape[2])
offset_y = max(shape[1] - shape[2], 0) // 2
offset_x = max(shape[2] - shape[1], 0) // 2
image = tf.image.crop_to_bounding_box(
image, offset_y, offset_x, new_shape, new_shape)
return image
@functools.lru_cache(maxsize=None)
def load_image(image_url, image_size=(256, 256), preserve_aspect_ratio=True):
Loads and preprocesses images.
# Cache image file locally.
image_path = tf.keras.utils.get_file(os.path.basename(image_url)[-128:], image_url)
# Load and convert to float32 numpy array, add batch dimension, and normalize to range [0, 1].
img = tf.io.decode_image(
tf.io.read_file(image_path),
channels=3, dtype=tf.float32)[tf.newaxis, ...]
img = crop_center(img)
img = tf.image.resize(img, image_size, preserve_aspect_ratio=True)
return img
def show_n(images, titles=('',)):
n = len(images)
image_sizes = [image.shape[1] for image in images]
w = (image_sizes[0] * 6) // 320
plt.figure(figsize=(w * n, w))
gs = gridspec.GridSpec(1, n, width_ratios=image_sizes)
for i in range(n):
plt.subplot(gs[i])
plt.imshow(images[i][0], aspect='equal')
plt.axis('off')
plt.title(titles[i] if len(titles) > i else '')
plt.show()
Explanation: Fast Style Transfer for Arbitrary Styles
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_arbitrary_image_stylization.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Based on the model code in magenta and the publication:
Exploring the structure of a real-time, arbitrary neural artistic stylization
network.
Golnaz Ghiasi, Honglak Lee,
Manjunath Kudlur, Vincent Dumoulin, Jonathon Shlens,
Proceedings of the British Machine Vision Conference (BMVC), 2017.
Setup
Let's start with importing TF2 and all relevant dependencies.
End of explanation
# @title Load example images { display-mode: "form" }
content_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/Golden_Gate_Bridge_from_Battery_Spencer.jpg/640px-Golden_Gate_Bridge_from_Battery_Spencer.jpg' # @param {type:"string"}
style_image_url = 'https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg' # @param {type:"string"}
output_image_size = 384 # @param {type:"integer"}
# The content image size can be arbitrary.
content_img_size = (output_image_size, output_image_size)
# The style prediction model was trained with image size 256 and it's the
# recommended image size for the style image (though, other sizes work as
# well but will lead to different results).
style_img_size = (256, 256) # Recommended to keep it at 256.
content_image = load_image(content_image_url, content_img_size)
style_image = load_image(style_image_url, style_img_size)
style_image = tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME')
show_n([content_image, style_image], ['Content image', 'Style image'])
Explanation: Let's get as well some images to play with.
End of explanation
# Load TF Hub module.
hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
hub_module = hub.load(hub_handle)
Explanation: Import TF Hub module
End of explanation
# Stylize content image with given style image.
# This is pretty fast within a few milliseconds on a GPU.
outputs = hub_module(tf.constant(content_image), tf.constant(style_image))
stylized_image = outputs[0]
# Visualize input images and the generated stylized image.
show_n([content_image, style_image, stylized_image], titles=['Original content image', 'Style image', 'Stylized image'])
Explanation: The signature of this hub module for image stylization is:
outputs = hub_module(content_image, style_image)
stylized_image = outputs[0]
Where content_image, style_image, and stylized_image are expected to be 4-D Tensors with shapes [batch_size, image_height, image_width, 3].
In the current example we provide only single images and therefore the batch dimension is 1, but one can use the same module to process more images at the same time.
The input and output values of the images should be in the range [0, 1].
The shapes of content and style image don't have to match. Output image shape
is the same as the content image shape.
Demonstrate image stylization
End of explanation
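Because the module accepts a batch dimension, several images can in principle be processed in one call; a hedged sketch (not from the original notebook) for two copies of the same content/style pair:
content_batch = tf.concat([content_image, content_image], axis=0)
style_batch = tf.concat([style_image, style_image], axis=0)
stylized_batch = hub_module(tf.constant(content_batch), tf.constant(style_batch))[0]
print(stylized_batch.shape)  # expected shape: (2, height, width, 3)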
# @title To Run: Load more images { display-mode: "form" }
content_urls = dict(
sea_turtle='https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg',
tuebingen='https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg',
grace_hopper='https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',
)
style_urls = dict(
kanagawa_great_wave='https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg',
kandinsky_composition_7='https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg',
hubble_pillars_of_creation='https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg',
van_gogh_starry_night='https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',
turner_nantes='https://upload.wikimedia.org/wikipedia/commons/b/b7/JMW_Turner_-_Nantes_from_the_Ile_Feydeau.jpg',
munch_scream='https://upload.wikimedia.org/wikipedia/commons/c/c5/Edvard_Munch%2C_1893%2C_The_Scream%2C_oil%2C_tempera_and_pastel_on_cardboard%2C_91_x_73_cm%2C_National_Gallery_of_Norway.jpg',
picasso_demoiselles_avignon='https://upload.wikimedia.org/wikipedia/en/4/4c/Les_Demoiselles_d%27Avignon.jpg',
picasso_violin='https://upload.wikimedia.org/wikipedia/en/3/3c/Pablo_Picasso%2C_1911-12%2C_Violon_%28Violin%29%2C_oil_on_canvas%2C_Kr%C3%B6ller-M%C3%BCller_Museum%2C_Otterlo%2C_Netherlands.jpg',
picasso_bottle_of_rum='https://upload.wikimedia.org/wikipedia/en/7/7f/Pablo_Picasso%2C_1911%2C_Still_Life_with_a_Bottle_of_Rum%2C_oil_on_canvas%2C_61.3_x_50.5_cm%2C_Metropolitan_Museum_of_Art%2C_New_York.jpg',
fire='https://upload.wikimedia.org/wikipedia/commons/3/36/Large_bonfire.jpg',
derkovits_woman_head='https://upload.wikimedia.org/wikipedia/commons/0/0d/Derkovits_Gyula_Woman_head_1922.jpg',
amadeo_style_life='https://upload.wikimedia.org/wikipedia/commons/8/8e/Untitled_%28Still_life%29_%281913%29_-_Amadeo_Souza-Cardoso_%281887-1918%29_%2817385824283%29.jpg',
derkovtis_talig='https://upload.wikimedia.org/wikipedia/commons/3/37/Derkovits_Gyula_Talig%C3%A1s_1920.jpg',
amadeo_cardoso='https://upload.wikimedia.org/wikipedia/commons/7/7d/Amadeo_de_Souza-Cardoso%2C_1915_-_Landscape_with_black_figure.jpg'
)
content_image_size = 384
style_image_size = 256
content_images = {k: load_image(v, (content_image_size, content_image_size)) for k, v in content_urls.items()}
style_images = {k: load_image(v, (style_image_size, style_image_size)) for k, v in style_urls.items()}
style_images = {k: tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME') for k, style_image in style_images.items()}
#@title Specify the main content image and the style you want to use. { display-mode: "form" }
content_name = 'sea_turtle' # @param ['sea_turtle', 'tuebingen', 'grace_hopper']
style_name = 'munch_scream' # @param ['kanagawa_great_wave', 'kandinsky_composition_7', 'hubble_pillars_of_creation', 'van_gogh_starry_night', 'turner_nantes', 'munch_scream', 'picasso_demoiselles_avignon', 'picasso_violin', 'picasso_bottle_of_rum', 'fire', 'derkovits_woman_head', 'amadeo_style_life', 'derkovtis_talig', 'amadeo_cardoso']
stylized_image = hub_module(tf.constant(content_images[content_name]),
tf.constant(style_images[style_name]))[0]
show_n([content_images[content_name], style_images[style_name], stylized_image],
titles=['Original content image', 'Style image', 'Stylized image'])
Explanation: Let's try it on more images
End of explanation |
4,122 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Find permutation of numbers upto N with a specific sum in a specific range Function to check if sum is possible with remaining numbers ; Stores the minimum sum possible with x numbers ; Stores the maximum sum possible with x numbers ; If S lies in the range[minSum , maxSum]; Function to find the resultant permutation ; Stores the count of numbers in the given segment ; If the sum is not possible with numbers in the segment ; Output - 1 ; Stores the numbers present within the given segment ; Iterate over the numbers from 1 to N ; If(S - i)is a positive non - zero sum and if it is possible to obtain(S - i)remaining numbers ; Update sum S ; Update required numbers in the segement ; Push i in vector v ; If sum has been obtained ; Break from the loop ; If sum is not obtained ; Output - 1 ; Stores the numbers which are not present in given segment ; Loop to check the numbers not present in the segment ; Pointer to check if i is present in vector v or not ; If i is not present in v ; Push i in vector v1 ; Point to the first elements of v1 and v respectively ; Print the required permutation ; Driver Code
| Python Code::
def possible(x, S, N):
    # Minimum and maximum sums obtainable with x distinct numbers from 1..N
    min_sum = (x * (x + 1)) // 2
    max_sum = (x * ((2 * N) - x + 1)) // 2
    # S is feasible only if it lies in [min_sum, max_sum]
    return min_sum <= S <= max_sum

def findPermutation(N, L, R, S):
    x = R - L + 1  # number of positions in the segment [L, R]
    if not possible(x, S, N):
        print(-1)
        return
    # Greedily pick the largest numbers that still leave a feasible remainder
    v = []
    for i in range(N, 0, -1):
        if (S - i) >= 0 and possible(x - 1, S - i, i - 1):
            S -= i
            x -= 1
            v.append(i)
        if S == 0:
            break
    if S != 0:
        print(-1)
        return
    # Numbers not placed inside the segment
    v1 = [i for i in range(1, N + 1) if i not in v]
    j = 0
    f = 0
    # Positions before the segment
    for i in range(1, L):
        print(v1[j], end=" ")
        j += 1
    # Positions inside the segment
    for i in range(L, R + 1):
        print(v[f], end=" ")
        f += 1
    # Positions after the segment
    for i in range(R + 1, N + 1):
        print(v1[j], end=" ")
        j += 1
    print()

if __name__ == "__main__":
    N = 6
    L = 3
    R = 5
    S = 8
    findPermutation(N, L, R, S)  # prints the permutation, e.g. 3 4 5 2 1 6 for these inputs
|
4,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a text classification model with TF Hub
In this notebook, we'll walk you through building a model to predict the genres of a movie given its description. The emphasis here is not on accuracy, but instead how to use TF Hub layers in a text classification model.
To start, import the necessary dependencies for this project.
Step1: The dataset
We need a lot of text inputs to train our model. For this model we'll use this awesome movies dataset from Kaggle. To simplify things I've made the movies_metadata.csv file available in a public Cloud Storage bucket so we can download it with wget. I've preprocessed the dataset already to limit the number of genres we'll use for our model, but first let's take a look at the original data so we can see what we're working with.
Step2: Next we'll convert the dataset to a Pandas dataframe and print the first 5 rows. For this model we're only using 2 of these columns
Step3: Preparing the data for our model
I've done some preprocessing to limit the dataset to the top 9 genres, and I've saved the Pandas dataframes as public Pickle files in GCS. Here we download those files. The resulting descriptions and genres variables are Pandas Series containing all descriptions and genres from our dataset respectively.
Step4: Splitting our data
When we train our model, we'll use 80% of the data for training and set aside 20% of the data to evaluate how our model performed.
Step5: Formatting our labels
When we train our model we'll provide the labels (in this case genres) associated with each movie. We can't pass the genres in as strings directly, we'll transform them into multi-hot vectors. Since we have 9 genres, we'll have a 9 element vector for each movie with 0s and 1s indicating which genres are present in each description.
Step6: Create our TF Hub embedding layer
TF Hub provides a library of existing pre-trained model checkpoints for various kinds of models (images, text, and more) In this model we'll use the TF Hub universal-sentence-encoder module for our pre-trained word embeddings. We only need one line of code to instantiate module. When we train our model, it'll convert our array of movie description strings to embeddings. When we train our model, we'll use this as a feature column.
Step7: Instantiating our DNNEstimator Model
The first parameter we pass to our DNNEstimator is called a head, and defines the type of labels our model should expect. Since we want our model to output multiple labels, we’ll use multi_label_head here. Then we'll convert our features and labels to numpy arrays and instantiate our Estimator. batch_size and num_epochs are hyperparameters - you should experiment with different values to see what works best on your dataset.
Step8: Training and serving our model
To train our model, we simply call train() passing it the input function we defined above. Once our model is trained, we'll define an evaluation input function similar to the one above and call evaluate(). When this completes we'll get a few metrics we can use to evaluate our model's accuracy.
Step9: Generating predictions on new data
Now for the most fun part! Let's generate predictions on movie descriptions our model hasn't seen before. We'll define an array of 3 new description strings (the comments indicate the correct genres) and create a predict_input_fn. Then we'll display the top 2 genres along with their confidence percentages for each of the 3 movies. | Python Code:
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
import json
import pickle
import urllib
from sklearn.preprocessing import MultiLabelBinarizer
print(tf.__version__)
Explanation: Building a text classification model with TF Hub
In this notebook, we'll walk you through building a model to predict the genres of a movie given its description. The emphasis here is not on accuracy, but instead how to use TF Hub layers in a text classification model.
To start, import the necessary dependencies for this project.
End of explanation
# Download the data from GCS
!wget 'https://storage.googleapis.com/movies_data/movies_metadata.csv'
Explanation: The dataset
We need a lot of text inputs to train our model. For this model we'll use this awesome movies dataset from Kaggle. To simplify things I've made the movies_metadata.csv file available in a public Cloud Storage bucket so we can download it with wget. I've preprocessed the dataset already to limit the number of genres we'll use for our model, but first let's take a look at the original data so we can see what we're working with.
End of explanation
data = pd.read_csv('movies_metadata.csv')
data.head()
Explanation: Next we'll convert the dataset to a Pandas dataframe and print the first 5 rows. For this model we're only using 2 of these columns: genres and overview.
End of explanation
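As a quick, optional sanity check (not part of the original walkthrough), those two columns can be inspected on their own:
data[['genres', 'overview']].head()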
urllib.request.urlretrieve('https://storage.googleapis.com/bq-imports/descriptions.p', 'descriptions.p')
urllib.request.urlretrieve('https://storage.googleapis.com/bq-imports/genres.p', 'genres.p')
descriptions = pickle.load(open('descriptions.p', 'rb'))
genres = pickle.load(open('genres.p', 'rb'))
Explanation: Preparing the data for our model
I've done some preprocessing to limit the dataset to the top 9 genres, and I've saved the Pandas dataframes as public Pickle files in GCS. Here we download those files. The resulting descriptions and genres variables are Pandas Series containing all descriptions and genres from our dataset respectively.
End of explanation
train_size = int(len(descriptions) * .8)
train_descriptions = descriptions[:train_size].astype('str')
train_genres = genres[:train_size]
test_descriptions = descriptions[train_size:].astype('str')
test_genres = genres[train_size:]
Explanation: Splitting our data
When we train our model, we'll use 80% of the data for training and set aside 20% of the data to evaluate how our model performed.
End of explanation
encoder = MultiLabelBinarizer()
encoder.fit_transform(train_genres)
train_encoded = encoder.transform(train_genres)
test_encoded = encoder.transform(test_genres)
num_classes = len(encoder.classes_)
# Print all possible genres and the labels for the first movie in our training dataset
print(encoder.classes_)
print(train_encoded[0])
Explanation: Formatting our labels
When we train our model we'll provide the labels (in this case genres) associated with each movie. We can't pass the genres in as strings directly, we'll transform them into multi-hot vectors. Since we have 9 genres, we'll have a 9 element vector for each movie with 0s and 1s indicating which genres are present in each description.
End of explanation
description_embeddings = hub.text_embedding_column("descriptions", module_spec="https://tfhub.dev/google/universal-sentence-encoder/2", trainable=False)
Explanation: Create our TF Hub embedding layer
TF Hub provides a library of existing pre-trained model checkpoints for various kinds of models (images, text, and more) In this model we'll use the TF Hub universal-sentence-encoder module for our pre-trained word embeddings. We only need one line of code to instantiate module. When we train our model, it'll convert our array of movie description strings to embeddings. When we train our model, we'll use this as a feature column.
End of explanation
multi_label_head = tf.contrib.estimator.multi_label_head(
num_classes,
loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE
)
features = {
"descriptions": np.array(train_descriptions).astype(np.str)
}
labels = np.array(train_encoded).astype(np.int32)
train_input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True, batch_size=32, num_epochs=25)
estimator = tf.contrib.estimator.DNNEstimator(
head=multi_label_head,
hidden_units=[64,10],
feature_columns=[description_embeddings])
Explanation: Instantiating our DNNEstimator Model
The first parameter we pass to our DNNEstimator is called a head, and defines the type of labels our model should expect. Since we want our model to output multiple labels, we’ll use multi_label_head here. Then we'll convert our features and labels to numpy arrays and instantiate our Estimator. batch_size and num_epochs are hyperparameters - you should experiment with different values to see what works best on your dataset.
End of explanation
estimator.train(input_fn=train_input_fn)
# Define our eval input_fn and run eval
eval_input_fn = tf.estimator.inputs.numpy_input_fn({"descriptions": np.array(test_descriptions).astype(np.str)}, test_encoded.astype(np.int32), shuffle=False)
estimator.evaluate(input_fn=eval_input_fn)
Explanation: Training and serving our model
To train our model, we simply call train() passing it the input function we defined above. Once our model is trained, we'll define an evaluation input function similar to the one above and call evaluate(). When this completes we'll get a few metrics we can use to evaluate our model's accuracy.
End of explanation
# Test our model on some raw description data
raw_test = [
"An examination of our dietary choices and the food we put in our bodies. Based on Jonathan Safran Foer's memoir.", # Documentary
"After escaping an attack by what he claims was a 70-foot shark, Jonas Taylor must confront his fears to save those trapped in a sunken submersible.", # Action, Adventure
"A teenager tries to survive the last week of her disastrous eighth-grade year before leaving to start high school.", # Comedy
]
# Generate predictions
predict_input_fn = tf.estimator.inputs.numpy_input_fn({"descriptions": np.array(raw_test).astype(np.str)}, shuffle=False)
results = estimator.predict(predict_input_fn)
# Display predictions
for movie_genres in results:
top_2 = movie_genres['probabilities'].argsort()[-2:][::-1]
for genre in top_2:
text_genre = encoder.classes_[genre]
print(text_genre + ': ' + str(round(movie_genres['probabilities'][genre] * 100, 2)) + '%')
print('')
Explanation: Generating predictions on new data
Now for the most fun part! Let's generate predictions on movie descriptions our model hasn't seen before. We'll define an array of 3 new description strings (the comments indicate the correct genres) and create a predict_input_fn. Then we'll display the top 2 genres along with their confidence percentages for each of the 3 movies.
End of explanation |
4,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the Air Pollution Decrease Caused by the Global COVID-19 Pandemic
Last December 2019, we heard about the first COVID-19 cases in China.
Now, three months later, the WHO has officially declared Coronavirus outbreak as a pandemic and also an emergency of international concern.
The ongoing outbreak isn't showing signs of getting better in any way; however, there is always something good in the bad. Air pollution has decreased dramatically over the past month, and some say this could save even more lives than COVID-19 takes. In light of that, we would like to introduce a high-quality global air pollution reanalysis dataset and a high-quality global near-realtime air pollution forecast dataset we have in the Planet OS Datahub, where the first one provides air quality data from 2008-2018 and the second a 5-day air quality forecast.
The Copernicus Atmosphere Monitoring Service uses a comprehensive global monitoring and forecasting system that estimates the state of the atmosphere on a daily basis, combining information from models and observations, to provide a daily 5-day global surface forecast.
The CAMS reanalysis dataset covers the period January 2003 to 2018. The CAMS reanalysis is the latest global reanalysis data set of atmospheric composition (AC) produced by the Copernicus Atmosphere Monitoring Service (CAMS), consisting of 3-dimensional time-consistent AC fields, including aerosols, chemical species and greenhouse gases (GHGs). The data set builds on the experience gained during the production of the earlier MACC reanalysis and CAMS interim reanalysis.
In this analysis we’ve used PM2.5 in the analysis as these particles, often described as the fine particles, are up to 30 times smaller than the width of a human hair. These tiny particles are small enough to be breathed deep into the lungs, making them very dangerous to people’s health.
As we would like to have data about large areas we will download data by using Package API.
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Step2: At first, we need to define the dataset name and a variable we want to use.
Step3: Then we define the spatial range. We decided to analyze the US, where unfortunately catastrophic wildfires are taking place at the moment and influence air quality.
Step4: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
Step5: Work with the downloaded files
We start with opening the files with xarray and adding PM2.5 as micrograms per cubic meter as well to make the values easier to understand and compare. After that, we will create a map plot with a time slider, then make a GIF using the images, and finally, we will look into a specific location.
Step6: Here we are making a Basemap of the US that we will use for showing the data.
Step7: Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose time stamp by using a slider.
As the minimum and maximum values are very different, we are using logarithmic colorbar to visualize it better.
On the map we can see that the very high PM2.5 values are in different states. Maximums are most of the time near 1000 µg/m3, which is way larger than the norm (25 µg/m3). By using the slider we can see the air quality forecast, which shows how the pollution is expected to expand.
We are also adding a red dot to the map to mark the area, where the PM2.5 is the highest. Seems like it is moving a lot and many wild fires are influencing it. We can also see that most of the Continental US is having PM2.5 values below the standard, which is 25 µg/m3, but in the places where wild fires taking place, values tend to be at least over 100 µg/m3.
Step8: Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.
With the function below we will save images you saw above to the local filesystem as a GIF, so it is easily shareable with others.
Step9: To see data more specifically we need to choose the location. This time we decided to look into Los Angeles and San Fransisco, as the most populated cities in California.
Step10: In the plot below we can see the PM2.5 forecast on the surface layer. Note that the time zone on the graph is UTC while the time zone in San Fransisco and Los Angeles is UTC-08
Step11: Thankfully, San Fransisco air quality is in the norm even in the night time. However, we have to be careful as it could easily change with the wind direction as the fires are pretty close to the city. We can also see that in the end of the forecast values are rising quite rapidly.
Step12: Finally, we will remove the package we downloaded. | Python Code:
%matplotlib notebook
%matplotlib inline
import numpy as np
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
import ipywidgets as widgets
from mpl_toolkits.basemap import Basemap,shiftgrid
import dh_py_access.package_api as package_api
import matplotlib.colors as colors
import pandas as pd
import warnings
import shutil
import imageio
import datetime
import os
warnings.filterwarnings("ignore")
Explanation: Analyzing the Air Pollution Decrease Caused by the Global COVID-19 Pandemic
Last December 2019, we heard about the first COVID-19 cases in China.
Now, three months later, the WHO has officially declared Coronavirus outbreak as a pandemic and also an emergency of international concern.
The ongoing outbreak isn't showing signs of getting better in any way; however, there is always something good in the bad. Air pollution has decreased dramatically over the past month, and some say this could save even more lives than COVID-19 takes. In light of that, we would like to introduce a high-quality global air pollution reanalysis dataset and a high-quality global near-realtime air pollution forecast dataset we have in the Planet OS Datahub, where the first one provides air quality data from 2008-2018 and the second a 5-day air quality forecast.
The Copernicus Atmosphere Monitoring Service uses a comprehensive global monitoring and forecasting system that estimates the state of the atmosphere on a daily basis, combining information from models and observations, to provide a daily 5-day global surface forecast.
The CAMS reanalysis dataset covers the period January 2003 to 2018. The CAMS reanalysis is the latest global reanalysis data set of atmospheric composition (AC) produced by the Copernicus Atmosphere Monitoring Service (CAMS), consisting of 3-dimensional time-consistent AC fields, including aerosols, chemical species and greenhouse gases (GHGs). The data set builds on the experience gained during the production of the earlier MACC reanalysis and CAMS interim reanalysis.
In this analysis we’ve used PM2.5 in the analysis as these particles, often described as the fine particles, are up to 30 times smaller than the width of a human hair. These tiny particles are small enough to be breathed deep into the lungs, making them very dangerous to people’s health.
As we would like to have data about large areas we will download data by using Package API.
End of explanation
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
dh = datahub.datahub(server,version,API_key)
dataset_nrt = 'cams_nrt_forecasts_global'
dataset_rean = 'ecmwf_cams_reanalysis_global_v1'
variable_name1 = 'pm2p5'
Explanation: At first, we need to define the dataset name and a variable we want to use.
End of explanation
area_name = 'Europe'
latitude_north = 63; longitude_west = -18
latitude_south = 35; longitude_east = 30
time_start = '2008-01-01T00:00:00'
time_end = '2019-01-01T00:00:00'
Explanation: Then we define the spatial range. We decided to analyze the US, where unfortunately catastrophic wildfires are taking place at the moment and influence air quality.
End of explanation
package_cams = package_api.package_api(dh,dataset_rean,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,time_start=time_start,time_end=time_end,area_name=area_name)
package_cams.make_package()
package_cams.download_package()
package_cams_nrt = package_api.package_api(dh,dataset_nrt,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,area_name=area_name)
package_cams_nrt.make_package()
package_cams_nrt.download_package()
Explanation: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
End of explanation
dd1 = xr.open_dataset(package_cams.local_file_name)
dd1['lon'] = dd1['lon']
dd1['pm2p5_micro'] = dd1.pm2p5 * 1000000000.
dd1.pm2p5_micro.data[dd1.pm2p5_micro.data < 0] = np.nan
dd2 = xr.open_dataset(package_cams_nrt.local_file_name)
dd2['pm2p5_micro'] = dd2.pm2p5 * 1000000000.
dd2.pm2p5_micro.data[dd2.pm2p5_micro.data < 0] = np.nan
year_ago = (pd.to_datetime(dd2.time[0].data) - datetime.timedelta(days=365+366)).strftime('%Y-%m-%dT%H:%M:%S')
data_rean = dd1.pm2p5_micro.sel(time=str(year_ago))
data_nrt= dd2.pm2p5_micro[0]
data_rean_shifted, lon1 = shiftgrid(180,data_rean,dd1.lon.values,start=False)
data_nrt_shifted, lon2 = shiftgrid(180,data_nrt,dd2.longitude.values,start=False)
Explanation: Work with the downloaded files
We start with opening the files with xarray and adding PM2.5 as micrograms per cubic meter as well to make the values easier to understand and compare. After that, we will create a map plot with a time slider, then make a GIF using the images, and finally, we will look into a specific location.
End of explanation
dd2
m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,
resolution = 'i', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(lon1,dd1.lat.data)
lonmap,latmap = m(lons,lats)
lons_n,lats_n = np.meshgrid(lon2,dd2.latitude.data)
lonmap_nrt,latmap_nrt = m(lons_n,lats_n)
Explanation: Here we are making a Basemap of the US that we will use for showing the data.
End of explanation
vmax = 100
vmin = 1
dd1
dd2
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(121)
pcm = m.pcolormesh(lonmap,latmap,data_rean_shifted,
vmin = vmin,vmax=vmax,cmap = 'rainbow')
plt.title(str(data_rean.time.data)[:-10])
m.drawcoastlines()
m.drawcountries()
m.drawstates()
ax2 = fig.add_subplot(122)
pcm2 = m.pcolormesh(lonmap_nrt,latmap_nrt,data_nrt_shifted,
vmin = vmin,vmax=vmax,cmap = 'rainbow')
m.drawcoastlines()
m.drawcountries()
m.drawstates()
cbar = plt.colorbar(pcm,fraction=0.03, pad=0.040)
plt.title(str(data_nrt.time.data)[:-10])
cbar.set_label('micrograms m^3')
plt.savefig('201819marchvs2020.png',dpi=300)
Explanation: Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose time stamp by using a slider.
As the minimum and maximum values are very different, we are using logarithmic colorbar to visualize it better.
On the map we can see that the very high PM2.5 values are in different states. Maximums are most of the time near 1000 µg/m3, which is way larger than the norm (25 µg/m3). By using the slider we can see the air quality forecast, which shows how the pollution is expected to expand.
We are also adding a red dot to the map to mark the area, where the PM2.5 is the highest. Seems like it is moving a lot and many wild fires are influencing it. We can also see that most of the Continental US is having PM2.5 values below the standard, which is 25 µg/m3, but in the places where wild fires taking place, values tend to be at least over 100 µg/m3.
End of explanation
def make_ani():
folder = './anim/'
for k in range(len(dd1.pm2p5_micro)):
filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
if not os.path.exists(filename):
fig=plt.figure(figsize=(10,7))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd1.pm2p5_micro.data[k],
norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = 'rainbow')
m.drawcoastlines()
m.drawcountries()
m.drawstates()
cbar = plt.colorbar(pcm,fraction=0.02, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])
cbar.ax.set_yticklabels([0,10,100,1000])
plt.title(str(dd1.pm2p5_micro.time[k].data)[:-10])
ax.set_xlim()
cbar.set_label('micrograms m^3')
if not os.path.exists(folder):
os.mkdir(folder)
plt.savefig(filename,bbox_inches = 'tight')
plt.close()
files = sorted(os.listdir(folder))
images = []
for file in files:
if not file.startswith('.'):
filename = folder + file
images.append(imageio.imread(filename))
kargs = { 'duration': 0.1,'quantizer':2,'fps':5.0}
imageio.mimsave('cams_pm2p5.gif', images, **kargs)
print ('GIF is saved as cams_pm2p5.gif under current working directory')
shutil.rmtree(folder)
make_ani()
Explanation: Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.
With the function below we will save images you saw above to the local filesystem as a GIF, so it is easily shareable with others.
End of explanation
lon = -118; lat = 34
data_in_spec_loc = dd1.sel(longitude = lon,latitude=lat,method='nearest')
print ('Latitude ' + str(lat) + ' ; Longitude ' + str(lon))
Explanation: To see data more specifically we need to choose the location. This time we decided to look into Los Angeles and San Fransisco, as the most populated cities in California.
End of explanation
fig = plt.figure(figsize=(10,5))
plt.plot(data_in_spec_loc.time,data_in_spec_loc.pm2p5_micro, '*-',linewidth = 1,c='blue',label = dataset)
plt.xlabel('Time')
plt.title('PM2.5 forecast for Los Angeles')
plt.grid()
lon = -122.4; lat = 37.7
data_in_spec_loc = dd1.sel(longitude = lon,latitude=lat,method='nearest')
print ('Latitude ' + str(lat) + ' ; Longitude ' + str(lon))
Explanation: In the plot below we can see the PM2.5 forecast on the surface layer. Note that the time zone on the graph is UTC while the time zone in San Fransisco and Los Angeles is UTC-08:00. The air pollution from the wildfire has exceeded a record 100 µg/m3, while the hourly norm is 25 µg/m3. We can also see some peaks every day around 12 pm UTC (4 am PST) and the lowest values are around 12 am UTC (4 pm PST).
Daily pm 2.5 values are mostly in the norm, while the values will continue to be high during the night. This daily pattern where the air quality is the worst at night is caused by the temperature inversion. As the land is not heated by the sun during the night, and the winds tend to be weaker as well, the pollution gets trapped near the ground. Pollution also tends to be higher in the winter time when the days are shorter. Thankfully day time values are much smaller.
End of explanation
fig = plt.figure(figsize=(10,5))
plt.plot(data_in_spec_loc.time,data_in_spec_loc.pm2p5_micro, '*-',linewidth = 1,c='blue',label = dataset)
plt.xlabel('Time')
plt.title('PM2.5 forecast for San Fransisco')
plt.grid()
Explanation: Thankfully, San Fransisco air quality is in the norm even in the night time. However, we have to be careful as it could easily change with the wind direction as the fires are pretty close to the city. We can also see that in the end of the forecast values are rising quite rapidly.
End of explanation
os.remove(package_cams.local_file_name)
Explanation: Finally, we will remove the package we downloaded.
End of explanation |
4,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TOC trends 2015
Step1: Woohoo - that was much easier than expected! Right, on with the data cleaning, starting with the easiest stuff first...
2. Remove some of US sites from the analysis
The definitive list of US LTM sites to be included in the analysis is attached to John’s e-mail from 26/05/2016 at 16
Step2: Next, a quick manual check in RESA2 shows that the ICPW_TOCTRENDS_2015_US_LTM project has PROJECT_ID=3870, so let's get the sites associated with that project from the database.
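A hedged sketch of the sort of query this implies is shown below; the schema names (resa2.stations, resa2.projects_stations) and the connection object are assumptions for illustration, not details taken from the text.
sql = ("SELECT s.station_id, s.station_code, s.station_name "
       "FROM resa2.stations s "
       "INNER JOIN resa2.projects_stations ps ON s.station_id = ps.station_id "
       "WHERE ps.project_id = 3870")
us_df = pd.read_sql(sql, engine)  # 'engine' is an assumed open connection to RESA2
us_df.head()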
Step3: So there are 3 more sites in RESA2 than in John's spreadsheet (and that's including the sites John would like removing). It looks as though the station IDs are compatible between the two datasets, so to try to work out what's going on we can match based on these columns and (hopefully) identify the 3 mystery sites.
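One hedged way to do that matching with pandas (the frame and column names here are illustrative assumptions):
merged = us_df.merge(john_df, how='outer', on='station_id',
                     suffixes=('_resa', '_john'), indicator=True)
mystery = merged[merged['_merge'] != 'both']
mystery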
Step4: Argh! There are 5 sites in the table above, not 3. This implies that
Step5: The first 5 rows of this table are sites that appear in RESA2, but which are not in John's spreadsheet; the last two rows appear in John's spreadsheet but are not in RESA2 (or, at least, they're not associated with the ICPW_TOCTRENDS_2015_US_LTM project).
Decide to do the following
Step6: So far so good. The final step is to use an update query to change the project code for these sites from 3870 to 4150. NB
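For reference, a minimal sketch of the update described above (the table name, connection object and 'excluded_ids' list are assumptions; in practice the change was made through Access):
ids = ','.join(str(i) for i in excluded_ids)  # 'excluded_ids' assumed to hold the EXCLUDE station IDs
sql = ("UPDATE resa2.projects_stations SET project_id = 4150 "
       "WHERE project_id = 3870 AND station_id IN (%s)" % ids)
conn.execute(sql)  # 'conn' is an assumed writable RESA2 connection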
Step7: Having made these changes, there should now be fewer sites associated with the ICPW_TOCTRENDS_2015_US_LTM project. There were originally 90 sites in RESA2 and we've just excluded 19, so there should be 71 left. John's spreadsheet has 72 US sites on the "include" list, but this is OK because we haven't yet added Mud Pond, Maine - that's the next step.
2.2. Add a new station
John's spreadsheet includes a Station ID for Mud Pond, Maine (23709), which I assume he got from RESA2, so perhaps the site is actually in the database after all, but simply not associated with the ICPW_TOCTRENDS_2015_US_LTM project. Let's check.
Step8: OK, so the site is there (it looks as though Tore added it back in March 2011), it just hasn't been associated with the updated TOC trends analysis. This should be a fairly quick fix via Access. Try the following
Step9: Good. The final thing for this task is to make sure the site details in RESA2 match the information in John's spreadsheet. To check this we first need to extract the site details from RESA2.
Step10: And then join them to the information in John's spreadsheet, so we can check whether the columns agree.
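A hedged pandas version of that comparison (column names are assumptions):
check = resa2_df.merge(john_df, on='station_id', suffixes=('_resa', '_john'))
for col in ['station_code', 'station_name', 'latitude', 'longitude']:
    agree = (check[col + '_resa'] == check[col + '_john']).all()
    print(col, 'identical:', agree)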
Step11: 2.3. Summary for US LTM sites
The RESA2 project ICPW_TOCTRENDS_2015_US_LTM now contains only the 72 sites marked INCLUDE in John's spreadsheet. <br><br>
The station IDs, codes, names and geographic co-ordinates for these 72 sites in RESA2 are identical to those in John's spreadsheet. I haven't checked any of the other site properties yet, though - that's another task.
3. Site properties (location, land use and elevation)
The next main task is to make sure the site properties for all stations are as complete as possible. This is likely to be fiddly.
In particular, we're interested in having the following details for each site
Step12: 3.1. Sites with missing geographic co-ordinates
Step13: Slightly ironically, Langtjern is one of the sites with missing co-ordinates! In reality, RESA2 does have location information for Langtjern, but it's specified using UTM Zone 33N (EPSG 32633) co-ordinates
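A minimal sketch of the conversion from UTM Zone 33N to WGS84, assuming pyproj is available (the easting/northing variables are placeholders, not Langtjern's stored values):
from pyproj import Transformer
utm33_to_wgs84 = Transformer.from_crs('EPSG:32633', 'EPSG:4326', always_xy=True)
lon, lat = utm33_to_wgs84.transform(easting, northing)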
Step14: I've used an Access ODBC connection to update the co-ordinates for these two sites in RESA2.
3.2. Sites with missing altitude information
The next step is to identify sites without mean catchment elevation data. There are two columns in the database recording this kind of information
Step15: (Aside
Step16: 3.2.3. Sweden
The two Swedish sites without elevation data are shown below. Searching the web, I can find data download pages for Svinarydsjön and Gosjön, but neither of these seems to have catchment characteristics such as mean elevation. Salar is probably a good person to ask about this initially - e-mail sent 20/06/2016 at 17
Step17: 3.2.4. United States
The five US sites with missing elevation information are shown below.
Step18: In his e-mail sent 23/05/2016 at 16
Step19: 3.3. Sites with missing land use information
It seems as though different countries have reported their land use in different ways, so I suspect this is going to be complicated. The best approach is probably going to be to consider each country in turn.
3.3.1. Canada
There is a small amount of land use information present for the Canadian sites, but it's very patchy and the values don't add up to anything like 100%. As I need to ask for elevation information for all of these sites anyway, it's probably a good idea to ask for land use at the same time.
3.3.2. Czech Republic
All the land use proportions add to 100%. Woohoo!
3.3.3. Finland
Finland seems to have provided fairly comprehensive land use information, but the values rarely add up to 100% (108% in one case!). We can either query this with the Finns, or just assume the remaining land belongs to an "Other" category (not sure what to do with the 108% site though). Ask Heleen what she'd like to do.
3.3.4. Norway
Like Finland, the Norwegian data seems rather patchy and rarely sums to 100%. Ask Heleen where these values came from originally and whether she'd like to refine/improve them.
3.3.5. Poland
The Polish data is a mystery. Total land cover percentages range from 113% to 161% and the values for Grassland are identical to those for Wetland. Even allowing for this duplication, the numbers still don't make much sense. I also can't find the raw data on the network anywhere - see if Heleen knows where the values come from?
3.3.6. Slovenia
The situation for Slovenia is very similar to Poland, with total land cover percentages well over 100%. Also, just like Poland, the Grassland and Wetland percentages are identical, which is obviously an error. This strongly suggests some kind of problem in the data upload procedure, but the issue isn't present for all countries. I'm not sure what's going on here yet, but this is definitely something to watch out for.
Actually, I'm a bit surprised Slovenia is in here at all - it's not one of the countries listed in project codes at the start of section 3. Nor is Poland for that matter! After a quick check in RESA2, it seems that Poland and Slovenia are both grouped under the Czech Republic project. Hmm.
As with Poland, I'm struggling to find the original metadata for the Slovenian sites. Ask Heleen to see if she can point me in the right direction.
3.3.7. Sweden
The Swedish data look fairly complete, although there's no distinction made between deciduous and coniferous forestry (there's just an aggregated Total forest class). Most of the land use proportions add to somewhere close to 100%, but there are about 45 sites where the total is less than 90% and 19 sites where it's less than 50%. Some of these probably need following up. Ask Salar if he knows where to get land use proportions for Swedish sites.
3.3.8. United Kingdom
Land use proportions for all but two of the UK sites sum to exactly 100%. It's late on a Friday afternoon, and this simple result makes me feel so grateful I could almost weep. It must be nearly home time!
The two sites that need correcting are
Step20: John's file LTM_WSHED_LANDCOVER.csv contains proportions for the first 4 of these, but I'm still lacking land cover data for Mud Pond, Maine (station code US74). Ask John for this data and also double check forestry proportions.
4. Distance to coast
During the meeting on 27/05/2016, Don and John thought it would be helpful to know the distance from each site to the nearest piece of coastline (in any direction). In Euclidean co-ordinates this is a simple calculation, but at intercontinental scale using WGS84 we need to perform geodesic calculations on the ellipsoid, which are more complicated. The most recent versions of ArcGIS introduced a method='GEODESIC' option for exactly this situation, but this software isn't yet available at NIVA. Major spatial database platforms such as Oracle Spatial and PostGIS also include this functionality, but the NIVA Oracle instance isn't spatially enabled and setting up a PostGIS server just for this calculation seems a bit over the top. I can think of two ways of doing this
Step21: 4.2. The Vincenty formula
To check the results above, I've converted the Natural Earth coastline into a set of points spaced at 0.05 degree intervals. I've then deleted all the points that are very obviously far away from our stations of interest (e.g. Antarctica) and used the Add XY Co-ordinates tool to calculate the latitude and longitude for each of these points. This leaves me with 21,718 coastline points in total. A very naive and inefficient way to estimate the nearest point on the coast is therefore to calculate the distances from each station to all 21,718 coastal points, and then choose the smallest value in each case. This is incredibly badly optimised and requires $605 * 21,718 = 13,139,390$ calculations, but I can let it run while I do other things and it should provide a useful check on the (hopefully more accurate) results from SpatiaLite.
Begin by reading the site co-ordinates and coastal point co-ordinates into two dataframes, and then loop over all combinations.
Step22: 4.3. Comparison
We can now compare the two distance estimates. They will not be exactly the same due to the way I've discretised the coastline for the Vincenty estimate, but they should be similar.
Step23: The plot above is "interactive"
Step24: The sites with very large errors are mostly located well inland, approximately equidistant from two possible coastlines. The methods used to calculate these distances are numerical (i.e. iterative), so I suspect the differences I'm seeing are associated with convergence of the algorithms, combined with my coarse discretisation of the coastline in the case of Vincenty.
For now, I'll use the estimates from SpatiaLite, as these should be more accurate.
4.4. Comparison to John's 2006 estimates
In his e-mail from 27/05/2016 at 14 | Python Code:
# Create connection
# Imports used by this and later cells
import imp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
import mpld3
# Use custom RESA2 function to connect to db
r2_func_path = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\useful_resa2_code.py'
resa2 = imp.load_source('useful_resa2_code', r2_func_path)
engine, conn = resa2.connect_to_resa2()
# Test SQL statement
sql = ('SELECT project_id, project_number, project_name '
'FROM resa2.projects')
df = pd.read_sql_query(sql, engine)
df.head(10)
Explanation: TOC trends 2015: database clean-up (part 1)
My "to do" list for RESA2 following the meeting on 27/05/2016 is on the network here:
K:\Prosjekter\langtransporterte forurensninger\O-23300 - ICP-WATERS - HWI\Database\JamesS\RESA2_To_Do_HWI.docx
I want to start working through these items, prioritising stuff relating to the TOC trends analysis because I know I'm running a bit behind here. I also want to document my changes carefully as this is the first time I've modified the "live" version of RESA2, and I need to be able to undo things again in case I mess it all up. This notebook documents my workflow.
NB for James: Some of the code below modifies RESA2. It also documents what the database looked like before making any changes. Do not re-run this workbook without checking the code below first, as the results will change and you'll lose the record of what you've done.
1. Connect Python to RESA2
As a first step, I'd like to try connecting Python directly to the Oracle database underlying RESA2. If this works, I should be able to interact with the database directly from my code (i.e. bypassing the RESA2 interface), which will make some of the steps below much easier.
As a quick test, start by creating a connection and running a simple query against the database.
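For reference, the contents of useful_resa2_code.py aren't shown here. A minimal sketch of what a connect_to_resa2() helper typically looks like (an assumption - using SQLAlchemy with the cx_Oracle driver and placeholder credentials, not the actual NIVA settings) is:
from sqlalchemy import create_engine

def connect_to_resa2(user='resa2_user', pwd='password',
                     host='hostname', port=1521, sid='orcl'):
    # Build an Oracle connection string and return both the engine and a connection
    conn_str = 'oracle+cx_oracle://%s:%s@%s:%s/%s' % (user, pwd, host, port, sid)
    engine = create_engine(conn_str)
    conn = engine.connect()
    return engine, conn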
End of explanation
# Read John's list of US sites
us_xlsx = (r'\\niva-of5\osl-userdata$\JES\Documents\James_Work\Staff\Heleen_d_W\ICP_Waters'
r'\TOC_Trends_Analysis_2015\Data\US.sites.include.exclude.xlsx')
j_df = pd.read_excel(us_xlsx, sheetname='Sheet1')
print 'Total number of US sites in spreadsheet: %s.' % len(j_df)
j_df.head()
Explanation: Woohoo - that was much easier than expected! Right, on with the data cleaning, starting with the easiest stuff first...
2. Remove some of the US sites from the analysis
The definitive list of US LTM sites to be included in the analysis is attached to John’s e-mail from 26/05/2016 at 16:35. After discussion with Tore, the best way to remove the unwanted sites is to create a new project called XXX_US_LTM_EXCLUDED and shift the sites marked EXCLUDE in John's spreadsheet over to that.
First, let's see how the US sites currently in the database compare to those in John's spreadsheet. Start by reading the spreadsheet.
End of explanation
# Get the ICPW_TOCTRENDS_2015_US_LTM sites from RESA2
sql = ('SELECT * '
'FROM resa2.projects_stations '
'WHERE project_id = 3870')
r_df = pd.read_sql_query(sql, engine)
print 'Total number of US sites in RESA2: %s.' % len(r_df)
r_df.head()
Explanation: Next, a quick manual check in RESA2 shows that the ICPW_TOCTRENDS_2015_US_LTM project has PROJECT_ID=3870, so let's get the sites associated with that project from the database.
End of explanation
all_df = pd.merge(r_df, j_df, how='left',
left_on='station_id', right_on='Station ID')
all_df[pd.isnull(all_df['Station ID'])]
Explanation: So there are 3 more sites in RESA2 than in John's spreadsheet (and that's including the sites John would like removing). It looks as though the station IDs are compatible between the two datasets, so to try to work out what's going on we can match based on these columns and (hopefully) identify the 3 mystery sites.
End of explanation
all_df = pd.merge(r_df, j_df, how='outer',
left_on='station_id', right_on='Station ID')
all_df[pd.isnull(all_df['Station ID']) | pd.isnull(all_df['station_id'])]
Explanation: Argh! There are 5 sites in the table above, not 3. This implies that:
There are sites in RESA2 that are not in John's spreadsheet, and <br><br>
There are sites in John's spreadsheet that are not in RESA2.
Repeating the above code using an outer join rather than a left join should make this apparent.
End of explanation
# List of sites marked 'EXCLUDE' in John's spreadsheet
j_exc = j_df.query('INC_EXC == "EXCLUDE"')
j_exc_li = list(j_exc['Station ID'].values)
# List of sites already in RESA2 to exclude
r_exc = all_df[pd.isnull(all_df['Station ID'])]
r_exc_li = [int(i) for i in list(r_exc['station_id'].values)]
# Concatenate lists and convert to tuple
exc_tu = tuple(j_exc_li + r_exc_li)
# Extract matches from database
sql = ('SELECT * '
'FROM resa2.projects_stations '
'WHERE project_id = 3870 '
'AND station_id IN %s' % str(exc_tu))
exc_df = pd.read_sql_query(sql, engine)
print 'Number of sites to exclude from US_LTM project: % s.' % len(exc_df)
exc_df
Explanation: The first 5 rows of this table are sites that appear in RESA2, but which are not in John's spreadsheet; the last two rows appear in John's spreadsheet but are not in RESA2 (or, at least, they're not associated with the ICPW_TOCTRENDS_2015_US_LTM project).
Decide to do the following:
Assume the 5 sites not mentioned in John's spreadsheet are not needed in the analysis, so exclude them from RESA2 along with the others marked for exclusion (check this with John). <br><br>
Add one new site (US74; Mud Pond, Maine) to the project (or database?). <br><br>
Do nothing for Willis Lake, Adirondacks, because John has it marked for exclusion anyway.
NB: There is another Mud Pond in both RESA2 and John's spreadsheet (station ID 37063; site code 1E1-134). This Mud Pond has an almost identical latitude to Mud Pond, Maine, but a significantly different longitude. I assume these really are two different sites, as specified in John's spreadsheet, but perhaps double-check this with John too. For reference, the two relevant entries from John's spreadsheet are:
| Station ID | Station Code | Station name | Latitude | Longitude | INC_EXC |
|:----------:|:------------:|:---------------:|:--------:|:---------:|:-------:|
| 23709 | US74 | Mud Pond, Maine | 44.6306 | -65.0939 | INCLUDE |
| 37063 | 1E1-134 | MUD POND | 44.633 | -68.0908 | INCLUDE |
NB2: There is a Willis Lake already in RESA2, with exactly the same geographic co-ordinates as the Willis Lake, Adirondacks highlighted in the above table. John's spreadsheet actually contains two versions of this site as well (shown in the table below) and both have valid station IDs, so I guess he must have found it duplicated somewhere in RESA2.
| Station ID | Station Code | Station name | Latitude | Longitude | INC_EXC |
|:----------:|:------------:|:------------------------:|:--------:|:---------:|:-------:|
| 23683 | US48 | Willis Lake, Adirondacks | 43.3714 | -74.2463 | EXCLUDE |
| 37003 | 050215O | WILLIS LAKE | 43.3714 | -74.2463 | EXCLUDE |
Regardless, both these are marked as EXCLUDE, so as long as I make sure both are removed from the project in RESA2 everything should be OK.
2.1. Exclude sites
Following Tore's advice, I've created a new project in RESA2 called ICPW_TOCTRENDS_2015_US_LTM_EXCLUDED, which has project code 4150. I want to associate all the sites marked EXCLUDE in John's spreadsheet, plus the five sites listed above, with this new code. To do this, I first need a list of all the stations I want to move. In total, there should be 20 stations marked for exclusion (15 from John's spreadsheet and 5 from above), but as Willis Lake, Adirondacks (station code US48) is not currently linked to the project, I'm only expecting 19 matches from the database itself. Let's check.
End of explanation
# Move sites to EXCLUDED project
sql = ('UPDATE resa2.projects_stations '
'SET project_id=4150 '
'WHERE project_id = 3870 '
'AND station_id IN %s' % str(exc_tu))
# Only run the code below when you're sure it's correct and you're
# finished with the code above!
result = conn.execute(sql)
Explanation: So far so good. The final step is to use an update query to change the project code for these sites from 3870 to 4150. NB: This cannot be undone, and you should only run this code once. It will also make changes to the database, so all the code above here will return different results if you run it again in the future.
End of explanation
# Search all sites in RESA2 for site ID 23709
sql = ('SELECT * '
'FROM resa2.stations '
'WHERE station_id = 23709')
df = pd.read_sql_query(sql, engine)
df
Explanation: Having made these changes, there should now be fewer sites associated with the ICPW_TOCTRENDS_2015_US_LTM project. There were originally 90 sites in RESA2 and we've just excluded 19, so there should be 71 left. John's spreadsheet has 72 US sites on the "include" list, but this is OK because we haven't yet added Mud Pond, Maine - that's the next step.
2.2. Add a new station
John's spreadsheet includes a Station ID for Mud Pond, Maine (23709), which I assume he got from RESA2, so perhaps the site is actually in the database after all, but simply not associated with the ICPW_TOCTRENDS_2015_US_LTM project. Let's check.
End of explanation
# Get the updated ICPW_TOCTRENDS_2015_US_LTM sites from RESA2
sql = ('SELECT * '
'FROM resa2.projects_stations '
'WHERE project_id = 3870')
r_df = pd.read_sql_query(sql, engine)
print 'Total number of US LTM sites in RESA2: %s.' % len(r_df)
Explanation: OK, so the site is there (it looks as though Tore added it back in March 2011), it just hasn't been associated with the updated TOC trends analysis. This should be a fairly quick fix via Access. Try the following:
Open Access and make an ODBC connection to RESA2's PROJECTS_STATIONS table. <br><br>
Add a new row linking station_id 23709 to project_id 3870.
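(The same link could also be made directly from Python through the connection created earlier - a rough sketch, assuming PROJECTS_STATIONS only needs these two columns populated:
sql = ('INSERT INTO resa2.projects_stations (station_id, project_id) '
       'VALUES (23709, 3870)')
# result = conn.execute(sql)
In this case the row was added via Access instead.)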
With a bit of luck, if we now query the database for the list of stations associated with ICPW_TOCTRENDS_2015_US_LTM, we should get all 72 sites.
End of explanation
# Get the site details from RESA2 for the 72 sites
sql = ('SELECT st.station_id, st.station_code, st.station_name, st.latitude, st.longitude '
'FROM resa2.projects_stations ps, resa2.stations st '
'WHERE ps.station_id=st.station_id '
'AND ps.project_id=3870')
r_df = pd.read_sql_query(sql, engine)
r_df.head()
Explanation: Good. The final thing for this task is to make sure the site details in RESA2 match the information in John's spreadsheet. To check this we first need to extract the site details from RESA2.
End of explanation
# Join John's data to RESA2 data
us_df = pd.merge(r_df, j_df, how='left',
left_on='station_id', right_on='Station ID')
# Check columns match
print 'Station IDs match: ', (us_df['station_id'] == us_df['Station ID']).all()
print 'Station codes match: ', (us_df['station_code'] == us_df['Station Code']).all()
print 'Station names match: ', (us_df['station_name'] == us_df['Station name']).all()
print 'Station latitudes match: ', np.allclose(us_df['latitude'], us_df['Latitude']) # Within tolerance of 1E-8
print 'Station longitudes match:', np.allclose(us_df['longitude'], us_df['Longitude']) # Within tolerance of 1E-8
Explanation: And then join them to the information in John's spreadsheet, so we can check whether the columns agree.
End of explanation
stn_xlsx = (r'\\niva-of5\osl-userdata$\JES\Documents\James_Work\Staff\Heleen_d_W'
r'\ICP_Waters\TOC_Trends_Analysis_2015\Data\sites_2015_tidied.xlsx')
stn_df = pd.read_excel(stn_xlsx, sheetname='DATA')
# Replace spaces in header with underscores
stn_df.columns = [x.replace(' ', '_') for x in stn_df.columns]
stn_df.head()
Explanation: 2.3. Summary for US LTM sites
The RESA2 project ICPW_TOCTRENDS_2015_US_LTM now contains only the 72 sites marked INCLUDE in John's spreadsheet. <br><br>
The station IDs, codes, names and geographic co-ordinates for these 72 sites in RESA2 are identical to those in John's spreadsheet. I haven't checked any of the other site properties yet, though - that's another task.
3. Site properties (location, land use and elevation)
The next main task is to make sure the site properties for all stations are as complete as possible. This is likely to be fiddly.
In particular, we're interested in having the following details for each site:
Geographic location, specified as latitude and longitude using the WGS84 GCS (EPSG 4326). <br><br>
Mean catchment elevation (not just sampling point elevation). <br><br>
Land cover proportions, using the original land cover classes where possible:
Coniferous forest
Deciduous forest
Heathlands/scrub/shrub
Grasslands
Wetlands/peatlands
Bare rock/barren
Agriculture
Water (excluding lakes)
Water
NB: If separate coniferous and deciduous classes are not available, an aggregated forestry class is acceptable.
NB2: Wetlands and peatlands are not the same. Keep them separate for now, although they’ll probably be treated together in the final analysis. This means countries should report either wetlands or peatlands – not both.
NB3: Land use proportions should sum to 100% and there should be no blanks/data gaps. This may require the addition of an “Other” category to allow for the fact that not all land classes are covered by the classification scheme above. Where possible, I’ll try to assign values for any new categories explicitly (i.e. based on information from the focal centres) rather than implicitly (i.e. as 100% minus the sum of everything else).
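As a rough illustration of the implicit approach (with hypothetical column names - the real export uses different headers), an 'Other' class could be derived like this:
# Sketch only: pad land use up to 100% with an 'Other' class
land_cols = ['Coniferous', 'Deciduous', 'Heathland', 'Grassland',
             'Wetland', 'Bare_rock', 'Agriculture', 'Water']
stn_df['Other'] = (100 - stn_df[land_cols].sum(axis=1)).clip(lower=0)
Explicit values from the focal centres are still preferable, as noted above.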
I suspect filling in the missing information will require a bit of to-ing and fro-ing with the focal centres, which unfortunately could take a while.
The first step is to see what information we already have. The easiest way to do this is to extract all the relevant site details from RESA2 to Excel. The following RESA2 projects look like they should be relevant:
ICPW_TOCTRENDS_2015_CA_ATL
ICPW_TOCTRENDS_2015_CA_DO
ICPW_TOCTRENDS_2015_CA_ICPW
ICPW_TOCTRENDS_2015_CA_NF
ICPW_TOCTRENDS_2015_CA_QU
ICPW_TOCTRENDS_2015_CZ
ICPW_TOCTRENDS_2015_CZ2
ICPW_TOCTRENDS_2015_FL
ICPW_TOCTRENDS_2015_NO
ICPW_TOCTRENDS_2015_SE
ICPW_TOCTRENDS_2015_SE_RIVERS
ICPW_TOCTRENDS_2015_UK
ICPW_TOCTRENDS_2015_US_LTM
ICPW_TOCTRENDS_2015_US_TIME
Taken together, these projects are associated with 704 sites. This makes sense, because when I previously extracted data for matching sites between the 2006 and 2015 analyses (see notebook here), I found 722 sites for the 2015 analysis, including 90 sites for the US LTM project. Having now cleaned up the US sites (above), we have 72 sites in this project, and $722 - (90 - 72) = 704$.
However, before going any further, I'd like to agree on a definitive list of RESA2 projects for inclusion in the subsequent analysis. After an e-mail discussion with Heleen (e.g. 16/06/2016 at 16:53), we've decided to include the following 13 projects:
ICPW_TOCTRENDS_2015_CA_ATL
ICPW_TOCTRENDS_2015_CA_DO
ICPW_TOCTRENDS_2015_CA_ICPW
ICPW_TOCTRENDS_2015_CA_NF
ICPW_TOCTRENDS_2015_CA_QU
ICPW_TOCTRENDS_2015_CZ
ICPW_TOCTRENDS_2015_CZ2
ICPW_TOCTRENDS_2015_FL
ICPW_TOCTRENDS_2015_NO
ICPW_TOCTRENDS_2015_SE
ICPW_TOCTRENDS_2015_UK
ICPW_TOCTRENDS_2015_US_LTM
ICPWaters Ca
(The last of these is a project I've not looked at before, but hopefully it won't cause any new problems).
In total, there are 605 sites associated with these projects, and I've exported all their relevant properties to an Excel file called sites_2015_tidied.xlsx. Let's read the spreadsheet and see what's missing.
End of explanation
# For details of the 'query NaN' hack used below, see:
# http://stackoverflow.com/questions/26535563/querying-for-nan-and-other-names-in-pandas
# Stns with missing co-ords
stn_df.query('(Latitude != Latitude) or (Longitude != Longitude)')
Explanation: 3.1. Sites with missing geographic co-ordinates
End of explanation
# Project Langtjern co-ordinates to WGS84 lat/long
from pyproj import Proj, transform
# Initialise spatial refs.
nor_proj = Proj(init='epsg:32633')
wgs_proj = Proj(init='epsg:4326')
# Transform
lon, lat = transform(nor_proj, wgs_proj, 209388, 6704530)
print 'Langtjern lat:', lat
print 'Langtjern lon:', lon
Explanation: Slightly ironically, Langtjern is one of the sites with missing co-ordinates! In reality, RESA2 does have location information for Langtjern, but it's specified using UTM Zone 33N (EPSG 32633) co-ordinates: (209388, 6704530). Converting these to WGS84 is easy enough, and is done by the code below.
Metadata for the UK sites can be found here:
K:\Prosjekter\langtransporterte forurensninger\O-23300 - ICP-WATERS - HWI\Database\2015 DOC analysis\data delivery\UK\copy of metadata UK from Don.xlsx
However, this file doesn't include details for Loch Coire Fionnaraich. Searching online, it seems that the loch is part of the UK Upland Water Monitoring Network, which states the site co-ordinates as latitude 57.4917, longitude -5.4306. Some information regarding land cover and altitude for this site can also be found at the UK Lakes Portal, which could be useful later. Contact Don to see whether this site should be included and, if so, check these data are correct.
End of explanation
# Stns with missing mean altitude
df = stn_df.query('(Catchment_mean_eleveation != Catchment_mean_eleveation) and '
'(ECCO_ALTITUDE_AVERAGE != ECCO_ALTITUDE_AVERAGE)')
df = df[['Station_ID', 'Station_Code', 'Station_name', 'Country']]
grpd = df.groupby(['Country'])
mis_elev = grpd.count()[['Station_ID']]
mis_elev.columns = ['Number of sites missing elevation data',]
print 'Total number of sites without average elevation data:', len(df)
mis_elev
Explanation: I've used an Access ODBC connection to update the co-ordinates for these two sites in RESA2.
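(The equivalent update could also be issued through the SQLAlchemy connection - a sketch, assuming the STATIONS table stores plain latitude/longitude columns, with a placeholder station ID:
sql = ('UPDATE resa2.stations '
       'SET latitude = %s, longitude = %s '
       'WHERE station_id = %s' % (lat, lon, 0))  # 0 is a placeholder station ID
# conn.execute(sql)
In practice the edits were made via Access ODBC as described.)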
3.2. Sites with missing altitude information
The next step is to identify sites without mean catchment elevation data. There are two columns in the database recording this kind of information: ECCO_ALTITUDE_AVERAGE and Catchment_mean_eleveation (sic), but it looks as though quite a few sites have blanks in both these columns. Let's group the sites with missing data by country and count them.
End of explanation
# Norwegian stns with missing mean altitude
df = stn_df.query('(Catchment_mean_eleveation != Catchment_mean_eleveation) and '
'(ECCO_ALTITUDE_AVERAGE != ECCO_ALTITUDE_AVERAGE) and '
'(Country == "Norway")')
df = df[['Station_ID', 'Station_Code', 'Station_name', 'Country']]
df
Explanation: (Aside: Note that one of the US sites appears to be labelled "US", whereas the others are all labelled "United States". I've corrected this in RESA2 via Access ODBC, so it shouldn't show up again).
Of the 605 sites, 131 have missing entries in both the Catchment_mean_eleveation (sic) and ECCO_ALTITUDE_AVERAGE columns. Consider each country in turn.
3.2.1. United Kingdom
The single UK site is Loch Coire Fionnaraich. As noted above, this does not appear in the spreadsheet of UK site metadata, so check with Don to see whether this site should be included and, if so, ask for the mean catchment elevation.
3.2.2. Norway
I assume NIVA will be able to supply catchment properties for the 7 Norwegian sites pretty easily. The sites in question are listed below. Contact Heleen to see whether she knows of elevation data (or catchment boundaries) for these locations (e-mail sent 20/06/2016 at 17:09).
End of explanation
# Swedish stns with missing mean altitude
df = stn_df.query('(Catchment_mean_eleveation != Catchment_mean_eleveation) and '
'(ECCO_ALTITUDE_AVERAGE != ECCO_ALTITUDE_AVERAGE) and '
'(Country == "Sweden")')
df = df[['Station_ID', 'Station_Code', 'Station_name', 'Country']]
df
Explanation: 3.2.3. Sweden
The two Swedish sites without elevation data are shown below. Searching the web, I can find data download pages for Svinarydsjön and Gosjön, but neither of these seems to have catchment characteristics such as mean elevation. Salar is probably a good person to ask about this initially - e-mail sent 20/06/2016 at 17:09.
End of explanation
# US stns with missing mean altitude
df = stn_df.query('(Catchment_mean_eleveation != Catchment_mean_eleveation) and '
'(ECCO_ALTITUDE_AVERAGE != ECCO_ALTITUDE_AVERAGE) and '
'(Country == "United States")')
df = df[['Station_ID', 'Station_Code', 'Station_name', 'Country', 'Latitude', 'Longitude']]
df
Explanation: 3.2.4. United States
The five US sites with missing elevation information are shown below.
End of explanation
can_xlsx = (r'\\niva-of5\osl-userdata$\JES\Documents\James_Work\Staff\Heleen_d_W'
r'\ICP_Waters\TOC_Trends_Analysis_2015\Data\canadian_raw_vs_resa2.xlsx')
can_df = pd.read_excel(can_xlsx, sheetname='RESA2_CA')
can_df = can_df[['Station ID', 'Strip_X15_Code', 'Station name',
'Latitude', 'Longitude', 'In_Raw']]
can_df.query('In_Raw == 0')
Explanation: In his e-mail sent 23/05/2016 at 16:24, John sent me a file containing the mean elevation and land use proportions for 86 US sites. Subsequently, some of these sites have been excluded (see John's e-mail from 26/05/2016 at 16:35), but the original file nevertheless includes land use proportions and mean elevations for the first four sites in the table above. However, although the station codes appear to match, some of the geographic co-ordinates are slightly different. The table below shows the co-ordinates in John's LTM_WSHED_LANDCOVER.csv file, which can be compared to the values in the table above (which are from RESA2, and which also match the figures in John's US.sites.include.exclude.xlsx spreadsheet).
| SITE_ID | LATDD | LONDD |
|:---------:|:-------:|:--------:|
| 1364959 | 41.9367 | -74.3764 |
| 1434025 | 41.9869 | -74.5031 |
| 1434021 | 42.0111 | -74.4147 |
| 143400680 | 41.9725 | -74.4483 |
The differences in co-ordinates are small (< 0.1 degrees of longitude, which is approximately 11 km), and they are probably due to people using slightly different co-ordinate transformations from the original (projected?) spatial reference system into WGS84. Nevertheless, check with John before using this spreadsheet to update the elevation and land use information for the US LTM sites. Also need to ask John for elevation and land use statistics for Mud Pond, Maine (US74). This station code doesn't appear in LTM_WSHED_LANDCOVER.csv (or, if it does, its co-ordinates and station code are specified differently), so at present I don't have any land use or elevation data for this site.
3.2.5. Canada
The majority of the sites with missing data are Canadian (116 sites in total), and the metadata files we currently have for Canada are on the network here:
K:\Prosjekter\langtransporterte forurensninger\O-23300 - ICP-WATERS - HWI\Database\2015 DOC analysis\data delivery\CA
None of these files include catchment mean elevations, though (and most are missing land use information too - see below). Most of them do have the elevation of the sampling location, but in our meeting back in May Don didn't think this would be sufficient for the analysis.
Strangely, there are only 109 Canadian sites in the raw data folder on the network, so where have the other 7 come from?
| Country | Region code | No. of sites | Contact |
|:-------:|:-----------:|:------------:|:---------------:|
| Canada | Dorset | 8 | Andrew Paterson |
| Canada | QC | 37 | Suzanne Couture |
| Canada | Atl | 52 | Suzanne Couture |
| Canada | NF | 12 | Suzanne Couture |
| | | | |
| Total | | 109 | |
Sorting this out is likely to be fiddly. As a start, I've created a new Excel file (canadian_raw_vs_resa2.xlsx) and copied the raw data for the 109 sites into one sheet and the RESA2 data for the 116 sites into another. Unfortunately, the site codes, names and even co-ordinates are significantly different between the two sheets (!), so there's no robust way of matching them automatically. Linking by name using a simple VLOOKUP function in Excel returns 93 matches; doing the same using the site code (after stripping off the X15:) gets me another 5. The rest will need to be done manually. During this process I've discovered that MOUNT TOM LAKE appears twice in the raw data (in the file Site characteristics ICP Waters trend analysis_Atl_final.xlsx), so there are actually only 108 Canadian sites in the folder above.
The overall conclusion here is that I need to contact Suzanne and Andrew to see if they have mean catchment elevation data for all their sites. In addition, there are another 8 Canadian sites $(= 116 - 108)$ where I don't know who supplied the original data. These sites are listed below. E-mail Heleen (20/06/2016 at 17:09) to ask if she knows where these data came from originally.
End of explanation
# US stns with missing mean altitude
df = stn_df.query('(Catchment_mean_eleveation != Catchment_mean_eleveation) and '
'(ECCO_ALTITUDE_AVERAGE != ECCO_ALTITUDE_AVERAGE) and '
'(Country == "United States")')
df = df[['Station_ID', 'Station_Code', 'Station_name', 'Country', 'Latitude', 'Longitude']]
df
Explanation: 3.3. Sites with missing land use information
It seems as though different countries have reported their land use in different ways, so I suspect this is going to be complicated. The best approach is probably going to be to consider each country in turn.
3.3.1. Canada
There is a small amount of land use information present for the Canadian sites, but it's very patchy and the values don't add up to anything like 100%. As I need to ask for elevation information for all of these sites anyway, it's probably a good idea to ask for land use at the same time.
3.3.2. Czech Republic
All the land use proportions add to 100%. Woohoo!
3.3.3. Finland
Finland seems to have provided fairly comprehensive land use information, but the values rarely add up to 100% (108% in one case!). We can either query this with the Finns, or just assume the remaining land belongs to an "Other" category (not sure what to do with the 108% site though). Ask Heleen what she'd like to do.
3.3.4. Norway
Like Finland, the Norwegian data seems rather patchy and rarely sums to 100%. Ask Heleen where these values came from originally and whether she'd like to refine/improve them.
3.3.5. Poland
The Polish data is a mystery. Total land cover percentages range from 113% to 161% and the values for Grassland are identical to those for Wetland. Even allowing for this duplication, the numbers still don't make much sense. I also can't find the raw data on the network anywhere - see if Heleen knows where the values come from?
3.3.6. Slovenia
The situation for Slovenia is very similar to Poland, with total land cover percentages well over 100%. Also, just like Poland, the Grassland and Wetland percentages are identical, which is obviously an error. This strongly suggests some kind of problem in the data upload procedure, but the issue isn't present for all countries. I'm not sure what's going on here yet, but this is definitely something to watch out for.
Actually, I'm a bit surprised Slovenia is in here at all - it's not one of the countries listed in project codes at the start of section 3. Nor is Poland for that matter! After a quick check in RESA2, it seems that Poland and Slovenia are both grouped under the Czech Republic project. Hmm.
As with Poland, I'm struggling to find the original metadata for the Slovenian sites. Ask Heleen to see if she can point me in the right direction.
3.3.7. Sweden
The Swedish data look fairly complete, although there's no distinction made between deciduous and coniferous forestry (there's just an aggregated Total forest class). Most of the land use proportions add to somewhere close to 100%, but there are about 45 sites where the total is less than 90% and 19 sites where it's less than 50%. Some of these probably need following up. Ask Salar if he knows where to get land use proportions for Swedish sites.
3.3.8. United Kingdom
Land use proportions for all but two of the UK sites sum to exactly 100%. It's late on a Friday afternoon, and this simple result makes me feel so grateful I could almost weep. It must be nearly home time!
The two sites that need correcting are:
Loch Coire Fionnaraich (station code UK_26), which has no land use (or elevation) data at all yet (but information is available from the UK Lakes Portal), and <br><br>
Llyn Llagi (station code UK_15), where the summed proportions equal 105%. This error seems to stem from the raw metadata for the UK sites, which can be found here:
K:\Prosjekter\langtransporterte forurensninger\O-23300 - ICP-WATERS - HWI\Database\2015 DOC analysis\data delivery\UK\copy of metadata UK from Don.xlsx
Ask Don if the data from the UK Lakes Portal is suitable and also whether he can supply corrected proportions for Llyn Llagi.
3.3.9. United States
The land use proportions for the US sites are almost complete and they sum to (very close to) 100%, which is great. There are two things to check:
The sum of the Deciduous and Coniferous forestry classes is often significantly less than for the Total forest class. The overall land use proportions only add to 100% if the Total forest class is used, rather than considering deciduous and coniferous classes separately. <br><br>
Land use proportions are missing for 5 US sites. These are the same sites identified above as having missing elevation data, but for completeness they are shown again in the table below.
End of explanation
# Read distances to coastline table
dist_csv = (r'\\niva-of5\osl-userdata$\JES\Documents\James_Work\Staff\Heleen_d_W'
r'\ICP_Waters\TOC_Trends_Analysis_2015\Data\distance_to_coast.csv')
dist_df = pd.read_csv(dist_csv)
dist_df.head()
Explanation: John's file LTM_WSHED_LANDCOVER.csv contains proportions for the first 4 of these, but I'm still lacking land cover data for Mud Pond, Maine (station code US74). Ask John for this data and also double check forestry proportions.
4. Distance to coast
During the meeting on 27/05/2016, Don and John thought it would be helpful to know the distance from each site to the nearest piece of coastline (in any direction). In Euclidean co-ordinates this is a simple calculation, but at intercontinental scale using WGS84 we need to perform geodesic calculations on the ellipsoid, which are more complicated. The most recent versions of ArcGIS introduced a method='GEODESIC' option for exactly this situation, but this software isn't yet available at NIVA. Major spatial database platforms such as Oracle Spatial and PostGIS also include this functionality, but the NIVA Oracle instance isn't spatially enabled and setting up a PostGIS server just for this calculation seems a bit over the top. I can think of two ways of doing this:
Code my own solution using the Vincenty formula. This is not too difficult in principle, but optimising it so that it runs efficiently might prove difficult. <br><br>
Use SpatiaLite, which is the spatially enabled version of SQLite. This should be much simpler to setup than PostGIS, but last time I used SpatiaLite for geodesic calculations (several years ago) I had some reservations about the accuracy of the output. These issues have likely been fixed in the meantime, but it would be good to check.
I think the best way is to perform the calculation using SpatiaLite and then check the results are sensible using my own code as well. I can also compare my output to John's results for the 2006 analysis, although I think John may have done something a bit more sophisticated, such as measuring distances in a particular direction depending on the prevailing weather.
4.1. SpatiaLite
Start off by downloading and installing SpatiaLite and the SpatiaLite GUI from here. Then obtain a medium resolution (1:50 million scale) global coastline dataset in WGS84 co-ordinates from Natural Earth and, using ArcGIS, merge each of the separate coastline features into a single MULTILINESTRING object. Next, create a point shapefile (using the WGS84 spatial reference system) from the latitudes and longitudes in RESA2. Import both these shapefiles into a new SpatiaLite database called dist_to_coast.sqlite.
The SpatiaLite SQL code below calculates the distance from each station to the nearest point on the coast.
CREATE TABLE distances AS
SELECT
s.Station_ID AS station_id,
s.Station_Co AS station_code,
s.Station_na AS station_name,
s.Country AS country,
ST_DISTANCE(c.Geometry, s.Geometry, 1) AS distance_m
FROM ne_50m_coastline AS c, toc_2015_sites AS s;
This table has been exported as distance_to_coast.csv.
End of explanation
from geopy.distance import vincenty
# File paths
coast_path = (r'\\niva-of5\osl-userdata$\JES\Documents\James_Work\Staff\Heleen_d_W'
r'\ICP_Waters\TOC_Trends_Analysis_2015\Data\coast_points.csv')
site_path = (r'\\niva-of5\osl-userdata$\JES\Documents\James_Work\Staff\Heleen_d_W'
r'\ICP_Waters\TOC_Trends_Analysis_2015\Data\sites_2015_locs.csv')
# Read co-ords
coast_df = pd.read_csv(coast_path)
coast_df = coast_df[['POINT_Y', 'POINT_X']]
coast_df.columns = ['lat', 'lon']
site_df = pd.read_csv(site_path, delimiter=';')
# List of min distances
min_dists = []
# Loop over sites
for site in site_df.itertuples():
s_lat, s_lon = site.Latitude, site.Longitude
# Pre-allocate array for distances
dists = np.zeros(shape=(len(coast_df)))
# Loop over coast
for cst_pt in coast_df.itertuples():
# Co-ords on coast
c_lat, c_lon = cst_pt.lat, cst_pt.lon
# Index for array
idx = cst_pt.Index
# Insert into array
dists[idx] = vincenty((s_lat, s_lon),
(c_lat, c_lon)).meters
# Append min distance
min_dists.append(dists.min())
# Add distance column to site_df
site_df['distance_m'] = np.array(min_dists)
site_df.head()
Explanation: 4.2. The Vincenty formula
To check the results above, I've converted the Natural Earth coastline into a set of points spaced at 0.05 degree intervals. I've then deleted all the points that are very obviously far away from our stations of interest (e.g. Antarctica) and used the Add XY Co-ordinates tool to calculate the latitude and longitude for each of these points. This leaves me with 21,718 coastline points in total. A very naive and inefficient way to estimate the nearest point on the coast is therefore to calculate the distances from each station to all 21,718 coastal points, and then choose the smallest value in each case. This is incredibly badly optimised and requires $605 * 21,718 = 13,139,390$ calculations, but I can let it run while I do other things and it should provide a useful check on the (hopefully more accurate) results from SpatiaLite.
Begin by reading the site co-ordinates and coastal point co-ordinates into two dataframes, and then loop over all combinations.
End of explanation
# Compare Vincenty and SpatiaLite distance estimates
# SpatiaLite data
df1 = dist_df[['station_id', 'distance_m']]
df1.columns =['stn_id', 'spatialite']
# Vincenty data
df2 = site_df[['Station ID', 'distance_m']]
df2.columns =['stn_id', 'vincenty']
# Combine
df = pd.merge(df1, df2, how='outer',
left_on='stn_id', right_on='stn_id')
# Convert to km
df['vincenty'] = df['vincenty'] / 1000.
df['spatialite'] = df['spatialite'] / 1000.
# Plot
fig, ax = plt.subplots()
# Get labels for points
labs = list(df['stn_id'].values)
labs = [str(i) for i in labs]
# Scatter plot
scatter = ax.scatter(df['vincenty'].values, df['spatialite'].values)
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labs)
mpld3.plugins.connect(fig, tooltip)
ax.set_title("Distance to coast")
ax.set_xlabel("Vincenty distance (km)")
ax.set_ylabel("SpatiaLite distance (km)")
mpld3.display()
Explanation: 4.3. Comparison
We can now compare the two distance estimates. They will not be exactly the same due to the way I've discretised the coastline for the Vincenty estimate, but they should be similar.
End of explanation
from mpl_toolkits.basemap import Basemap
# Get sites where distance to coast estimates differ by > 75 km
df['diff'] = np.absolute(df['spatialite'] - df['vincenty'])
big_dif = df.query('diff > 75')
big_dif = pd.merge(big_dif, site_df, how='left',
left_on='stn_id', right_on='Station ID')
fig = plt.figure(figsize=(10, 10))
# Use a basic Mercator projection
m = Basemap(projection='merc',
llcrnrlat=50,
urcrnrlat=70,
llcrnrlon=0,
urcrnrlon=40,
lat_ts=20,
resolution='i')
# Add map components
m.fillcontinents (color='darkkhaki', lake_color='darkkhaki')
m.drawcountries(linewidth=1)
m.drawmapboundary(fill_color='paleturquoise')
# Map (long, lat) to (x, y) for plotting
x, y = m(big_dif['Longitude'].values, big_dif['Latitude'].values)
# Plot
plt.plot(x, y, 'or', markersize=5)
Explanation: The plot above is "interactive": hovering the mouse over the plot area should display pan and zoom tools near the plot's bottom left hand corner, and hovering the mouse over any point will display the station ID for that location. These tools can be used to explore the plot. Most of the distance estimates are essentially identical, but a few are significnatly different. Lets plot those where the difference is large (> 75 km).
End of explanation
# Read John's distance data
j_dist_path = (r'\\niva-of5\osl-userdata$\JES\Documents\James_Work\Staff\Heleen_d_W'
r'\ICP_Waters\TOC_Trends_Analysis_2015\Data\Distance.from.coast.data.from.2006.xlsx')
j_df = pd.read_excel(j_dist_path, sheetname='Sheet1')
# Join to the other estimates
df2 = pd.merge(df, j_df, how='inner',
left_on='stn_id', right_on='SITE_NO')
df2 = df2[['spatialite', 'vincenty', 'DIST_COAST']]
df2.columns = ['spatialite', 'vincenty', 'john']
df2.dropna(how='any', inplace=True)
# Scatterplot matrix
sn.pairplot(df2, kind='reg', diag_kind='kde')
Explanation: The sites with very large errors are mostly located well inland, approximately equidistant from two possible coastlines. The methods used to calculate these distances are numerical (i.e. iterative), so I suspect the differences I'm seeing are associated with convergence of the algorithms, combined with my coarse discretisation of the coastline in the case of Vincenty.
For now, I'll use the estimates from SpatiaLite, as these should be more accurate.
4.4. Comparison to John's 2006 estimates
In his e-mail from 27/05/2016 at 14:42, John sent me the "distance to coast" dataset he created for the 2006 analysis. It's not easy to match sites between 2006 and 2015, but for those locations with a common site code we can easily compare the three estimates.
End of explanation |
4,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Scrape swear word list
We scrape swear words from the web from the site
Step4: Testing TextBlob
I don't really like TextBlob as it tries to be "nice", but lacks a lot of basic functionality.
Stop words not included
Tokenizer is pretty meh.
No built in way to obtain word frequency
Step5: Repetitive songs skewing data?
Some songs may be super repetitive. Let's look at a couple of songs that have the word in the title. These songs probably repeat the title a decent amount in their song, so treating all lyrics as one group of text is less reliable for analyzing frequency.
To simplify this process, we can look at only single word titles. This will at least give us a general idea if the data could be skewed by a single song or not.
Step6: Seems pretty reptitive
There are a handful of single word song titles that repeat the title within the song at least 10% of the time. This gives us a general idea that there is most likely a skew to the data. I think it is safe to assume that if a single word is repeated many times, the song is most likely repetitive.
Let's look at the song "water" by Ugly God to confirm.
Step7: Looking at swear word distribution
Let's look at the distribution of swear words... | Python Code:
import string
import os
import requests
from fake_useragent import UserAgent
from lxml import html
def requests_get(url):
ua = UserAgent().random
return requests.get(url, headers={'User-Agent': ua})
def get_swear_words(save_file='swear-words.txt'):
    """Scrapes a comprehensive list of swear words from noswearing.com."""
words = ['niggas']
if os.path.isfile(save_file):
with open(save_file, 'rt') as f:
for line in f:
words.append(line.strip())
return words
base_url = 'http://www.noswearing.com/dictionary/'
letters = '1' + string.ascii_lowercase
for letter in letters:
full_url = base_url + letter
result = requests_get(full_url)
tree = html.fromstring(result.text)
search = tree.xpath("//td[@valign='top']/a[@name and string-length(@name) != 0]")
if search is None:
continue
for result in search:
words.append(result.get('name').lower())
with open(save_file, 'wt') as f:
for word in words:
f.write(word)
f.write('\n')
return words
print(get_swear_words())
Explanation: Scrape swear word list
We scrape swear words from the web from the site:
http://www.noswearing.com/
It is a community driven list of swear words.
End of explanation
import os
import operator
import pandas as pd
from textblob import TextBlob, WordList
from nltk.corpus import stopwords
def get_data_paths():
dir_path = os.path.dirname(os.path.realpath('.'))
data_dir = os.path.join(dir_path, 'billboard-hot-100-data')
dirs = [os.path.join(data_dir, d, 'songs.csv') for d in os.listdir(data_dir)
if os.path.isdir(os.path.join(data_dir, d))]
return dirs
def lyric_file_to_text_blob(row):
    """Transform lyrics column to TextBlob instances."""
return TextBlob(row['lyrics'])
def remove_stop_words(word_list):
wl = WordList([])
stop_words = stopwords.words('english')
for word in word_list:
if word.lower() not in stop_words:
wl.append(word)
return wl
def word_freq(words, sort='desc'):
    """Returns frequency table for all words provided in the list."""
reverse = sort == 'desc'
freq = {}
for word in words:
if word in freq:
freq[word] = freq[word] + 1
else:
freq[word] = 1
return sorted(freq.items(), key=operator.itemgetter(1), reverse=reverse)
data_paths = corpus.raw_data_dirs()
songs = corpus.load_songs(data_paths[0])
songs = pd.DataFrame.from_dict(songs)
songs["lyrics"] = songs.apply(lyric_file_to_text_blob, axis=1)
all_words = WordList([])
for i, row in songs.iterrows():
all_words.extend(row['lyrics'].words)
cleaned_all_words = remove_stop_words(all_words)
cleaned_all_words = pd.DataFrame(word_freq(cleaned_all_words.lower()), columns=['word', 'frequency'])
cleaned_all_words
import pandas as pd
import nltk
def remove_extra_junk(word_list):
words = []
remove = [",", "n't", "'m", ")", "(", "'s", "'", "]", "["]
for word in word_list:
if word not in remove:
words.append(word)
return words
data_paths = corpus.raw_data_dirs()
songs = corpus.load_songs(data_paths[0])
songs = pd.DataFrame.from_dict(songs)
all_words = []
for i, row in songs.iterrows():
all_words.extend(nltk.tokenize.word_tokenize(row['lyrics']))
cleaned_all_words = [w.lower() for w in remove_extra_junk(remove_stop_words(all_words))]
freq_dist = nltk.FreqDist(cleaned_all_words)
freq_dist.plot(50)
freq_dist.most_common(100)
#cleaned_all_words = pd.DataFrame(word_freq(cleaned_all_words), columns=['word', 'frequency'])
#cleaned_all_words
Explanation: Testing TextBlob
I don't really like TextBlob as it tries to be "nice", but lacks a lot of basic functionality.
Stop words not included
Tokenizer is pretty meh.
No built in way to obtain word frequency
End of explanation
for i, song in songs.iterrows():
title = song['title']
title_words = title.split(' ')
if len(title_words) > 1:
continue
lyrics = song['lyrics']
words = nltk.tokenize.word_tokenize(lyrics)
clean_words = [w.lower() for w in remove_extra_junk(remove_stop_words(words))]
dist = nltk.FreqDist(clean_words)
freq = dist.freq(title_words[0].lower())
if freq > .1:
print(song['artist'], title)
Explanation: Repetitive songs skewing data?
Some songs may be super repetitive. Let's look at a couple of songs that have the word in the title. These songs probably repeat the title a decent amount in their song, so treating all lyrics as one group of text is less reliable for analyzing frequency.
To simplify this process, we can look at only single word titles. This will at least give us a general idea if the data could be skewed by a single song or not.
End of explanation
song_title_to_analyze = 'Water'
lyrics = songs['lyrics'].where(songs['title'] == song_title_to_analyze, '').max()
print(lyrics)
words = nltk.tokenize.word_tokenize(lyrics)
clean_words = [w.lower() for w in remove_extra_junk(remove_stop_words(words))]
water_dist = nltk.FreqDist(clean_words)
water_dist.plot(25)
water_dist.freq(song_title_to_analyze.lower())
Explanation: Seems pretty repetitive
There are a handful of single word song titles that repeat the title within the song at least 10% of the time. This gives us a general idea that there is most likely a skew to the data. I think it is safe to assume that if a single word is repeated many times, the song is most likely repetitive.
Let's look at the song "water" by Ugly God to confirm.
End of explanation
sws = []
for sw in set(corpus.swear_words()):
sws.append({'word': sw,
'dist': freq_dist.freq(sw)})
sw_df = pd.DataFrame.from_dict(sws)
sw_df.nlargest(10, 'dist').plot(x='word', kind='bar')
Explanation: Looking at swear word distribution
Let's look at the distribution of swear words...
End of explanation |
4,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VARMAX models
This is a brief introduction notebook to VARMAX models in Statsmodels. The VARMAX model is generically specified as
Step1: Model specification
The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).
Example 1
Step2: From the estimated VAR model, we can plot the impulse response functions of the endogenous variables.
Step3: Example 2
Step4: Caution | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
dta = sm.datasets.webuse('lutkepohl2', 'https://www.stata-press.com/data/r12/')
dta.index = dta.qtr
endog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]
Explanation: VARMAX models
This is a brief introduction notebook to VARMAX models in Statsmodels. The VARMAX model is generically specified as:
$$
y_t = \nu + A_1 y_{t-1} + \dots + A_p y_{t-p} + B x_t + \epsilon_t +
M_1 \epsilon_{t-1} + \dots M_q \epsilon_{t-q}
$$
where $y_t$ is a $\text{k_endog} \times 1$ vector.
End of explanation
exog = endog['dln_consump']
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog)
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
Explanation: Model specification
The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).
Example 1: VAR
Below is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.
End of explanation
ax = res.impulse_responses(10, orthogonalized=True).plot(figsize=(13,3))
ax.set(xlabel='t', title='Responses to a shock to `dln_inv`');
Explanation: From the estimated VAR model, we can plot the impulse response functions of the endogenous variables.
End of explanation
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
Explanation: Example 2: VMA
A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.
End of explanation
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
Explanation: Caution: VARMA(p,q) specifications
Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with caution (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.
End of explanation |
4,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PMOD ALS Sensor demonstration
This demonstration shows how to use the PmodALS. You will also see how to plot a graph using matplotlib.
The PmodALS and a light source are required, e.g. a cell phone flashlight.
The ambient light sensor is initialized and set to log a reading every 1 second. The sensor can be covered to reduce the light reading, and a light source can be used to increase the light reading
1. Use ALS read() to read the current room light
Step1: 2. Starting logging light once every second
Step2: 3. Modifying the light
Decrease the light reading by covering the sensor
Increase the light by shining a flashlight on the device
Stop the logging whenever you are finished trying to change the sensor's value.
Step3: 4. Plot values over time | Python Code:
from pynq import Overlay
Overlay("base.bit").download()
from pynq.iop import Pmod_ALS
from pynq.iop import PMODB
# ALS sensor is on PMODB
my_als = Pmod_ALS(PMODB)
my_als.read()
Explanation: PMOD ALS Sensor demonstration
This demonstration shows how to use the PmodALS. You will also see how to plot a graph using matplotlib.
The PmodALS and a light source are required, e.g. a cell phone flashlight.
The ambient light sensor is initialized and set to log a reading every 1 second. The sensor can be covered to reduce the light reading, and a light source can be used to increase the light reading.
1. Use ALS read() to read the current room light
End of explanation
my_als.start_log()
Explanation: 2. Starting logging light once every second
End of explanation
my_als.stop_log()
log = my_als.get_log()
Explanation: 3. Modifying the light
Decrease the light reading by covering the sensor
Increase the light by shining a flashlight on the device
Stop the logging whenever you are finished trying to change the sensor's value.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(log)), log, 'ro')
plt.title('ALS Sensor log')
plt.axis([0, len(log), min(log), max(log)])
plt.show()
Explanation: 4. Plot values over time
End of explanation |
4,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training, Tuning and Deploying a PyTorch Text Classification Model on Vertex AI
Fine-tuning pre-trained BERT model for sentiment classification task
Overview
This example is inspired from Token-Classification notebook and run_glue.py.
We will be fine-tuning bert-base-cased (pre-trained) model for sentiment classification task.
You can find the details about this model at Hugging Face Hub.
For more notebooks with state-of-the-art PyTorch/TensorFlow/JAX examples, you can explore Hugging Face Notebooks.
Dataset
We will be using IMDB movie review dataset from Hugging Face Datasets.
Objective
How to Build, Train, Tune and Deploy PyTorch models on Vertex AI and emphasize first class support for training and deploying PyTorch models on Vertex AI.
Table of Contents
This notebook covers following sections
Step1: We will be using Vertex AI SDK for Python to interact with Vertex AI services. The high-level aiplatform library is designed to simplify common data science workflows by using wrapper classes and opinionated defaults.
Install Vertex AI SDK for Python
Step2: Restart the Kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable following APIs in your project required for running the tutorial
Vertex AI API
Cloud Storage API
Container Registry API
Cloud Build API
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step5: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Step10: Training
In this section, we will train a PyTorch model by fine-tuning pre-trained model from Hugging Face Transformers. We will train the model locally first and then on Vertex AI training service.
Training locally in the notebook
Loading the dataset
For this example we will use IMDB movie review dataset from Hugging Face Datasets for sentiment classification task. We use the Hugging Face Datasets library to download the data. This can be easily done with the function load_dataset.
Step11: The datasets object itself is DatasetDict, which contains one key for the training, validation and test set.
Step12: To access an actual element, you need to select a split first, then give an index
Step13: Using the unique method to extract label list. This will allow us to experiment with other datasets without hard-coding labels.
Step14: To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset (automatically decoding the labels in passing).
Step15: Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a pre-trained Hugging Face Transformers Tokenizer class, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary) and puts them in the format the model expects, as well as generating the other inputs that the model requires.
To do all of this, we instantiate our tokenizer with the AutoTokenizer.from_pretrained method, which ensures
Step16: You can check type of models available with a fast tokenizer on the big table of models.
You can directly call this tokenizer on one sentence
Step17: Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in this tutorial if you're interested.
NOTE
Step19: Note that transformers are often pre-trained with sub-word tokenizers, meaning that even if your inputs have been split into words already, each of those words could be split again by the tokenizer. Let's look at an example of that
Step20: Fine-tuning the model
Now that our data is ready, we can download the pre-trained model and fine-tune it.
Fine Tuning involves taking a model that has already been trained for a given task and then tweaking the model for another similar task. Specifically, the tweaking involves replicating all the layers in the pre-trained model including weights and parameters, except the output layer. Then adding a new output classifier layer that predicts labels for the current task. The final step is to train the output layer from scratch, while the parameters of all layers from the pre-trained model are frozen. This allows learning from the pre-trained representations and "fine-tuning" the higher-order feature representations more relevant for the concrete task, such as analyzing sentiments in this case.
For the scenario in the notebook analyzing sentiments, the pre-trained BERT model already encodes a lot of information about the language as the model was trained on a large corpus of English data in a self-supervised fashion. Now we only need to slightly tune them using their outputs as features for the sentiment classification task. This means quicker development iteration on a much smaller dataset, instead of training a specific Natural Language Processing (NLP) model with a larger training dataset.
Since our task is sequence (sentence-level) classification, we use the AutoModelForSequenceClassification class. Like with the tokenizer, the from_pretrained method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which we can get from the features, as seen before)
Step21: NOTE
Step22: Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the batch_size defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay.
The last thing to define for our Trainer is how to compute the metrics from the predictions. You can define your custom compute_metrics function. It takes an EvalPrediction object (a namedtuple with a predictions and label_ids field) and has to return a dictionary string to float.
Step23: Now we create the Trainer object and we are almost ready to train.
Step24: You can add callbacks to the trainer object to customize the behavior of the training loop such as early stopping, reporting metrics at the end of evaluation phase or taking any decisions. In the hyperparameter tuning section of this notebook, we add a callback to trainer for automating hyperparameter tuning process.
We can now fine-tune our model by just calling the train method
Step25: The evaluate method allows you to evaluate again on the evaluation dataset or on another dataset
Step28: To get the other metrics computed such as precision, recall or F1 score for each category, we can apply the same function as before on the result of the predict method.
Run predictions locally with sample examples
Using the trained model, we can predict the sentiment label for an input text after applying the preprocessing function that was used during the training. We will run the predictions locally in the notebook and later show how you can deploy the model to an endpoint using TorchServe on Vertex AI Predictions.
Step29: Training on Vertex AI
You can do local experimentation on your Notebooks instance. However, for larger datasets or models, vertically scaled compute or horizontally distributed training is often required. The most effective way to perform this task is to leverage the Vertex AI custom training service, for the following reasons
Step30: Following is the setup.py file for the training application. The find_packages() function inside setup.py includes the trainer directory in the package as it contains __init__.py which tells Python Setuptools to include all subdirectories of the parent directory as dependencies.
Step31: Run the following command to create a source distribution, dist/trainer-0.1.tar.gz
Step32: Now upload the source distribution with training application to Cloud Storage bucket
Step33: Validate the source distribution exists on Cloud Storage bucket
Step34: [Optional] Run custom training job locally
Before submitting the job to cloud, you can run the training job locally by calling the trainer.task module directly
Step35: Run custom training job on Vertex AI
We use Vertex AI SDK for Python to create and submit training job to the Vertex AI training service.
Initialize the Vertex AI SDK for Python
Step36: Configure and submit Custom Job to Vertex AI Training service
Configure a Custom Job with the pre-built container image for PyTorch and training code packaged as Python source distribution.
NOTE
Step37: Monitoring progress of the Custom Job
You can monitor the custom job launched from Cloud Console following the link here or use gcloud CLI command gcloud beta ai custom-jobs stream-logs
Validate the model artifacts written to GCS by the training code after the job completes successfully
Step38: [Optional] Submit custom job using gcloud CLI using Python source distribution
You can submit the training job to Vertex AI training service using gcloud beta ai custom-jobs create. gcloud command stages your training application on GCS bucket and submits the training job.
gcloud beta ai custom-jobs create \
--display-name=${JOB_NAME} \
--region ${REGION} \
--python-package-uris=${PACKAGE_PATH} \
--worker-pool-spec=replica-count=1,machine-type='n1-standard-8',accelerator-type='NVIDIA_TESLA_V100',accelerator-count=1,executor-image-uri=${IMAGE_URI},python-module='trainer.task',local-package-path="../python_package/" \
--args="--model-name","finetuned-bert-classifier","--job-dir",$JOB_DIR
worker-pool-spec parameter defines the worker pool configuration used by the custom job. Following are the fields within worker-pool-spec
Step39: In addition to Cloud Console, you can monitor job progress by streaming logs using gcloud CLI by passing the job id
Step40: Build the image and tag the Container Registry path (gcr.io) that you will push to.
Step41: [Optional] Run Training Job Locally with Custom Container
Run the container locally to test it. When running on a machine with GPUs, you can use the --gpus all command line flag.
Step42: Run custom training job on Vertex AI with Custom Container
Before submitting the training job to Vertex AI, push the custom container image to Google Cloud Container Registry and then submit the training job to Vertex AI.
NOTE
Step43: Validate the custom container image in Container Registry
Step44: Initialize the Vertex AI SDK for Python
Step45: Configure and submit Custom Job to Vertex AI Training service
Configure a Custom Job with the custom container image with training code and other dependencies
NOTE
Step46: Monitoring progress of the Custom Job
You can monitor the custom job launched from Cloud Console following the link here or use gcloud CLI command gcloud beta ai custom-jobs stream-logs
[Optional] Submit Custom Job using gcloud CLI using custom container
You can submit the training job to Vertex AI training service using gcloud beta ai custom-jobs create with custom container spec. gcloud command submits the training job and launches worker pool with the custom container image specified.
gcloud beta ai custom-jobs create \
--display-name=${JOB_NAME} \
--region ${REGION} \
--worker-pool-spec=replica-count=1,machine-type='n1-standard-8',accelerator-type='NVIDIA_TESLA_V100',accelerator-count=1,container-image-uri=${CUSTOM_TRAIN_IMAGE_URI} \
--args="--model-name","finetuned-bert-classifier","--job-dir",$JOB_DIR
worker-pool-spec parameter defines the worker pool configuration used by the custom job. Following are the fields within worker-pool-spec
Step47: In addition to Cloud Console, you can monitor job progress by streaming logs using gcloud CLI by passing the job id
Step50: Initialize the Vertex AI SDK for Python
Step51: Configure and submit Hyperparameter Tuning Job to Vertex AI Training service
Configure a Hyperparameter Tuning Job with the custom container image with training code and other dependencies.
When configuring and submitting a Hyperparameter Tuning job, you need to attach a Custom Job definition with worker pool specs defining machine type, accelerators and URI for container image representing the custom container.
Step52: Define the training arguments with hp-tune argument set to y so that training application code can report metrics to Vertex AI
Step53: Create a CustomJob with worker pool specs to define machine types, accelerators and the custom container spec with the training application code
Step54: Define the parameter_spec as a Python dictionary object with the search space, i.e. the parameters to search and optimize. The key is the hyperparameter name passed as a command line argument to the training code and the value is the parameter specification. The spec requires the hyperparameter data type to be specified as an instance of a parameter value specification.
Refer to the documentation on selecting the hyperparameters to tune and how to define the parameter specification.
Step55: Define the metric_spec with name and goal of metric to optimize. The goal specifies whether you want to tune your model to maximize or minimize the value of this metric.
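A hedged sketch of the parameter_spec and metric_spec described above (the hyperparameter names and ranges are assumptions, not necessarily the notebook's exact choices):
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# Search space keyed by the command-line argument names the trainer accepts (assumed names)
parameter_spec = {
    "learning-rate": hpt.DoubleParameterSpec(min=1e-5, max=5e-5, scale="log"),
    "weight-decay": hpt.DiscreteParameterSpec(values=[0.001, 0.01, 0.1], scale=None),
}
# Metric reported by the training code via cloudml-hypertune, to be maximized
metric_spec = {"accuracy": "maximize"}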
Step56: Configure and submit a Hyperparameter Tuning Job with the Custom Job, metric spec, parameter spec and trial limits.
max_trial_count
Step57: Monitoring progress of the Custom Job
You can monitor the hyperparameter tuning job launched from Cloud Console following the link here or use gcloud CLI command gcloud beta ai custom-jobs stream-logs
After the job is finished, you can view and format the results of the hyperparameter tuning Trials (run by Vertex AI Training service) as a Pandas dataframe
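A hedged sketch of that step (assuming hp_job is the completed HyperparameterTuningJob object):
import pandas as pd
from google.protobuf.json_format import MessageToDict

rows = []
for trial in hp_job.trials:
    t = MessageToDict(trial._pb)  # convert the Trial proto to a plain dict
    row = {p["parameterId"]: p.get("value") for p in t.get("parameters", [])}
    metrics = t.get("finalMeasurement", {}).get("metrics", [])
    if metrics:
        row["metric"] = metrics[0].get("value")
    row["trial_id"] = t.get("id")
    rows.append(row)
trials_df = pd.DataFrame(rows)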
Step58: Now from the results of Trials, you can pick the best performing Trial to deploy to Vertex AI Predictions
Step59: You can validate the model artifacts written to GCS by the training code by running the following command
Step60: [Optional] Submit hyperparameter tuning job using gcloud CLI
You can submit the hyperparameter tuning job to the Vertex AI training service using gcloud beta ai hp-tuning-jobs create. The gcloud command submits the hyperparameter tuning job and launches multiple trials, each running a worker pool based on the specified custom container image, according to the number of trials and the criteria set. The command requires the hyperparameter tuning job configuration to be provided as a YAML configuration file, along with the job name.
The following cell shows how to submit a hyperparameter tuning job on Vertex AI using gcloud CLI
Step65: Deploying
Deploying a PyTorch model on Vertex AI Predictions requires the use of a custom container that serves online predictions. You will deploy a container running PyTorch's TorchServe tool in order to serve predictions from a fine-tuned transformer model from Hugging Face Transformers for the sentiment analysis task. You can then use Vertex AI Predictions to classify the sentiment of input texts.
Deploying model on Vertex AI Predictions with custom container
To use a custom container to serve predictions from a PyTorch model, you must provide Vertex AI with a Docker container image that runs an HTTP server, such as TorchServe in this case. Please refer to documentation that describes the container image requirements to be compatible with Vertex AI Predictions.
Essentially, to deploy a PyTorch model on Vertex AI Predictions following are the steps
Step66: Generate target label to name file [Optional]
In the custom handler, we refer to a mapping file between target labels and their meaningful names that will be used to format the prediction response. Here we are mapping target label "0" as "Negative" and "1" as "Positive".
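A minimal sketch of that mapping file (the file name index_to_name.json follows the common TorchServe handler convention and is an assumption here):
import json

# Hypothetical mapping file consumed by the custom handler
with open("index_to_name.json", "w") as f:
    json.dump({"0": "Negative", "1": "Positive"}, f)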
Step67: Create custom container image to serve predictions
We will use Cloud Build to create the custom container image with the following build steps
Step68: Validate model artifact files in the Cloud Storage bucket
Step69: Copy files from Cloud Storage to local directory
Step70: Build the container image
Create a Dockerfile with TorchServe as base image
Step71: Build the docker image tagged with Container Registry (gcr.io) path
Step72: Run the container locally [Optional]
Before pushing the container image to Container Registry to use it with Vertex AI Predictions, you can run it as a container in your local environment to verify that the server works as expected
To run the container image as a container locally, run the following command
Step73: To send the container's server a health check, run the following command
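A hedged example of such a health check (the port is an assumption; use whichever host port was published when the container was started, TorchServe's default inference port being 8080):
! curl http://localhost:8080/ping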
Step74: If successful, the server returns the following response
Step75: This request uses a test sentence. If successful, the server returns prediction in below format
Step76: Deploying the serving container to Vertex AI Predictions
We create a model resource on Vertex AI and deploy the model to a Vertex AI Endpoints. You must deploy a model to an endpoint before using the model. The deployed model runs the custom container image to serve predictions.
Push the serving container to Container Registry
Push your container image with inference code and dependencies to your Container Registry
Step77: Initialize the Vertex AI SDK for Python
Step78: Create a Model resource with custom serving container
Step79: For more context on upload or importing a model, refer documentation
Create an Endpoint for Model with Custom Container
Step80: Deploy the Model to Endpoint
Deploying a model associates physical resources with the model so it can serve online predictions with low latency.
NOTE
Step81: Invoking the Endpoint with deployed Model using Vertex AI SDK to make predictions
Get the Endpoint id
Step82: Formatting input for online prediction
This notebook uses TorchServe's KServe-based inference API, which is also compatible with the Vertex AI Predictions request format. For online prediction requests, format the prediction input instances as JSON with base64 encoding as shown here
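A hedged sketch of that formatting (the data/b64 field layout follows the KServe-style request that TorchServe accepts; treat it as an assumption rather than the notebook's exact helper):
import base64
import json

text = "Sample review text to classify"
# Base64-encode the raw text and wrap it in the instances payload expected by the endpoint
instance = {"data": {"b64": base64.b64encode(text.encode("utf-8")).decode("utf-8")}}
request_body = json.dumps({"instances": [instance]})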
Step83: Sending an online prediction request
Format input text string and call prediction endpoint with formatted input request and get the response
Step85: [Optional] Make prediction requests using gcloud CLI
You can also call the Vertex AI Endpoint to make predictions using gcloud beta ai endpoints predict.
The following cell shows how to make a prediction request to Vertex AI Endpoints using gcloud CLI
Step86: Cleaning up
Cleaning up training and deployment resources
To clean up all Google Cloud resources used in this notebook, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial
Step87: Define clients for jobs, models and endpoints
Step88: Define functions to list the jobs, models and endpoints starting with APP_NAME defined earlier in the notebook
Step89: Deleting custom training jobs
Step90: Deleting hyperparameter tuning jobs
Step91: Undeploy models and Delete endpoints
Step92: Deleting models
Step93: Delete contents from the staging bucket
NOTE
Step94: Delete images from Container Registry
Deletes all the container images created in this tutorial with prefix defined by variable APP_NAME from the registry. All associated tags are also deleted. | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
!pip -q install {USER_FLAG} --upgrade transformers
!pip -q install {USER_FLAG} --upgrade datasets
!pip -q install {USER_FLAG} --upgrade tqdm
!pip -q install {USER_FLAG} --upgrade cloudml-hypertune
Explanation: Training, Tuning and Deploying a PyTorch Text Classification Model on Vertex AI
Fine-tuning pre-trained BERT model for sentiment classification task
Overview
This example is inspired from Token-Classification notebook and run_glue.py.
We will be fine-tuning bert-base-cased (pre-trained) model for sentiment classification task.
You can find the details about this model at Hugging Face Hub.
For more notebooks with state-of-the-art PyTorch/TensorFlow/JAX examples, you can explore Hugging Face Notebooks.
Dataset
We will be using IMDB movie review dataset from Hugging Face Datasets.
Objective
Learn how to build, train, tune and deploy PyTorch models on Vertex AI, highlighting the first-class support for training and deploying PyTorch models on the platform.
Table of Contents
This notebook covers the following sections:
Creating Notebooks instance
Training
Run Training Locally in the Notebook
Run Training Job on Vertex AI
Training with pre-built container
Training with custom container
Tuning
Run Hyperparameter Tuning job on Vertex AI
Deploying
Deploying model on Vertex AI Predictions with custom container
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
Vertex AI Workbench
Vertex AI Training
Vertex AI Predictions
Cloud Storage
Container Registry
Cloud Build [Optional]
Learn about Vertex AI Pricing, Cloud Storage Pricing and Cloud Build Pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Creating Notebooks instance on Google Cloud
This notebook assumes you are working with PyTorch 1.9 DLVM development environment with GPU runtime. You can create a Notebook instance using Google Cloud Console or gcloud command.
gcloud notebooks instances create example-instance \
--vm-image-project=deeplearning-platform-release \
--vm-image-family=pytorch-1-9-cu110-notebooks \
--machine-type=n1-standard-4 \
--location=us-central1-a \
--boot-disk-size=100 \
--accelerator-core-count=1 \
--accelerator-type=NVIDIA_TESLA_V100 \
--install-gpu-driver \
--network=default
NOTE: You must have GPU quota before you can create instances with GPUs. Check the quotas page to ensure that you have enough GPUs available in your project. If GPUs are not listed on the quotas page or you require additional GPU quota, request a quota increase. Free Trial accounts do not receive GPU quota by default.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
The Python dependencies required for this notebook (Transformers, Datasets, tqdm and cloudml-hypertune) will be installed in the Notebooks instance itself.
End of explanation
!pip -q install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: We will be using Vertex AI SDK for Python to interact with Vertex AI services. The high-level aiplatform library is designed to simplify common data science workflows by using wrapper classes and opinionated defaults.
Install Vertex AI SDK for Python
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # <---CHANGE THIS TO YOUR PROJECT
import os
# Get your Google Cloud project ID using google.auth
if not os.getenv("IS_TESTING"):
import google.auth
_, PROJECT_ID = google.auth.default()
print("Project ID: ", PROJECT_ID)
# validate PROJECT_ID
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
print(
f"Please set your project id before proceeding to next step. Currently it's set as {PROJECT_ID}"
)
Explanation: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable following APIs in your project required for running the tutorial
Vertex AI API
Cloud Storage API
Container Registry API
Cloud Build API
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud or google.auth.
End of explanation
from datetime import datetime
def get_timestamp():
return datetime.now().strftime("%Y%m%d%H%M%S")
TIMESTAMP = get_timestamp()
print(f"TIMESTAMP = {TIMESTAMP}")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # <---CHANGE THIS TO YOUR BUCKET
REGION = "us-central1" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = f"gs://{PROJECT_ID}aip-{get_timestamp()}"
print(f"PROJECT_ID = {PROJECT_ID}")
print(f"BUCKET_NAME = {BUCKET_NAME}")
print(f"REGION = {REGION}")
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import base64
import json
import os
import random
import sys
import google.auth
from google.cloud import aiplatform
from google.cloud.aiplatform import gapic as aip
from google.cloud.aiplatform import hyperparameter_tuning as hpt
from google.protobuf.json_format import MessageToDict
from IPython.display import HTML, display
import datasets
import numpy as np
import pandas as pd
import torch
import transformers
from datasets import ClassLabel, Sequence, load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
EvalPrediction, Trainer, TrainingArguments,
default_data_collator)
print(f"Notebook runtime: {'GPU' if torch.cuda.is_available() else 'CPU'}")
print(f"PyTorch version : {torch.__version__}")
print(f"Transformers version : {datasets.__version__}")
print(f"Datasets version : {transformers.__version__}")
APP_NAME = "finetuned-bert-classifier"
os.environ["TOKENIZERS_PARALLELISM"] = "false"
Explanation: Import libraries and define constants
End of explanation
datasets = load_dataset("imdb")
datasets
Explanation: Training
In this section, we will train a PyTorch model by fine-tuning pre-trained model from Hugging Face Transformers. We will train the model locally first and then on Vertex AI training service.
Training locally in the notebook
Loading the dataset
For this example we will use IMDB movie review dataset from Hugging Face Datasets for sentiment classification task. We use the Hugging Face Datasets library to download the data. This can be easily done with the function load_dataset.
End of explanation
print(
"Total # of rows in training dataset {} and size {:5.2f} MB".format(
datasets["train"].shape[0], datasets["train"].size_in_bytes / (1024 * 1024)
)
)
print(
"Total # of rows in test dataset {} and size {:5.2f} MB".format(
datasets["test"].shape[0], datasets["test"].size_in_bytes / (1024 * 1024)
)
)
Explanation: The datasets object itself is a DatasetDict, which contains one key for the training, validation and test set.
End of explanation
datasets["train"][0]
Explanation: To access an actual element, you need to select a split first, then give an index:
End of explanation
label_list = datasets["train"].unique("label")
label_list
Explanation: Using the unique method to extract label list. This will allow us to experiment with other datasets without hard-coding labels.
End of explanation
def show_random_elements(dataset, num_examples=2):
assert num_examples <= len(
dataset
), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset) - 1)
while pick in picks:
pick = random.randint(0, len(dataset) - 1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):
df[column] = df[column].transform(
lambda x: [typ.feature.names[i] for i in x]
)
display(HTML(df.to_html()))
show_random_elements(datasets["train"])
Explanation: To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset (automatically decoding the labels in passing).
End of explanation
batch_size = 16
max_seq_length = 128
model_name_or_path = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(
model_name_or_path,
use_fast=True,
)
# 'use_fast' ensure that we use fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.
Explanation: Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a pre-trained Hugging Face Transformers Tokenizer class, which tokenizes the inputs (including converting the tokens to their corresponding IDs in the pre-trained vocabulary) and puts them in the format the model expects, as well as generating the other inputs that the model requires.
To do all of this, we instantiate our tokenizer with the AutoTokenizer.from_pretrained method, which ensures:
we get a tokenizer that corresponds to the model architecture we want to use,
we download the vocabulary used when pre-training this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
End of explanation
tokenizer("Hello, this is one sentence!")
Explanation: You can check which types of models have a fast tokenizer available in the big table of models.
You can directly call this tokenizer on one sentence:
End of explanation
example = datasets["train"][4]
print(example)
tokenizer(
["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."],
is_split_into_words=True,
)
Explanation: Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in this tutorial if you're interested.
NOTE: If your inputs have already been split into words, you should pass the list of words to your tokenizer with the argument is_split_into_words=True:
End of explanation
# Dataset loading repeated here to make this cell idempotent
# Since we are over-writing datasets variable
datasets = load_dataset("imdb")
# Mapping labels to ids
# NOTE: We can extract this automatically but the `Unique` method of the datasets
# is not reporting the label -1 which shows up in the pre-processing.
# Hence the additional -1 term in the dictionary
label_to_id = {1: 1, 0: 0, -1: 0}
def preprocess_function(examples):
    """Tokenize the input example texts.
    NOTE: The same preprocessing step(s) will be applied
    at the time of inference as well.
    """
args = (examples["text"],)
result = tokenizer(
*args, padding="max_length", max_length=max_seq_length, truncation=True
)
# Map labels to IDs (not necessary for GLUE tasks)
if label_to_id is not None and "label" in examples:
result["label"] = [label_to_id[example] for example in examples["label"]]
return result
# apply preprocessing function to input examples
datasets = datasets.map(preprocess_function, batched=True, load_from_cache_file=True)
Explanation: Note that transformers are often pre-trained with sub-word tokenizers, meaning that even if your inputs have been split into words already, each of those words could be split again by the tokenizer. Let's look at an example of that:
End of explanation
model = AutoModelForSequenceClassification.from_pretrained(
model_name_or_path, num_labels=len(label_list)
)
Explanation: Fine-tuning the model
Now that our data is ready, we can download the pre-trained model and fine-tune it.
Fine Tuning involves taking a model that has already been trained for a given task and then tweaking the model for another similar task. Specifically, the tweaking involves replicating all the layers in the pre-trained model including weights and parameters, except the output layer. Then adding a new output classifier layer that predicts labels for the current task. The final step is to train the output layer from scratch, while the parameters of all layers from the pre-trained model are frozen. This allows learning from the pre-trained representations and "fine-tuning" the higher-order feature representations more relevant for the concrete task, such as analyzing sentiments in this case.
For the scenario in the notebook analyzing sentiments, the pre-trained BERT model already encodes a lot of information about the language as the model was trained on a large corpus of English data in a self-supervised fashion. Now we only need to slightly tune them using their outputs as features for the sentiment classification task. This means quicker development iteration on a much smaller dataset, instead of training a specific Natural Language Processing (NLP) model with a larger training dataset.
Since our task is sequence (sentence-level) classification, we use the AutoModelForSequenceClassification class. Like with the tokenizer, the from_pretrained method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem (which we can get from the features, as seen before):
End of explanation
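The explanation above mentions freezing the pre-trained layers. This notebook itself fine-tunes all layers, but as an optional, hedged variation (the .bert attribute assumes the checkpoint loads as a BertForSequenceClassification), the encoder could be frozen so that only the new classification head is trained:
# Optional sketch: freeze the pre-trained encoder, leaving only the classifier head trainable
for param in model.bert.parameters():
    param.requires_grad = False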
args = TrainingArguments(
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=1,
weight_decay=0.01,
output_dir="/tmp/cls",
)
Explanation: NOTE: The warning is telling us we are throwing away some weights (the vocab_transform and vocab_layer_norm layers) and randomly initializing some other (the pre_classifier and classifier layers). This is absolutely normal in this case, because we are removing the head used to pre-train the model on a masked language modeling objective and replacing it with a new head for which we don't have pre-trained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.
To instantiate a Trainer, we will need to define three more things. The most important is the TrainingArguments, which is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:
End of explanation
def compute_metrics(p: EvalPrediction):
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.argmax(preds, axis=1)
return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
Explanation: Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the batch_size defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay.
The last thing to define for our Trainer is how to compute the metrics from the predictions. You can define your custom compute_metrics function. It takes an EvalPrediction object (a namedtuple with a predictions and label_ids field) and has to return a dictionary string to float.
End of explanation
trainer = Trainer(
model,
args,
train_dataset=datasets["train"],
eval_dataset=datasets["test"],
data_collator=default_data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
Explanation: Now we create the Trainer object and we are almost ready to train.
End of explanation
trainer.train()
saved_model_local_path = "./models"
!mkdir ./models
trainer.save_model(saved_model_local_path)
Explanation: You can add callbacks to the trainer object to customize the behavior of the training loop such as early stopping, reporting metrics at the end of evaluation phase or taking any decisions. In the hyperparameter tuning section of this notebook, we add a callback to trainer for automating hyperparameter tuning process.
We can now fine-tune our model by just calling the train method:
End of explanation
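As a hedged sketch of such a callback (the cloudml-hypertune usage mirrors what the packaged trainer does later in this notebook, but the class below is illustrative rather than the exact implementation):
import hypertune
from transformers import TrainerCallback

class HPTuneCallback(TrainerCallback):
    # Reports the chosen evaluation metric to Vertex AI hyperparameter tuning after each evaluation
    def __init__(self, metric_tag, metric_value_key):
        self.metric_tag = metric_tag
        self.metric_value_key = metric_value_key
        self.hpt = hypertune.HyperTune()

    def on_evaluate(self, args, state, control, **kwargs):
        metrics = kwargs.get("metrics", {})
        if self.metric_value_key in metrics:
            self.hpt.report_hyperparameter_tuning_metric(
                hyperparameter_metric_tag=self.metric_tag,
                metric_value=metrics[self.metric_value_key],
                global_step=state.epoch,
            )

# trainer.add_callback(HPTuneCallback("accuracy", "eval_accuracy"))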
history = trainer.evaluate()
history
Explanation: The evaluate method allows you to evaluate again on the evaluation dataset or on another dataset:
End of explanation
model_name_or_path = "bert-base-cased"
label_text = {0: "Negative", 1: "Positive"}
saved_model_path = saved_model_local_path
def predict(input_text, saved_model_path):
# initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
# preprocess and encode input text
tokenizer_args = (input_text,)
predict_input = tokenizer(
*tokenizer_args,
padding="max_length",
max_length=128,
truncation=True,
return_tensors="pt",
)
# load trained model
loaded_model = AutoModelForSequenceClassification.from_pretrained(saved_model_path)
# get predictions
output = loaded_model(predict_input["input_ids"])
# return labels
label_id = torch.argmax(*output.to_tuple(), dim=1)
print(f"Review text: {input_text}")
print(f"Sentiment : {label_text[label_id.item()]}\n")
# example #1
review_text = (
    "Jaw dropping visual affects and action! One of the best I have seen to date."
)
predict_input = predict(review_text, saved_model_path)
# example #2
review_text = "Take away the CGI and the A-list cast and you end up with film with less punch."
predict_input = predict(review_text, saved_model_path)
Explanation: To get the other metrics computed such as precision, recall or F1 score for each category, we can apply the same function as before on the result of the predict method.
Run predictions locally with sample examples
Using the trained model, we can predict the sentiment label for an input text after applying the preprocessing function that was used during the training. We will run the predictions locally in the notebook and later show how you can deploy the model to an endpoint using TorchServe on Vertex AI Predictions.
End of explanation
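As a hedged illustration of that step (scikit-learn is an extra dependency assumed to be available in the environment):
from sklearn.metrics import classification_report

# trainer.predict returns (predictions, label_ids, metrics) for the given dataset
predictions, label_ids, _ = trainer.predict(datasets["test"])
preds = np.argmax(predictions, axis=1)
print(classification_report(label_ids, preds, target_names=["Negative", "Positive"]))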
PRE_BUILT_TRAINING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-7:latest"
)
PYTHON_PACKAGE_APPLICATION_DIR = "python_package"
source_package_file_name = f"{PYTHON_PACKAGE_APPLICATION_DIR}/dist/trainer-0.1.tar.gz"
python_package_gcs_uri = (
f"{BUCKET_NAME}/pytorch-on-gcp/{APP_NAME}/train/python_package/trainer-0.1.tar.gz"
)
python_module_name = "trainer.task"
Explanation: Training on Vertex AI
You can do local experimentation on your Notebooks instance. However, for larger datasets or models, vertically scaled compute or horizontally distributed training is often required. The most effective way to perform this task is to leverage the Vertex AI custom training service, for the following reasons:
Automatically provision and de-provision resources: A training job on Vertex AI automatically provisions computing resources, performs the training task and ensures deletion of the compute resources once the training job is finished.
Reusability and portability: You can package training code with its parameters and dependencies into a container and create a portable component. This container can then be run with different scenarios such as hyperparameter tuning, different data sources and more.
Training at scale: You can run a distributed training job with Vertex AI, allowing you to train models in a cluster across multiple nodes in parallel, resulting in faster training time.
Logging and Monitoring: The training service logs messages from the job to Cloud Logging and can be monitored while the job is running.
In this part of the notebook, we show how to scale the training job with Vertex AI by packaging the code and creating a training pipeline to orchestrate the training job. There are three steps to run a training job using the Vertex AI custom training service:
STEP 1: Determine training code structure - Packaging as a Python source distribution or as a custom container image
STEP 2: Choose a custom training method - custom job, hyperparameter tuning job or training pipeline
STEP 3: Run the training job
Custom training methods
There are three types of Vertex AI resources you can create to train custom models on Vertex AI:
Custom jobs: With a custom job you configure the settings to run your training code on Vertex AI such as worker pool specs - machine types, accelerators, Python training spec or custom container spec.
Hyperparameter tuning jobs: Hyperparameter tuning jobs automate tuning of hyperparameters of your model based on the criteria you configure such as goal/metric to optimize, hyperparameters values and number of trials to run.
Training pipelines: Orchestrates custom training jobs or hyperparameter tuning jobs with additional steps after the training job is successfully completed.
Please refer to the documentation for further details.
In this notebook, we will cover Custom Jobs and Hyperparameter tuning jobs.
Packaging the training application
Before running the training job on Vertex AI, the training application code and any dependencies must be packaged and uploaded to Cloud Storage bucket or Container Registry or Artifact Registry that your Google Cloud project can access. This sections shows how to package and stage your application in the cloud.
There are two ways to package your application and dependencies and train on Vertex AI:
Create a Python source distribution with the training code and dependencies to use with a pre-built containers on Vertex AI
Use custom containers to package dependencies using Docker containers
This notebook shows both packaging options to run a custom training job on Vertex AI.
Recommended Training Application Structure
You can structure your training application in any way you like. However, the following structure is commonly used in Vertex AI samples, and having your project's organization be similar to the samples can make it easier for you to follow the samples.
We have two directories, python_package and custom_container, showing both packaging approaches. The README.md file inside each directory has details on the directory structure and instructions on how to run the application locally and in the cloud.
.
├── custom_container
│ ├── Dockerfile
│ ├── README.md
│ ├── scripts
│ │ └── train-cloud.sh
│ └── trainer -> ../python_package/trainer/
├── python_package
│ ├── README.md
│ ├── scripts
│ │ └── train-cloud.sh
│ ├── setup.py
│ └── trainer
│ ├── __init__.py
│ ├── experiment.py
│ ├── metadata.py
│ ├── model.py
│ ├── task.py
│ └── utils.py
└── pytorch-text-classification-vertex-ai-train-tune-deploy.ipynb --> This notebook
Main project directory contains your setup.py file or Dockerfile with the dependencies.
Use a subdirectory named trainer to store your main application module and scripts to submit training jobs locally or cloud
Inside trainer directory:
task.py - Main application module 1) initializes and parse task arguments (hyper parameters), and 2) entry point to the trainer
model.py - Includes function to create model with a sequence classification head from a pre-trained model.
experiment.py - Runs the model training and evaluation experiment, and exports the final model.
metadata.py - Defines metadata for classification task such as predefined model dataset name, target labels
utils.py - Includes utility functions such as data input functions to read data, save model to GCS bucket
Run Custom Job on Vertex AI Training with a pre-built container
Vertex AI provides Docker container images that can be run as pre-built containers for custom training. These containers include common dependencies used in training code based on the Machine Learning framework and framework version.
In this notebook, we are using Hugging Face Datasets and fine tuning a transformer model from Hugging Face Transformers Library for sentiment analysis task using PyTorch. We will use pre-built container for PyTorch and package the training application code by adding standard Python dependencies - transformers, datasets and tqdm - in the setup.py file.
Initialize the variables to define pre-built container image, location of training application and training module.
End of explanation
%%writefile ./{PYTHON_PACKAGE_APPLICATION_DIR}/setup.py
from setuptools import find_packages
from setuptools import setup
import setuptools
from distutils.command.build import build as _build
import subprocess
REQUIRED_PACKAGES = [
'transformers',
'datasets',
'tqdm',
'cloudml-hypertune'
]
setup(
name='trainer',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='Vertex AI | Training | PyTorch | Text Classification | Python Package'
)
Explanation: Following is the setup.py file for the training application. The find_packages() function inside setup.py includes the trainer directory in the package as it contains __init__.py which tells Python Setuptools to include all subdirectories of the parent directory as dependencies.
End of explanation
!cd {PYTHON_PACKAGE_APPLICATION_DIR} && python3 setup.py sdist --formats=gztar
Explanation: Run the following command to create a source distribution, dist/trainer-0.1.tar.gz:
End of explanation
!gsutil cp {source_package_file_name} {python_package_gcs_uri}
Explanation: Now upload the source distribution with training application to Cloud Storage bucket
End of explanation
!gsutil ls -l {python_package_gcs_uri}
Explanation: Validate the source distribution exists on Cloud Storage bucket
End of explanation
!cd {PYTHON_PACKAGE_APPLICATION_DIR} && python -m trainer.task
Explanation: [Optional] Run custom training job locally
Before submitting the job to cloud, you can run the training job locally by calling the trainer.task module directly
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Run custom training job on Vertex AI
We use Vertex AI SDK for Python to create and submit training job to the Vertex AI training service.
Initialize the Vertex AI SDK for Python
End of explanation
print(f"APP_NAME={APP_NAME}")
print(
f"PRE_BUILT_TRAINING_CONTAINER_IMAGE_URI={PRE_BUILT_TRAINING_CONTAINER_IMAGE_URI}"
)
print(f"python_package_gcs_uri={python_package_gcs_uri}")
print(f"python_module_name={python_module_name}")
JOB_NAME = f"{APP_NAME}-pytorch-pkg-ar-{get_timestamp()}"
print(f"JOB_NAME={JOB_NAME}")
job = aiplatform.CustomPythonPackageTrainingJob(
display_name=f"{JOB_NAME}",
python_package_gcs_uri=python_package_gcs_uri,
python_module_name=python_module_name,
container_uri=PRE_BUILT_TRAINING_CONTAINER_IMAGE_URI,
)
training_args = ["--num-epochs", "2", "--model-name", "finetuned-bert-classifier"]
model = job.run(
replica_count=1,
machine_type="n1-standard-8",
accelerator_type="NVIDIA_TESLA_V100",
accelerator_count=1,
args=training_args,
sync=False,
)
Explanation: Configure and submit Custom Job to Vertex AI Training service
Configure a Custom Job with the pre-built container image for PyTorch and training code packaged as Python source distribution.
NOTE: When using Vertex AI SDK for Python for submitting a training job, it creates a Training Pipeline which launches the Custom Job on Vertex AI Training service.
End of explanation
job_response = MessageToDict(job._gca_resource._pb)
gcs_model_artifacts_uri = job_response["trainingTaskInputs"]["baseOutputDirectory"][
"outputUriPrefix"
]
print(f"Model artifacts are available at {gcs_model_artifacts_uri}")
!gsutil ls -lr $gcs_model_artifacts_uri/
Explanation: Monitoring progress of the Custom Job
You can monitor the custom job launched from Cloud Console following the link here or use gcloud CLI command gcloud beta ai custom-jobs stream-logs
Validate the model artifacts written to GCS by the training code after the job completes successfully
End of explanation
!cd python_package && ./scripts/train-cloud.sh
Explanation: [Optional] Submit custom job using gcloud CLI using Python source distribution
You can submit the training job to Vertex AI training service using gcloud beta ai custom-jobs create. gcloud command stages your training application on GCS bucket and submits the training job.
gcloud beta ai custom-jobs create \
--display-name=${JOB_NAME} \
--region ${REGION} \
--python-package-uris=${PACKAGE_PATH} \
--worker-pool-spec=replica-count=1,machine-type='n1-standard-8',accelerator-type='NVIDIA_TESLA_V100',accelerator-count=1,executor-image-uri=${IMAGE_URI},python-module='trainer.task',local-package-path="../python_package/" \
--args="--model-name","finetuned-bert-classifier","--job-dir",$JOB_DIR
worker-pool-spec parameter defines the worker pool configuration used by the custom job. Following are the fields within worker-pool-spec:
Set the executor-image-uri to us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-7:latest for training on pre-built PyTorch v1.7 image for GPU
Set the local-package-path to the path to the training code
Set the python-module to the trainer.task which is the main module to start your application
Set the accelerator-type and machine-type to set the compute type to run the application
Refer documentation for further details.
The script at ./python_package/scripts/train-cloud.sh contains the gcloud commands to launch the custom job and monitor the logs.
End of explanation
%%writefile ./custom_container/Dockerfile
# Use pytorch GPU base image
FROM us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-10:latest
# set working directory
WORKDIR /app
# Install required packages
RUN pip install google-cloud-storage transformers datasets tqdm cloudml-hypertune
# Copies the trainer code to the docker image.
COPY ./trainer/__init__.py /app/trainer/__init__.py
COPY ./trainer/experiment.py /app/trainer/experiment.py
COPY ./trainer/utils.py /app/trainer/utils.py
COPY ./trainer/metadata.py /app/trainer/metadata.py
COPY ./trainer/model.py /app/trainer/model.py
COPY ./trainer/task.py /app/trainer/task.py
# Set up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
Explanation: In addition to Cloud Console, you can monitor job progress by streaming logs using gcloud CLI by passing the job id:
gcloud ai custom-jobs stream-logs <job_id> --region=$REGION
You can validate the model artifacts written to GCS by the training code by running the following command:
!gsutil ls -l $JOB_DIR/
Run Custom Job on Vertex AI Training with custom container
To create a training job with custom container, you define a Dockerfile to install or add the dependencies required for the training job. Then, you build and test your Docker image locally to verify, push the image to Container Registry and submit a Custom Job to Vertex AI Training service.
Build your container using Dockerfile with Training Code and Dependencies
In the previous section, we wrapped the training application code and dependencies as Python source distribution. An alternate way to package the training application and dependencies is to create a custom container using Dockerfile. We create a Dockerfile with a pre-built PyTorch container image provided by Vertex AI as the base image, install the dependencies - transformers, datasets , tqdm and cloudml-hypertune and copy the training application code.
End of explanation
CUSTOM_TRAIN_IMAGE_URI = f"gcr.io/{PROJECT_ID}/pytorch_gpu_train_{APP_NAME}"
!cd ./custom_container/ && docker build -f Dockerfile -t $CUSTOM_TRAIN_IMAGE_URI ../python_package
Explanation: Build the image and tag the Container Registry path (gcr.io) that you will push to.
End of explanation
!docker run --gpus all -it --rm $CUSTOM_TRAIN_IMAGE_URI
Explanation: [Optional] Run Training Job Locally with Custom Container
Run the container locally to verify the training code before submitting it to Vertex AI. When running on a machine with GPUs, you can pass the --gpus all command-line flag.
End of explanation
!docker push $CUSTOM_TRAIN_IMAGE_URI
Explanation: Run custom training job on Vertex AI with Custom Container
Before submitting the training job to Vertex AI, push the custom container image to Google Cloud Container Registry and then submit the training job to Vertex AI.
NOTE: Container Registry is a central repository to store, manage, and secure your Docker container images.
Push the container to Container Registry
Push your container image with training application code and dependencies to your Container Registry.
End of explanation
!gcloud container images describe $CUSTOM_TRAIN_IMAGE_URI
Explanation: Validate the custom container image in Container Registry
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize the Vertex AI SDK for Python
End of explanation
JOB_NAME = f"{APP_NAME}-pytorch-cstm-cntr-{get_timestamp()}"
print(f"APP_NAME={APP_NAME}")
print(f"CUSTOM_TRAIN_IMAGE_URI={CUSTOM_TRAIN_IMAGE_URI}")
print(f"JOB_NAME={JOB_NAME}")
# configure the job with container image spec
job = aiplatform.CustomContainerTrainingJob(
display_name=f"{JOB_NAME}", container_uri=f"{CUSTOM_TRAIN_IMAGE_URI}"
)
# define training code arguments
training_args = ["--num-epochs", "2", "--model-name", "finetuned-bert-classifier"]
# submit the custom job to Vertex AI training service
model = job.run(
replica_count=1,
machine_type="n1-standard-8",
accelerator_type="NVIDIA_TESLA_V100",
accelerator_count=1,
args=training_args,
sync=False,
)
Explanation: Configure and submit Custom Job to Vertex AI Training service
Configure a Custom Job with the custom container image with training code and other dependencies
NOTE: When using Vertex AI SDK for Python for submitting a training job, it creates a Training Pipeline which launches the Custom Job to train on Vertex AI Training.
End of explanation
!cd custom_container && ./scripts/train-cloud.sh
Explanation: Monitoring progress of the Custom Job
You can monitor the custom job launched from the Cloud Console by following the link here, or use the gcloud CLI command gcloud beta ai custom-jobs stream-logs.
[Optional] Submit Custom Job using gcloud CLI using custom container
You can submit the training job to the Vertex AI training service using gcloud beta ai custom-jobs create with a custom container spec. The gcloud command submits the training job and launches a worker pool that runs the specified custom container image.
gcloud beta ai custom-jobs create \
--display-name=${JOB_NAME} \
--region ${REGION} \
--worker-pool-spec=replica-count=1,machine-type='n1-standard-8',accelerator-type='NVIDIA_TESLA_V100',accelerator-count=1,container-image-uri=${CUSTOM_TRAIN_IMAGE_URI} \
--args="--model-name","finetuned-bert-classifier","--job-dir",$JOB_DIR
worker-pool-spec parameter defines the worker pool configuration used by the custom job. Following are the fields within worker-pool-spec:
Set the container-image-uri to the custom container image pushed to Google Cloud Container Registry for training
Set the accelerator-type and machine-type to set the compute type to run the application
Refer to the documentation for further details.
The script at ./custom_container/scripts/train-cloud.sh contains the gcloud commands to launch the custom job and monitor the logs.
End of explanation
!gcloud container images describe $CUSTOM_TRAIN_IMAGE_URI
Explanation: In addition to Cloud Console, you can monitor job progress by streaming logs using gcloud CLI by passing the job id:
gcloud ai custom-jobs stream-logs <job_id> --region=$REGION
You can validate the model artifacts written to GCS by the training code by running the following command:
!gsutil ls -l $JOB_DIR/
Hyperparameter Tuning
The training application code for fine-tuning a transformer model for the sentiment analysis task uses hyperparameters such as the learning rate and weight decay. These hyperparameters control the behavior of the training algorithm and can have a significant effect on the performance of the resulting model. This part of the notebook shows how you can automate tuning these hyperparameters with the Vertex AI Training service.
We submit a Hyperparameter Tuning job to the Vertex AI Training service by packaging the training application code and dependencies in a Docker container and pushing the container to Google Container Registry, similar to running a Custom Job on Vertex AI with a custom container.
How does hyperparameter tuning work in Vertex AI?
Following are the high level steps involved in running a Hyperparameter Tuning job on Vertex AI Training service:
You define the hyperparameters to tune the model along with the metric (or goal) to optimize
Vertex AI runs multiple trials of your training application with the hyperparameters and limits you specified - maximum number of trials to run and number of parallel trials.
Vertex AI keeps track of the results from each trial and makes adjustments for subsequent trials. This requires your training application to report the metrics to Vertex AI using the Python package cloudml-hypertune.
When the job is finished, get the summary of all the trials with the most effective configuration of values based on the criteria you configured
Refer to the Vertex AI documentation to understand how to configure and select hyperparameters for tuning, configure tuning strategy and how Vertex AI optimizes the hyperparameter tuning jobs. The default tuning strategy uses results of previous trials to inform the assignment of values in subsequent trials.
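As a minimal illustration of the reporting contract described above — the metric tag, value, and step below are placeholders, and the next section shows the actual callback this notebook uses — a trial reports its optimization metric with the cloudml-hypertune helper:
```
import hypertune  # provided by the cloudml-hypertune package

hpt = hypertune.HyperTune()

# Report the metric that Vertex AI should optimize for this trial.
# The tag, value, and step are placeholders for illustration.
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag="accuracy",
    metric_value=0.91,
    global_step=1)
```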
Changes to training application code for hyperparameter tuning
There are a few requirements specific to hyperparameter tuning in Vertex AI:
To pass the hyperparameter values to the training code, you must define a command-line argument in the main training module for each tuned hyperparameter. Use the value passed in those arguments to set the corresponding hyperparameter in the training application's code.
You must pass metrics from the training application to Vertex AI so it can evaluate the effectiveness of each trial. You can use the cloudml-hypertune Python package to report metrics.
Previously, in the training application code to fine-tune the transformer model for sentiment analysis task, we instantiated Trainer with hyperparameters passed as training arguments (training_args).
```
# Estimator arguments
args_parser.add_argument(
'--learning-rate',
help='Learning rate value for the optimizers.',
default=2e-5,
type=float)
args_parser.add_argument(
'--weight-decay',
help=
The factor by which the learning rate should decay by the end of the
training.
decayed_learning_rate =
learning_rate * decay_rate ^ (global_step / decay_steps)
If set to 0 (default), then no decay will occur.
If set to 0.5, then the learning rate should reach 0.5 of its original
value at the end of the training.
Note that decay_steps is set to train_steps.
,
default=0.01,
type=float)
# Enable hyperparameter
args_parser.add_argument(
'--hp-tune',
default="n",
help='Enable hyperparameter tuning. Valida values are: "y" - enable, "n" - disable')
```
These hyperparameters are passed as command line arguments to the training module trainer.task which are then passed to the training_args. Refer to ./python_package/trainer module for training application code.
```
# set training arguments
training_args = TrainingArguments(
evaluation_strategy="epoch",
learning_rate=args.learning_rate,
per_device_train_batch_size=args.batch_size,
per_device_eval_batch_size=args.batch_size,
num_train_epochs=args.num_epochs,
weight_decay=args.weight_decay,
output_dir=os.path.join("/tmp", args.model_name)
)
# initialize our Trainer
trainer = Trainer(
model,
training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
data_collator=default_data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
```
To report metrics to Vertex AI when hyperparameter tuning is enabled, we call the cloudml-hypertune Python package from a callback added to the trainer, which runs after each evaluation phase. The trainer object passes the metrics computed in the last evaluation phase to the callback, and the hypertune library reports them to Vertex AI for evaluating trials.
```
add hyperparameter tuning callback to report metrics when enabled
if args.hp_tune == "y":
trainer.add_callback(HPTuneCallback("accuracy", "eval_accuracy"))
class HPTuneCallback(TrainerCallback):
A custom callback class that reports a metric to hypertuner
at the end of each epoch.
def __init__(self, metric_tag, metric_value):
super(HPTuneCallback, self).__init__()
self.metric_tag = metric_tag
self.metric_value = metric_value
self.hpt = hypertune.HyperTune()
def on_evaluate(self, args, state, control, **kwargs):
print(f"HP metric {self.metric_tag}={kwargs['metrics'][self.metric_value]}")
self.hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag=self.metric_tag,
metric_value=kwargs['metrics'][self.metric_value],
global_step=state.epoch)
```
Run Hyperparameter Tuning Job on Vertex AI
Before submitting the hyperparameter tuning job to Vertex AI, push the custom container image with the training application to Google Cloud Container Registry. We will be using the same image used for running the Custom Job on the Vertex AI Training service.
Validate the custom container image in Container Registry
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize the Vertex AI SDK for Python
End of explanation
JOB_NAME = f"{APP_NAME}-pytorch-hptune-{get_timestamp()}"
print(f"APP_NAME={APP_NAME}")
print(f"CUSTOM_TRAIN_IMAGE_URI={CUSTOM_TRAIN_IMAGE_URI}")
print(f"JOB_NAME={JOB_NAME}")
Explanation: Configure and submit Hyperparameter Tuning Job to Vertex AI Training service
Configure a Hyperparameter Tuning Job with the custom container image with training code and other dependencies.
When configuring and submitting a Hyperparameter Tuning job, you need to attach a Custom Job definition with worker pool specs defining machine type, accelerators and URI for container image representing the custom container.
End of explanation
training_args = [
"--num-epochs",
"2",
"--model-name",
"finetuned-bert-classifier",
"--hp-tune",
"y",
]
Explanation: Define the training arguments with the hp-tune argument set to y so that the training application code can report metrics to Vertex AI
End of explanation
# The spec of the worker pools including machine type and Docker image
worker_pool_specs = [
{
"machine_spec": {
"machine_type": "n1-standard-8",
"accelerator_type": "NVIDIA_TESLA_V100",
"accelerator_count": 1,
},
"replica_count": 1,
"container_spec": {"image_uri": CUSTOM_TRAIN_IMAGE_URI, "args": training_args},
}
]
custom_job = aiplatform.CustomJob(
display_name=JOB_NAME, worker_pool_specs=worker_pool_specs
)
Explanation: Create a CustomJob with worker pool specs that define the machine type, accelerators, and the custom container spec with the training application code
End of explanation
# Dictionary representing parameters to optimize.
# The dictionary key is the parameter_id, which is passed into your training
# job as a command line argument,
# And the dictionary value is the parameter specification of the metric.
parameter_spec = {
"learning-rate": hpt.DoubleParameterSpec(min=1e-6, max=0.001, scale="log"),
"weight-decay": hpt.DiscreteParameterSpec(
values=[0.0001, 0.001, 0.01, 0.1], scale=None
),
}
Explanation: Define the parameter_spec as a Python dictionary object with the search space, i.e. the parameters to search and optimize. The key is the hyperparameter name passed as a command-line argument to the training code, and the value is the parameter specification. The spec requires specifying the hyperparameter data type as an instance of a parameter value specification.
Refer to the documentation on selecting the hyperparameters to tune and how to define the parameter specification.
End of explanation
# Dictionary representing metrics to optimize.
# The dictionary key is the metric_id, which is reported by your training job,
# And the dictionary value is the optimization goal of the metric.
metric_spec = {"accuracy": "maximize"}
Explanation: Define the metric_spec with name and goal of metric to optimize. The goal specifies whether you want to tune your model to maximize or minimize the value of this metric.
End of explanation
hp_job = aiplatform.HyperparameterTuningJob(
display_name=JOB_NAME,
custom_job=custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=5,
parallel_trial_count=2,
search_algorithm=None,
)
model = hp_job.run(sync=False)
Explanation: Configure and submit a Hyperparameter Tuning Job with the Custom Job, metric spec, parameter spec and trial limits.
max_trial_count: Maximum number of trials run by the service. We recommend starting with a smaller value to understand the impact of the chosen hyperparameters before scaling up.
parallel_trial_count: Number of trials to run in parallel. We recommend starting with a smaller value, because Vertex AI uses the results of previous trials to inform the assignment of values in subsequent trials; a large number of parallel trials means those trials start without the benefit of results from trials that are still running.
search_algorithm: Search algorithm specified for the Study. If you do not specify an algorithm, Vertex AI by default applies Bayesian optimization to search the parameter space for the optimal solution.
Refer to the documentation to understand the hyperparameter training job configuration.
End of explanation
def get_trials_as_df(trials):
results = []
for trial in trials:
row = {}
t = MessageToDict(trial._pb)
# print(t)
row["Trial ID"], row["Status"], row["Start time"], row["End time"] = (
t["id"],
t["state"],
t["startTime"],
t.get("endTime", None),
)
for param in t["parameters"]:
row[param["parameterId"]] = param["value"]
if t["state"] == "SUCCEEDED":
row["Training step"] = t["finalMeasurement"]["stepCount"]
for metric in t["finalMeasurement"]["metrics"]:
row[metric["metricId"]] = metric["value"]
results.append(row)
_df = pd.DataFrame(results)
return _df
df_trials = get_trials_as_df(hp_job.trials)
df_trials
Explanation: Monitoring progress of the Hyperparameter Tuning Job
You can monitor the hyperparameter tuning job launched from the Cloud Console by following the link here, or use the gcloud CLI command gcloud beta ai custom-jobs stream-logs.
After the job is finished, you can view and format the results of the hyperparameter tuning Trials (run by Vertex AI Training service) as a Pandas dataframe
End of explanation
# get trial id of the best run from the Trials
best_trial_id = df_trials.loc[df_trials["accuracy"].idxmax()]["Trial ID"]
# get base output directory where artifacts are saved
base_output_dir = MessageToDict(hp_job._gca_resource._pb)["trialJobSpec"][
"baseOutputDirectory"
]["outputUriPrefix"]
# get the model artifacts of the best trial id
best_model_artifact_uri = f"{base_output_dir}/{best_trial_id}"
print(
f"Model artifacts from the Hyperparameter Tuning Job with bbest trial id {best_trial_id} are located at {best_model_artifact_uri}"
)
Explanation: Now from the results of Trials, you can pick the best performing Trial to deploy to Vertex AI Predictions
End of explanation
!gsutil ls -r $best_model_artifact_uri/
Explanation: You can validate the model artifacts written to GCS by the training code by running the following command:
End of explanation
%%bash -s $BUCKET_NAME $APP_NAME
# ========================================================
# set job parameters
# ========================================================
# PROJECT_ID: Change to your project id
PROJECT_ID=$(gcloud config list --format 'value(core.project)')
# set job display name
JOB_PREFIX="finetuned-bert-classifier"
JOB_NAME=${JOB_PREFIX}-pytorch-hptune-$(date +%Y%m%d%H%M%S)
echo "Launching hyperparameter tuning job with display name as "$JOB_NAME
# BUCKET_NAME is a required parameter to run the cell.
BUCKET_NAME=$1
# APP_NAME: get application name
APP_NAME=$2
# JOB_DIR: Where to store prepared package and upload output model.
JOB_DIR=${BUCKET_NAME}/${JOB_PREFIX}/model/${JOB_NAME}
# custom container image URI
CUSTOM_TRAIN_IMAGE_URI='gcr.io/'${PROJECT_ID}'/pytorch_gpu_train_'${APP_NAME}
# ========================================================
# create hyperparameter tuning configuration file
# ========================================================
cat << EOF > ./python_package/hptuning_job.yaml
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: learning-rate
scaleType: UNIT_LOG_SCALE
doubleValueSpec:
minValue: 0.000001
maxValue: 0.001
- parameterId: weight-decay
scaleType: SCALE_TYPE_UNSPECIFIED
discreteValueSpec:
values: [
0.0001,
0.001,
0.01,
0.1
]
measurementSelectionType: BEST_MEASUREMENT
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: n1-standard-8
acceleratorType: NVIDIA_TESLA_V100
acceleratorCount: 1
replicaCount: 1
containerSpec:
imageUri: $CUSTOM_TRAIN_IMAGE_URI
args: ["--num-epochs", "2", "--model-name", "finetuned-bert-classifier", "--hp-tune", "y"]
baseOutputDirectory:
outputUriPrefix: $JOB_DIR/
EOF
# ========================================================
# submit hyperparameter tuning job
# ========================================================
gcloud beta ai hp-tuning-jobs create \
--config ./python_package/hptuning_job.yaml \
--display-name $JOB_NAME \
--algorithm algorithm-unspecified \
--max-trial-count 5 \
--parallel-trial-count 2 \
--region=us-central1
Explanation: [Optional] Submit hyperparameter tuning job using gcloud CLI
You can submit the hyperparameter tuning job to the Vertex AI training service using gcloud beta ai hp-tuning-jobs create. The gcloud command submits the hyperparameter tuning job and launches trials in worker pools that run the specified custom container image, subject to the number of trials and the criteria you set. The command requires the hyperparameter tuning job configuration to be provided as a YAML configuration file, along with a job display name.
The following cell shows how to submit a hyperparameter tuning job on Vertex AI using gcloud CLI:
End of explanation
%%writefile predictor/custom_handler.py
import os
import json
import logging
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler
logger = logging.getLogger(__name__)
class TransformersClassifierHandler(BaseHandler):
The handler takes an input string and returns the classification text
based on the serialized transformers checkpoint.
def __init__(self):
super(TransformersClassifierHandler, self).__init__()
self.initialized = False
def initialize(self, ctx):
Loads the model.pt file and initialized the model object.
Instantiates Tokenizer for preprocessor to use
Loads labels to name mapping file for post-processing inference response
self.manifest = ctx.manifest
properties = ctx.system_properties
model_dir = properties.get("model_dir")
self.device = torch.device("cuda:" + str(properties.get("gpu_id")) if torch.cuda.is_available() else "cpu")
# Read model serialize/pt file
serialized_file = self.manifest["model"]["serializedFile"]
model_pt_path = os.path.join(model_dir, serialized_file)
if not os.path.isfile(model_pt_path):
raise RuntimeError("Missing the model.pt or pytorch_model.bin file")
# Load model
self.model = AutoModelForSequenceClassification.from_pretrained(model_dir)
self.model.to(self.device)
self.model.eval()
logger.debug('Transformer model from path {0} loaded successfully'.format(model_dir))
# Ensure to use the same tokenizer used during training
self.tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
# Read the mapping file, index to object name
mapping_file_path = os.path.join(model_dir, "index_to_name.json")
if os.path.isfile(mapping_file_path):
with open(mapping_file_path) as f:
self.mapping = json.load(f)
else:
logger.warning('Missing the index_to_name.json file. Inference output will default.')
self.mapping = {"0": "Negative", "1": "Positive"}
self.initialized = True
def preprocess(self, data):
Preprocessing input request by tokenizing
Extend with your own preprocessing steps as needed
text = data[0].get("data")
if text is None:
text = data[0].get("body")
sentences = text.decode('utf-8')
logger.info("Received text: '%s'", sentences)
# Tokenize the texts
tokenizer_args = ((sentences,))
inputs = self.tokenizer(*tokenizer_args,
padding='max_length',
max_length=128,
truncation=True,
return_tensors = "pt")
return inputs
def inference(self, inputs):
Predict the class of a text using a trained transformer model.
prediction = self.model(inputs['input_ids'].to(self.device))[0].argmax().item()
if self.mapping:
prediction = self.mapping[str(prediction)]
logger.info("Model predicted: '%s'", prediction)
return [prediction]
def postprocess(self, inference_output):
return inference_output
Explanation: Deploying
Deploying a PyTorch model on Vertex AI Predictions requires a custom container that serves online predictions. You will deploy a container running PyTorch's TorchServe tool in order to serve predictions from a transformer model from Hugging Face Transformers, fine-tuned for the sentiment analysis task. You can then use Vertex AI Predictions to classify the sentiment of input texts.
Deploying model on Vertex AI Predictions with custom container
To use a custom container to serve predictions from a PyTorch model, you must provide Vertex AI with a Docker container image that runs an HTTP server, such as TorchServe in this case. Please refer to documentation that describes the container image requirements to be compatible with Vertex AI Predictions.
Essentially, to deploy a PyTorch model on Vertex AI Predictions following are the steps:
Package the trained model artifacts including default or custom handlers by creating an archive file using Torch model archiver
Build a custom container compatible with Vertex AI Predictions to serve the model using Torchserve
Upload the model with custom container image to serve predictions as a Vertex AI Model resource
Create a Vertex AI Endpoint and deploy the model resource
Create a custom model handler to handle prediction requests
Predicting the sentiment of input text with the fine-tuned transformer model requires pre-processing the input text and post-processing the model output by mapping the target label (1/0) to its name (Positive/Negative). We create a custom handler script that is packaged with the model artifacts; TorchServe executes this code when it serves requests.
Custom handler script does the following:
Pre-process input text before sending it to the model for inference
Customize how the model is invoked for inference
Post-process output from the model before sending back a response
Please refer to the TorchServe documentation for defining a custom handler.
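To make the control flow explicit, the rough sketch below shows how the base handler chains these hooks for each request; this is illustrative pseudocode of the flow, not TorchServe's actual source.
```
# Illustrative flow only: the base handler's handle() roughly chains the
# three methods overridden in the custom handler above.
def handle(self, data, context):
    model_input = self.preprocess(data)          # tokenize the request text
    model_output = self.inference(model_input)   # run the fine-tuned model
    return self.postprocess(model_output)        # return the mapped label name
```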
End of explanation
%%writefile ./predictor/index_to_name.json
{
"0": "Negative",
"1": "Positive"
}
Explanation: Generate target label to name file [Optional]
In the custom handler, we refer to a mapping file between target labels and their meaningful names that will be used to format the prediction response. Here we are mapping target label "0" as "Negative" and "1" as "Positive".
End of explanation
GCS_MODEL_ARTIFACTS_URI = best_model_artifact_uri
Explanation: Create custom container image to serve predictions
We build the custom container image with the following steps:
Download model artifacts
Download model artifacts that were saved as part of the training (or hyperparameter tuning) job from Cloud Storage to local directory
End of explanation
!gsutil ls -r $GCS_MODEL_ARTIFACTS_URI/model/
Explanation: Validate model artifact files in the Cloud Storage bucket
End of explanation
!gsutil -m cp -r $GCS_MODEL_ARTIFACTS_URI/model/ ./predictor/
!ls -ltrR ./predictor/model
Explanation: Copy files from Cloud Storage to local directory
End of explanation
%%bash -s $APP_NAME
APP_NAME=$1
cat << EOF > ./predictor/Dockerfile
FROM pytorch/torchserve:latest-cpu
# install dependencies
RUN python3 -m pip install --upgrade pip
RUN pip3 install transformers
USER model-server
# copy model artifacts, custom handler and other dependencies
COPY ./custom_handler.py /home/model-server/
COPY ./index_to_name.json /home/model-server/
COPY ./model/$APP_NAME/ /home/model-server/
# create torchserve configuration file
USER root
RUN printf "\nservice_envelope=json" >> /home/model-server/config.properties
RUN printf "\ninference_address=http://0.0.0.0:7080" >> /home/model-server/config.properties
RUN printf "\nmanagement_address=http://0.0.0.0:7081" >> /home/model-server/config.properties
USER model-server
# expose health and prediction listener ports from the image
EXPOSE 7080
EXPOSE 7081
# create model archive file packaging model artifacts and dependencies
RUN torch-model-archiver -f \
--model-name=$APP_NAME \
--version=1.0 \
--serialized-file=/home/model-server/pytorch_model.bin \
--handler=/home/model-server/custom_handler.py \
--extra-files "/home/model-server/config.json,/home/model-server/tokenizer.json,/home/model-server/training_args.bin,/home/model-server/tokenizer_config.json,/home/model-server/special_tokens_map.json,/home/model-server/vocab.txt,/home/model-server/index_to_name.json" \
--export-path=/home/model-server/model-store
# run Torchserve HTTP serve to respond to prediction requests
CMD ["torchserve", \
"--start", \
"--ts-config=/home/model-server/config.properties", \
"--models", \
"$APP_NAME=$APP_NAME.mar", \
"--model-store", \
"/home/model-server/model-store"]
EOF
echo "Writing ./predictor/Dockerfile"
Explanation: Build the container image
Create a Dockerfile with TorchServe as base image:
RUN: Installs dependencies such as transformers
COPY: Add model artifacts to /home/model-server/ directory of the container image
COPY: Add custom handler script to /home/model-server/ directory of the container image
RUN: Create /home/model-server/config.properties to define the serving configuration (health and prediction listener ports)
RUN: Run Torch model archiver to create a model archive file from the files copied into the image /home/model-server/. The model archive is saved in the /home/model-server/model-store/ with name same as <model-name>.mar
CMD: Launch Torchserve HTTP server referencing the configuration properties and enables serving for the model
End of explanation
CUSTOM_PREDICTOR_IMAGE_URI = f"gcr.io/{PROJECT_ID}/pytorch_predict_{APP_NAME}"
print(f"CUSTOM_PREDICTOR_IMAGE_URI = {CUSTOM_PREDICTOR_IMAGE_URI}")
!docker build \
--tag=$CUSTOM_PREDICTOR_IMAGE_URI \
./predictor
Explanation: Build the docker image tagged with Container Registry (gcr.io) path
End of explanation
!docker stop local_bert_classifier
!docker run -t -d --rm -p 7080:7080 --name=local_bert_classifier $CUSTOM_PREDICTOR_IMAGE_URI
!sleep 20
Explanation: Run the container locally [Optional]
Before pushing the container image to Container Registry for use with Vertex AI Predictions, you can run it as a container in your local environment to verify that the server works as expected.
To run the container image as a container locally, run the following command:
End of explanation
!curl http://localhost:7080/ping
Explanation: To send the container's server a health check, run the following command:
End of explanation
%%bash -s $APP_NAME
APP_NAME=$1
cat > ./predictor/instances.json <<END
{
"instances": [
{
"data": {
"b64": "$(echo 'Take away the CGI and the A-list cast and you end up with film with less punch.' | base64 --wrap=0)"
}
}
]
}
END
curl -s -X POST \
-H "Content-Type: application/json; charset=utf-8" \
-d @./predictor/instances.json \
http://localhost:7080/predictions/$APP_NAME/
Explanation: If successful, the server returns the following response:
{
"status": "Healthy"
}
To send the container's server a prediction request, run the following commands:
End of explanation
!docker stop local_bert_classifier
Explanation: This request uses a test sentence. If successful, the server returns a prediction in the following format:
{"predictions": ["Negative"]}
To stop the container, run the following command:
End of explanation
!docker push $CUSTOM_PREDICTOR_IMAGE_URI
Explanation: Deploying the serving container to Vertex AI Predictions
We create a model resource on Vertex AI and deploy the model to a Vertex AI Endpoint. You must deploy a model to an endpoint before using the model. The deployed model runs the custom container image to serve predictions.
Push the serving container to Container Registry
Push your container image with inference code and dependencies to your Container Registry
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize the Vertex AI SDK for Python
End of explanation
VERSION = 1
model_display_name = f"{APP_NAME}-v{VERSION}"
model_description = "PyTorch based text classifier with custom container"
MODEL_NAME = APP_NAME
health_route = "/ping"
predict_route = f"/predictions/{MODEL_NAME}"
serving_container_ports = [7080]
model = aiplatform.Model.upload(
display_name=model_display_name,
description=model_description,
serving_container_image_uri=CUSTOM_PREDICTOR_IMAGE_URI,
serving_container_predict_route=predict_route,
serving_container_health_route=health_route,
serving_container_ports=serving_container_ports,
)
model.wait()
print(model.display_name)
print(model.resource_name)
Explanation: Create a Model resource with custom serving container
End of explanation
endpoint_display_name = f"{APP_NAME}-endpoint"
endpoint = aiplatform.Endpoint.create(display_name=endpoint_display_name)
Explanation: For more context on uploading or importing a model, refer to the documentation
Create an Endpoint for Model with Custom Container
End of explanation
traffic_percentage = 100
machine_type = "n1-standard-4"
deployed_model_display_name = model_display_name
min_replica_count = 1
max_replica_count = 3
sync = True
model.deploy(
endpoint=endpoint,
deployed_model_display_name=deployed_model_display_name,
machine_type=machine_type,
traffic_percentage=traffic_percentage,
sync=sync,
)
Explanation: Deploy the Model to Endpoint
Deploying a model associates physical resources with the model so it can serve online predictions with low latency.
NOTE: This step takes a few minutes to deploy the resources.
End of explanation
endpoint_display_name = f"{APP_NAME}-endpoint"
filter = f'display_name="{endpoint_display_name}"'
for endpoint_info in aiplatform.Endpoint.list(filter=filter):
print(
f"Endpoint display name = {endpoint_info.display_name} resource id ={endpoint_info.resource_name} "
)
endpoint = aiplatform.Endpoint(endpoint_info.resource_name)
endpoint.list_models()
Explanation: Invoking the Endpoint with deployed Model using Vertex AI SDK to make predictions
Get the Endpoint id
End of explanation
test_instances = [
b"Jaw dropping visual affects and action! One of the best I have seen to date.",
b"Take away the CGI and the A-list cast and you end up with film with less punch.",
]
Explanation: Formatting input for online prediction
This notebook uses TorchServe's KServe-based inference API, whose request format is also compatible with Vertex AI Predictions. For online prediction requests, format the prediction input instances as JSON with base64 encoding as shown here:
[
{
"data": {
"b64": "<base64 encoded string>"
}
}
]
Define sample texts to test predictions
End of explanation
print("=" * 100)
for instance in test_instances:
print(f"Input text: \n\t{instance.decode('utf-8')}\n")
b64_encoded = base64.b64encode(instance)
test_instance = [{"data": {"b64": f"{str(b64_encoded.decode('utf-8'))}"}}]
print(f"Formatted input: \n{json.dumps(test_instance, indent=4)}\n")
prediction = endpoint.predict(instances=test_instance)
print(f"Prediction response: \n\t{prediction}")
print("=" * 100)
Explanation: Sending an online prediction request
Format input text string and call prediction endpoint with formatted input request and get the response
End of explanation
endpoint_display_name = f"{APP_NAME}-endpoint"
%%bash -s $REGION $endpoint_display_name
REGION=$1
endpoint_display_name=$2
# get endpoint id
echo "REGION = ${REGION}"
echo "ENDPOINT DISPLAY NAME = ${endpoint_display_name}"
endpoint_id=$(gcloud beta ai endpoints list --region ${REGION} --filter "display_name=${endpoint_display_name}" --format "value(ENDPOINT_ID)")
echo "ENDPOINT_ID = ${endpoint_id}"
# call prediction endpoint
input_text="Take away the CGI and the A-list cast and you end up with film with less punch."
echo "INPUT TEXT = ${input_text}"
prediction=$(
echo
{
"instances": [
{
"data": {
"b64": "$(echo ${input_text} | base64 --wrap=0)"
}
}
]
}
| gcloud beta ai endpoints predict ${endpoint_id} --region=$REGION --json-request -)
echo "PREDICTION RESPONSE = ${prediction}"
Explanation: [Optional] Make prediction requests using gcloud CLI
You can also call the Vertex AI Endpoint to make predictions using gcloud beta ai endpoints predict.
The following cell shows how to make a prediction request to Vertex AI Endpoints using gcloud CLI:
End of explanation
delete_custom_job = False
delete_hp_tuning_job = False
delete_endpoint = True
delete_model = False
delete_bucket = False
delete_image = False
Explanation: Cleaning up
Cleaning up training and deployment resources
To clean up all Google Cloud resources used in this notebook, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Training Jobs
Model
Endpoint
Cloud Storage Bucket
Container Images
Set flags for the resource type to be deleted
End of explanation
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = f"projects/{PROJECT_ID}/locations/{REGION}"
client_options = {"api_endpoint": API_ENDPOINT}
# Initialize Vertex AI SDK
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
# functions to create client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
Explanation: Define clients for jobs, models and endpoints
End of explanation
def list_custom_jobs():
client = clients["job"]
jobs = []
response = client.list_custom_jobs(parent=PARENT)
for row in response:
_row = MessageToDict(row._pb)
if _row["displayName"].startswith(APP_NAME):
jobs.append((_row["name"], _row["displayName"]))
return jobs
def list_hp_tuning_jobs():
client = clients["job"]
jobs = []
response = client.list_hyperparameter_tuning_jobs(parent=PARENT)
for row in response:
_row = MessageToDict(row._pb)
if _row["displayName"].startswith(APP_NAME):
jobs.append((_row["name"], _row["displayName"]))
return jobs
def list_models():
client = clients["model"]
models = []
response = client.list_models(parent=PARENT)
for row in response:
_row = MessageToDict(row._pb)
if _row["displayName"].startswith(APP_NAME):
models.append((_row["name"], _row["displayName"]))
return models
def list_endpoints():
client = clients["endpoint"]
endpoints = []
response = client.list_endpoints(parent=PARENT)
for row in response:
_row = MessageToDict(row._pb)
if _row["displayName"].startswith(APP_NAME):
print(_row)
endpoints.append((_row["name"], _row["displayName"]))
return endpoints
Explanation: Define functions to list the jobs, models and endpoints starting with APP_NAME defined earlier in the notebook
End of explanation
# Delete the custom training using the Vertex AI fully qualified identifier for the custom training
try:
if delete_custom_job:
custom_jobs = list_custom_jobs()
for job_id, job_name in custom_jobs:
print(f"Deleting job {job_id} [{job_name}]")
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
Explanation: Deleting custom training jobs
End of explanation
# Delete the hyperparameter tuning jobs using the Vertex AI fully qualified identifier for the hyperparameter tuning job
try:
if delete_hp_tuning_job:
hp_tuning_jobs = list_hp_tuning_jobs()
for job_id, job_name in hp_tuning_jobs:
print(f"Deleting job {job_id} [{job_name}]")
clients["job"].delete_hyperparameter_tuning_job(name=job_id)
except Exception as e:
print(e)
Explanation: Deleting hyperparameter tuning jobs
End of explanation
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint:
endpoints = list_endpoints()
for endpoint_id, endpoint_name in endpoints:
endpoint = aiplatform.Endpoint(endpoint_id)
# undeploy models from the endpoint
print(f"Undeploying all deployed models from the endpoint {endpoint_name}")
endpoint.undeploy_all(sync=True)
# deleting endpoint
print(f"Deleting endpoint {endpoint_id} [{endpoint_name}]")
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
Explanation: Undeploy models and Delete endpoints
End of explanation
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
models = list_models()
for model_id, model_name in models:
print(f"Deleting model {model_id} [{model_name}]")
clients["model"].delete_model(name=model_id)
except Exception as e:
print(e)
Explanation: Deleting models
End of explanation
if delete_bucket and "BUCKET_NAME" in globals():
print(f"Deleting all contents from the bucket {BUCKET_NAME}")
shell_output = ! gsutil du -as $BUCKET_NAME
print(
f"Size of the bucket {BUCKET_NAME} before deleting = {shell_output[0].split()[0]} bytes"
)
# uncomment below line to delete contents of the bucket
# ! gsutil rm -r $BUCKET_NAME
shell_output = ! gsutil du -as $BUCKET_NAME
if float(shell_output[0].split()[0]) > 0:
print(
"PLEASE UNCOMMENT LINE TO DELETE BUCKET. CONTENT FROM THE BUCKET NOT DELETED"
)
print(
f"Size of the bucket {BUCKET_NAME} after deleting = {shell_output[0].split()[0]} bytes"
)
Explanation: Delete contents from the staging bucket
NOTE: Everything in this Cloud Storage bucket will be DELETED. Please run it with caution.
End of explanation
gcr_images = !gcloud container images list --repository=gcr.io/$PROJECT_ID --filter="name~"$APP_NAME
if delete_image:
for image in gcr_images:
if image != "NAME": # skip header line
print(f"Deleting image {image} including all tags")
!gcloud container images delete $image --force-delete-tags --quiet
Explanation: Delete images from Container Registry
Deletes all the container images created in this tutorial with prefix defined by variable APP_NAME from the registry. All associated tags are also deleted.
End of explanation |
4,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Fairness Indicators on TF-Hub Text Embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import other required libraries.
Step3: Dataset
In this notebook, you work with the Civil Comments dataset which contains approximately 2 million public comments made public by the Civil Comments platform in 2017 for ongoing research. This effort was sponsored by Jigsaw, who have hosted competitions on Kaggle to help classify toxic comments as well as minimize unintended model bias.
Each individual text comment in the dataset has a toxicity label, with the label being 1 if the comment is toxic and 0 if the comment is non-toxic. Within the data, a subset of comments are labeled with a variety of identity attributes, including categories for gender, sexual orientation, religion, and race or ethnicity.
Prepare the data
TensorFlow parses features from data using tf.io.FixedLenFeature and tf.io.VarLenFeature. Map out the input feature, output feature, and all other slicing features of interest.
Step4: By default, the notebook downloads a preprocessed version of this dataset, but
you may use the original dataset and re-run the processing steps if
desired.
In the original dataset, each comment is labeled with the percentage
of raters who believed that a comment corresponds to a particular
identity. For example, a comment might be labeled with the following
Step5: Create a TensorFlow Model Analysis Pipeline
The Fairness Indicators library operates on TensorFlow Model Analysis (TFMA) models. TFMA models wrap TensorFlow models with additional functionality to evaluate and visualize their results. The actual evaluation occurs inside of an Apache Beam pipeline.
The steps you follow to create a TFMA pipeline are
Step6: Run TFMA & Fairness Indicators
Fairness Indicators Metrics
Some of the metrics available with Fairness Indicators are
Step7: NNLM
Step8: Universal Sentence Encoder
Step9: Comparing Embeddings
You can also use Fairness Indicators to compare embeddings directly. For example, compare the models generated from the NNLM and USE embeddings. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q -U pip==20.2
!pip install fairness-indicators \
"absl-py==0.12.0" \
"pyarrow==2.0.0" \
"apache-beam==2.38.0" \
"avro-python3==1.9.1"
Explanation: Fairness Indicators on TF-Hub Text Embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/random-nnlm-en-dim128/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
In this tutorial, you will learn how to use Fairness Indicators to evaluate embeddings from TF Hub. This notebook uses the Civil Comments dataset.
Setup
Install the required libraries.
End of explanation
import os
import tempfile
import apache_beam as beam
from datetime import datetime
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.view import widget_view
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from fairness_indicators import example_model
from fairness_indicators.tutorial_utils import util
Explanation: Import other required libraries.
End of explanation
BASE_DIR = tempfile.gettempdir()
# The input and output features of the classifier
TEXT_FEATURE = 'comment_text'
LABEL = 'toxicity'
FEATURE_MAP = {
# input and output features
LABEL: tf.io.FixedLenFeature([], tf.float32),
TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),
# slicing features
'sexual_orientation': tf.io.VarLenFeature(tf.string),
'gender': tf.io.VarLenFeature(tf.string),
'religion': tf.io.VarLenFeature(tf.string),
'race': tf.io.VarLenFeature(tf.string),
'disability': tf.io.VarLenFeature(tf.string)
}
IDENTITY_TERMS = ['gender', 'sexual_orientation', 'race', 'religion', 'disability']
Explanation: Dataset
In this notebook, you work with the Civil Comments dataset which contains approximately 2 million public comments made public by the Civil Comments platform in 2017 for ongoing research. This effort was sponsored by Jigsaw, who have hosted competitions on Kaggle to help classify toxic comments as well as minimize unintended model bias.
Each individual text comment in the dataset has a toxicity label, with the label being 1 if the comment is toxic and 0 if the comment is non-toxic. Within the data, a subset of comments are labeled with a variety of identity attributes, including categories for gender, sexual orientation, religion, and race or ethnicity.
Prepare the data
TensorFlow parses features from data using tf.io.FixedLenFeature and tf.io.VarLenFeature. Map out the input feature, output feature, and all other slicing features of interest.
End of explanation
download_original_data = False #@param {type:"boolean"}
if download_original_data:
train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')
# The identity terms list will be grouped together by their categories
# (see 'IDENTITY_COLUMNS') on threshold 0.5. Only the identity term column,
# text column and label column will be kept after processing.
train_tf_file = util.convert_comments_data(train_tf_file)
validate_tf_file = util.convert_comments_data(validate_tf_file)
else:
train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')
Explanation: By default, the notebook downloads a preprocessed version of this dataset, but
you may use the original dataset and re-run the processing steps if
desired.
In the original dataset, each comment is labeled with the percentage
of raters who believed that a comment corresponds to a particular
identity. For example, a comment might be labeled with the following:
{ male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8,
homosexual_gay_or_lesbian: 1.0 }.
The processing step groups identity by category (gender,
sexual_orientation, etc.) and removes identities with a score less
than 0.5. So the example above would be converted to the following:
{ gender: [female], sexual_orientation: [heterosexual,
homosexual_gay_or_lesbian] }
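A rough sketch of that thresholding-and-grouping step is shown below; this is for illustration only, and the variable names are placeholders — the actual logic lives in util.convert_comments_data.
```
# Illustrative sketch only; the real implementation is util.convert_comments_data.
raw_scores = {"male": 0.3, "female": 1.0, "transgender": 0.0,
              "heterosexual": 0.8, "homosexual_gay_or_lesbian": 1.0}
categories = {"gender": ["male", "female", "transgender"],
              "sexual_orientation": ["heterosexual", "homosexual_gay_or_lesbian"]}

# Keep only identities rated >= 0.5 and group them by category.
grouped = {cat: [term for term in terms if raw_scores.get(term, 0.0) >= 0.5]
           for cat, terms in categories.items()}
# grouped == {"gender": ["female"],
#             "sexual_orientation": ["heterosexual", "homosexual_gay_or_lesbian"]}
```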
Download the dataset.
End of explanation
def embedding_fairness_result(embedding, identity_term='gender'):
model_dir = os.path.join(BASE_DIR, 'train',
datetime.now().strftime('%Y%m%d-%H%M%S'))
print("Training classifier for " + embedding)
classifier = example_model.train_model(model_dir,
train_tf_file,
LABEL,
TEXT_FEATURE,
FEATURE_MAP,
embedding)
# Create a unique path to store the results for this embedding.
embedding_name = embedding.split('/')[-2]
eval_result_path = os.path.join(BASE_DIR, 'eval_result', embedding_name)
example_model.evaluate_model(classifier,
validate_tf_file,
eval_result_path,
identity_term,
LABEL,
FEATURE_MAP)
return tfma.load_eval_result(output_path=eval_result_path)
Explanation: Create a TensorFlow Model Analysis Pipeline
The Fairness Indicators library operates on TensorFlow Model Analysis (TFMA) models. TFMA models wrap TensorFlow models with additional functionality to evaluate and visualize their results. The actual evaluation occurs inside of an Apache Beam pipeline.
The steps you follow to create a TFMA pipeline are:
1. Build a TensorFlow model
2. Build a TFMA model on top of the TensorFlow model
3. Run the model analysis in an orchestrator. The example model in this notebook uses Apache Beam as the orchestrator.
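Once the evaluation has been written out, the loaded tfma.EvalResult can also be inspected programmatically. A small sketch follows — the output path is assumed for illustration, and the nesting keys of the metrics dictionary may differ by TFMA version:
```
# Path assumed for illustration; embedding_fairness_result returns the same object.
result = tfma.load_eval_result(output_path="/tmp/eval_result/nnlm-en-dim128")

# Each entry pairs a slice (e.g. a gender value) with its metrics dictionary.
for slice_key, metrics in result.slicing_metrics:
    print(slice_key, sorted(metrics[""][""].keys()))
```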
End of explanation
eval_result_random_nnlm = embedding_fairness_result('https://tfhub.dev/google/random-nnlm-en-dim128/1')
widget_view.render_fairness_indicator(eval_result=eval_result_random_nnlm)
Explanation: Run TFMA & Fairness Indicators
Fairness Indicators Metrics
Some of the metrics available with Fairness Indicators are:
Negative Rate, False Negative Rate (FNR), and True Negative Rate (TNR)
Positive Rate, False Positive Rate (FPR), and True Positive Rate (TPR)
Accuracy
Precision and Recall
Precision-Recall AUC
ROC AUC
Text Embeddings
TF-Hub provides several text embeddings. These embeddings will serve as the feature column for the different models. This tutorial uses the following embeddings:
random-nnlm-en-dim128: random text embeddings, this serves as a convenient baseline.
nnlm-en-dim128: a text embedding based on A Neural Probabilistic Language Model.
universal-sentence-encoder: a text embedding based on Universal Sentence Encoder.
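For reference, the sketch below shows the common pattern by which a TF-Hub handle like these becomes the text feature column of an estimator. The exact wiring in this notebook happens inside example_model.train_model, so treat this as an assumed illustration of the general pattern rather than that function's code:
```
# Assumed illustration of the usual TF-Hub + estimator pattern, not the
# exact code inside example_model.train_model.
embedded_text_feature_column = hub.text_embedding_column(
    key=TEXT_FEATURE,  # 'comment_text'
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

classifier = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    feature_columns=[embedded_text_feature_column])
```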
Fairness Indicator Results
Compute fairness indicators with the embedding_fairness_result pipeline, and then render the results in the Fairness Indicator UI widget with widget_view.render_fairness_indicator for all the above embeddings.
Note: You may need to run the widget_view.render_fairness_indicator cells twice for the visualization to be displayed.
Random NNLM
End of explanation
eval_result_nnlm = embedding_fairness_result('https://tfhub.dev/google/nnlm-en-dim128/1')
widget_view.render_fairness_indicator(eval_result=eval_result_nnlm)
Explanation: NNLM
End of explanation
eval_result_use = embedding_fairness_result('https://tfhub.dev/google/universal-sentence-encoder/2')
widget_view.render_fairness_indicator(eval_result=eval_result_use)
Explanation: Universal Sentence Encoder
End of explanation
widget_view.render_fairness_indicator(multi_eval_results={'nnlm': eval_result_nnlm, 'use': eval_result_use})
Explanation: Comparing Embeddings
You can also use Fairness Indicators to compare embeddings directly. For example, compare the models generated from the NNLM and USE embeddings.
End of explanation |
4,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How we solve a model defined by the IndShockConsumerType class
The IndShockConsumerType reprents the work-horse consumption savings model with temporary and permanent shocks to income, finite or infinite horizons, CRRA utility and more. In this DemARK we take you through the steps involved in solving one period of such a model. The inheritance chains can be a little long, so figuring out where all the parameters and methods come from can be a bit confusing. Hence this map! The intention is to make it easier to know how to inheret from IndShockConsumerType in the sense that you know where to look for specific solver logic, but also so you know can figure out which methods to overwrite or supplement in your own AgentType and solver!
The solveConsIndShock function
In HARK, a period's problem is always solved by the callable (function or callable object instance) stored in the field solve_one_period. In the case of IndShockConsumerType, this function is called solveConsIndShock. The function accepts a number of arguments, that it uses to construct an instance of either a ConsIndShockSolverBasic or a ConsIndShockSolver. These solvers both have the methods prepare_to_solve and solve, that we will have a closer look at in this notebook. This means, that the logic of solveConsIndShock is basically
Step1: Let's have a look at the solution in time period second period. We should then be able to
Step2: Let us then create a solver for the first period.
Step3: Many important values are now calculated and stored in solver, such as the effective discount factor, the smallest permanent income shock, and more.
Step4: These values were calculated in setAndUpdateValues. In defBoroCnst that was also called, several things were calculated, for example the consumption function defined by the borrowing constraint.
Step5: Then, we set up all the grids, grabs the discrete shock distributions, and state grids in prepare_to_calc_EndOfPrdvP.
Step6: Then we calculate the marginal utility of next period's resources given the stochastic environment and current grids.
Step7: Then, we essentially just have to construct the (resource, consumption) pairs by completing the EGM step, and constructing the interpolants by using the knowledge that the limiting solutions are those of the perfect foresight model. This is done with make_basic_solution as discussed above.
Step8: Lastly, we add the MPC and human wealth quantities we calculated in the method that prepared the solution of this period.
Step9: All that is left is to verify that the solution in solution is identical to LifecycleExample.solution[0]. We can plot the against each other
Step10: Although, it's probably even clearer if we just subtract the function values from each other at some grid. | Python Code:
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType, init_lifecycle
import numpy as np
import matplotlib.pyplot as plt
LifecycleExample = IndShockConsumerType(**init_lifecycle)
LifecycleExample.cycles = 1 # Make this consumer live a sequence of periods exactly once
LifecycleExample.solve()
Explanation: How we solve a model defined by the IndShockConsumerType class
The IndShockConsumerType reprents the work-horse consumption savings model with temporary and permanent shocks to income, finite or infinite horizons, CRRA utility and more. In this DemARK we take you through the steps involved in solving one period of such a model. The inheritance chains can be a little long, so figuring out where all the parameters and methods come from can be a bit confusing. Hence this map! The intention is to make it easier to know how to inheret from IndShockConsumerType in the sense that you know where to look for specific solver logic, but also so you know can figure out which methods to overwrite or supplement in your own AgentType and solver!
The solveConsIndShock function
In HARK, a period's problem is always solved by the callable (function or callable object instance) stored in the field solve_one_period. In the case of IndShockConsumerType, this function is called solveConsIndShock. The function accepts a number of arguments, that it uses to construct an instance of either a ConsIndShockSolverBasic or a ConsIndShockSolver. These solvers both have the methods prepare_to_solve and solve, that we will have a closer look at in this notebook. This means, that the logic of solveConsIndShock is basically:
Check if cubic interpolation (CubicBool) or construction of the value function interpolant (vFuncBool) are requested. Construct an instance of ConsIndShockSolverBasic if neither are requested, else construct a ConsIndShockSolver. Call this solver.
Call solver.prepare_to_solve()
Call solver.solve() and return the output as the current solution.
Two types of solvers
As mentioned above, solve_one_period will construct an instance of the class ConsIndShockSolverBasicor ConsIndShockSolver. The main difference is whether it uses cubic interpolation or if it explicitly constructs a value function approximation. The choice and construction of a solver instance is bullet 1) from above.
What happens in upon construction
Neither of the two solvers have their own __init__. ConsIndShockSolver inherits from ConsIndShockSolverBasic that in turn inherits from ConsIndShockSetup. ConsIndShockSetup inherits from ConsPerfForesightSolver, which itself is just an Object, so we get the inheritance structure
ConsPerfForesightSolver $\leftarrow$ ConsIndShockSetup $\leftarrow$ ConsIndShockSolverBasic $\leftarrow$ ConsIndShockSolver
When one of the two classes in the end of the inheritance chain is called, it will call ConsIndShockSetup.__init__(args...). This takes a whole list of fixed inputs that then gets assigned to the object through a
ConsIndShockSetup.assign_parameters(solution_next,IncomeDstn,LivPrb,DiscFac,CRRA,Rfree,PermGroFac,BoroCnstArt,aXtraGrid,vFuncBool,CubicBool)
call, that then calls
ConsPerfForesightSolver.assign_parameters(self,solution_next,DiscFac,LivPrb,CRRA,Rfree,PermGroFac)
We're getting kind of detailed here, but it is simply to help us understand the inheritance structure. The methods are quite straightforward, and simply assign the list of variables to self. The ones that do not get assigned by the ConsPerfForesightSolver method get assigned by the ConsIndShockSetup method instead.
After all the input parameters are set, we update the utility function definitions. Remember, that we restrict ourselves to CRRA utility functions, and these are parameterized with the scalar we call CRRA in HARK. We use the two-argument CRRA utility (and derivatives, inverses, etc) from HARK.utilities, so we need to create a lambda (an anonymous function) according to the fixed CRRA we have chosen. This gets done through a call to
ConsIndShockSetup.defUtilityFuncs()
that itself calls
ConsPerfForesightSolver.defUtilityFuncs()
Again, we wish to emphasize the inheritance structure. The method in ConsPerfForesightSolver defines the most basic utility functions (utility, its marginal and its marginal marginal), and ConsIndShockSolver adds additional functions (marginal of inverse, inverse of marginal, marginal of inverse of marginal, and optionally inverse if vFuncBool is true).
To sum up, the __init__ method lives in ConsIndShockSetup, calls assign_parameters and defUtilityFuncs from ConsPerfForesightSolver and defines its own methods with the same names that add what is needed to solve the IndShockConsumerType using EGM. The main things controlled by the end-user are whether cubic interpolation should be used, CubicBool, and whether the value function should be explicitly formed, vFuncBool.
Prepare to solve
We are now in bullet 2) from the list above. The prepare_to_solve method is all about grabbing relevant information from next period's solution and calculating some limiting solutions. It comes from ConsIndShockSetup and calls two methods:
ConsIndShockSetup.setAndUpdateValues(self.solution_next,self.IncomeDstn,self.LivPrb,self.DiscFac)
ConsIndShockSetup.defBoroCnst(self.BoroCnstArt)
First, we have setAndUpdateValues. The main purpose is to grab the relevant vectors that represent the shock distributions, the effective discount factor, and value function (marginal, level, marginal marginal depending on the options). It also calculates some limiting marginal propensities to consume and human wealth levels. Second, we have defBoroCnst. As the name indicates, it calculates the natural borrowing constraint, handles artificial borrowing constraints, and defines the consumption function where the constraint binds (cFuncNowCnst).
To sum up, prepare_to_solve sets up the stochastic environment and borrowing constraints the consumer might face. It also grabs interpolants from "next period"'s solution.
Solve it!
The last method that solveConsIndShock calls on the solver is solve. This method essentially has four steps:
1. Pre-processing for EGM: solver.prepare_to_calc_EndOfPrdvP
1. First step of EGM: solver.calc_EndOfPrdvP
1. Second step of EGM: solver.make_basic_solution
1. Add MPC and human wealth: solver.add_MPC_and_human_wealth
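Chained together (again a simplified sketch rather than the verbatim implementation, reusing the method names listed above), the solve call looks roughly like:
def solve_sketch(solver):
    solver.prepare_to_calc_EndOfPrdvP()    # 1. asset grid and shock draws; stores aNrmNow on self
    EndOfPrdvP = solver.calc_EndOfPrdvP()  # 2. marginal value of end-of-period assets
    solution = solver.make_basic_solution(EndOfPrdvP, solver.aNrmNow, solver.make_linear_cFunc)  # 3. EGM
    solver.add_MPC_and_human_wealth(solution)  # 4. attach MPCs and human wealth
    return solution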
Pre-processing for EGM prepare_to_calc_EndOfPrdvP
Find the relevant end-of-period asset values (according to aXtraGrid and the natural borrowing constraint) and the next-period values implied by current-period end-of-period assets and stochastic elements. The method stores the following in self:
values of permanent shocks in PermShkVals_temp
shock probabilities in ShkPrbs_temp
next period resources in mNrmNext
current grid of end-of-period assets in aNrmNow
The method also returns aNrmNow. The definition is in ConsIndShockSolverBasic and is not overwritten in ConsIndShockSolver.
First step of EGM calc_EndOfPrdvP
Find the marginal value of having some level of end-of-period assets today. End-of-period assets as well as stochastics imply next-period resources at the beginning of the period, calculated above. Return the result as EndOfPrdvP.
Second step of EGM make_basic_solution
Apply the inverse marginal utility function to the nodes from above to find (m, c) pairs for the new consumption function in get_points_for_interpolation and create the interpolants in use_points_for_interpolation. The latter constructs the ConsumerSolution that contains the current consumption function cFunc, the current marginal value function vPfunc, and the smallest possible resource level mNrmMinNow.
Add MPC and human wealth add_MPC_and_human_wealth
Add values calculated in defBoroCnst now that we have a solution object to put them in.
Special to the non-Basic solver
We are now done, but in the ConsIndShockSolver (non-Basic!) solver there are a few extra steps. We add steady state m, and depending on the values of vFuncBool and CubicBool we also add the value function and the marginal marginal value function.
Let's try it in action!
First, we define a standard lifecycle model and solve it.
End of explanation
from HARK.utilities import plot_funcs
plot_funcs([LifecycleExample.solution[0].cFunc],LifecycleExample.solution[0].mNrmMin,10)
Explanation: Let's have a look at the solution in the second time period. We should then be able to plot the consumption function:
End of explanation
from HARK.ConsumptionSaving.ConsIndShockModel import ConsIndShockSolverBasic
solver = ConsIndShockSolverBasic(LifecycleExample.solution[1],
LifecycleExample.IncShkDstn[0],
LifecycleExample.LivPrb[0],
LifecycleExample.DiscFac,
LifecycleExample.CRRA,
LifecycleExample.Rfree,
LifecycleExample.PermGroFac[0],
LifecycleExample.BoroCnstArt,
LifecycleExample.aXtraGrid,
LifecycleExample.vFuncBool,
LifecycleExample.CubicBool)
solver.prepare_to_solve()
Explanation: Let us then create a solver for the first period.
End of explanation
solver.DiscFacEff
solver.PermShkMinNext
Explanation: Many important values are now calculated and stored in solver, such as the effective discount factor, the smallest permanent income shock, and more.
End of explanation
plot_funcs([solver.cFuncNowCnst],solver.mNrmMinNow,10)
Explanation: These values were calculated in setAndUpdateValues. In defBoroCnst, which was also called, several things were calculated as well, for example the consumption function defined by the borrowing constraint.
End of explanation
solver.prepare_to_calc_EndOfPrdvP()
Explanation: Then, we set up all the grids and grab the discrete shock distributions and state grids in prepare_to_calc_EndOfPrdvP.
End of explanation
EndOfPrdvP = solver.calc_EndOfPrdvP()
Explanation: Then we calculate the marginal utility of next period's resources given the stochastic environment and current grids.
End of explanation
solution = solver.make_basic_solution(EndOfPrdvP,solver.aNrmNow,solver.make_linear_cFunc)
Explanation: Then, we essentially just have to construct the (resource, consumption) pairs by completing the EGM step, and constructing the interpolants by using the knowledge that the limiting solutions are those of the perfect foresight model. This is done with make_basic_solution as discussed above.
End of explanation
solver.add_MPC_and_human_wealth(solution)
Explanation: Lastly, we add the MPC and human wealth quantities we calculated in the method that prepared the solution of this period.
End of explanation
plot_funcs([LifecycleExample.solution[0].cFunc, solution.cFunc],LifecycleExample.solution[0].mNrmMin,10)
Explanation: All that is left is to verify that the solution in solution is identical to LifecycleExample.solution[0]. We can plot the two against each other:
End of explanation
eval_grid = np.linspace(0, 20, 200)
LifecycleExample.solution[0].cFunc(eval_grid) - solution.cFunc(eval_grid)
Explanation: Although, it's probably even clearer if we just subtract the function values from each other at some grid.
End of explanation |
4,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: 1. Create a Doc object from the file peterrabbit.txt<br>
HINT
Step2: 2. For every token in the third sentence, print the token text, the POS tag, the fine-grained TAG tag, and the description of the fine-grained tag.
Step3: 3. Provide a frequency list of POS tags from the entire document
Step4: 4. CHALLENGE
Step5: 5. Display the Dependency Parse for the third sentence
Step6: Show the first two named entities from Beatrix Potter's The Tale of Peter Rabbit **
Step7: 7. How many sentences are contained in The Tale of Peter Rabbit?
Step8: 8. CHALLENGE
Step9: 9. CHALLENGE | Python Code:
# RUN THIS CELL to perform standard imports:
import spacy
nlp = spacy.load('en_core_web_sm')
from spacy import displacy
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Parts of Speech Assessment
For this assessment we'll be using the short story The Tale of Peter Rabbit by Beatrix Potter (1902). <br>The story is in the public domain; the text file was obtained from Project Gutenberg.
End of explanation
with open('../TextFiles/peterrabbit.txt') as f:
doc = nlp(f.read())
Explanation: 1. Create a Doc object from the file peterrabbit.txt<br>
HINT: Use with open('../TextFiles/peterrabbit.txt') as f:
End of explanation
# Enter your code here:
for tokens in list(doc.sents)[3]:
print(f"{tokens.text:{15}} {tokens.pos_:{10}} {tokens.tag_:{10}} {spacy.explain(tokens.tag_)} ")
Explanation: 2. For every token in the third sentence, print the token text, the POS tag, the fine-grained TAG tag, and the description of the fine-grained tag.
End of explanation
POS_counts = doc.count_by(spacy.attrs.POS)
for k,v in sorted(POS_counts.items()):
print(f'{k}. {doc.vocab[k].text:{10}} {v}')
Explanation: 3. Provide a frequency list of POS tags from the entire document
End of explanation
total_tokens = len([tokens for tokens in doc])
noun_tokens = len([tokens for tokens in doc if tokens.pos_ == 'NOUN'])
(noun_tokens / total_tokens) * 100
Explanation: 4. CHALLENGE: What percentage of tokens are nouns?<br>
HINT: the attribute ID for 'NOUN' is 91
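An alternative sketch that leans directly on the hint (assuming, as the hint states, that 91 is the integer attribute ID spaCy uses for 'NOUN'):
POS_counts = doc.count_by(spacy.attrs.POS)
100 * POS_counts[91] / len(doc)  # percentage of tokens tagged NOUN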
End of explanation
displacy.render(list(doc.sents)[3],style='dep', jupyter=True, options={'distance':50})
Explanation: 5. Display the Dependency Parse for the third sentence
End of explanation
for ent in doc.ents[:2]:
print(ent.text+' - '+ent.label_+' - '+str(spacy.explain(ent.label_)))
Explanation: 6. Show the first two named entities from Beatrix Potter's The Tale of Peter Rabbit
End of explanation
len([s for s in doc.sents])
Explanation: 7. How many sentences are contained in The Tale of Peter Rabbit?
End of explanation
list_of_sents = [nlp(sent.text) for sent in doc.sents]
list_of_ners = [doc for doc in list_of_sents if doc.ents]
len(list_of_ners)
Explanation: 8. CHALLENGE: How many sentences contain named entities?
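Since a spaCy sentence Span exposes its own .ents, a shorter alternative (a sketch, not the only solution) would be:
len([sent for sent in doc.sents if sent.ents])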
End of explanation
displacy.render(list_of_sents[0], style='ent', jupyter=True)
Explanation: 9. CHALLENGE: Display the named entity visualization for list_of_sents[0] from the previous problem
End of explanation |
4,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ConvNet
Let's get the data and training interface from where we left in the last notebook.
Jump_to lesson 10 video
Step1: Batchnorm
Custom
Let's start by building our own BatchNorm layer from scratch.
Jump_to lesson 10 video
Step2: We can then use it in training and see how it helps keep the activations means to 0 and the std to 1.
Step3: Builtin batchnorm
Jump_to lesson 10 video
Step4: With scheduler
Now let's add the usual warm-up/annealing.
Step5: More norms
Layer norm
From the paper
Step6: Thought experiment
Step7: Question
Step8: Running Batch Norm
To solve this problem we introduce a Running BatchNorm that uses smoother running mean and variance for the mean and std.
Jump_to lesson 10 video
Step9: This solves the small batch size issue!
What can we do in a single epoch?
Now let's see with a decent batch size what result we can get.
Jump_to lesson 10 video
Step10: Export | Python Code:
x_train,y_train,x_valid,y_valid = get_data()
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
mnist_view = view_tfm(1,28,28)
cbfs = [Recorder,
partial(AvgStatsCallback,accuracy),
CudaCallback,
partial(BatchTransformXCallback, mnist_view)]
nfs = [8,16,32,64,64]
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
%time run.fit(2, learn)
Explanation: ConvNet
Let's get the data and training interface from where we left in the last notebook.
Jump_to lesson 10 video
End of explanation
class BatchNorm(nn.Module):
def __init__(self, nf, mom=0.1, eps=1e-5):
super().__init__()
# NB: pytorch bn mom is opposite of what you'd expect
self.mom,self.eps = mom,eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
self.register_buffer('vars', torch.ones(1,nf,1,1))
self.register_buffer('means', torch.zeros(1,nf,1,1))
def update_stats(self, x):
m = x.mean((0,2,3), keepdim=True)
v = x.var ((0,2,3), keepdim=True)
self.means.lerp_(m, self.mom)
self.vars.lerp_ (v, self.mom)
return m,v
def forward(self, x):
if self.training:
with torch.no_grad(): m,v = self.update_stats(x)
else: m,v = self.means,self.vars
x = (x-m) / (v+self.eps).sqrt()
return x*self.mults + self.adds
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
# No bias needed if using bn
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(BatchNorm(nf))
return nn.Sequential(*layers)
#export
def init_cnn_(m, f):
if isinstance(m, nn.Conv2d):
f(m.weight, a=0.1)
if getattr(m, 'bias', None) is not None: m.bias.data.zero_()
for l in m.children(): init_cnn_(l, f)
def init_cnn(m, uniform=False):
f = init.kaiming_uniform_ if uniform else init.kaiming_normal_
init_cnn_(m, f)
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model, uniform=uniform)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
Explanation: Batchnorm
Custom
Let's start by building our own BatchNorm layer from scratch.
Jump_to lesson 10 video
End of explanation
learn,run = get_learn_run(nfs, data, 0.9, conv_layer, cbs=cbfs)
with Hooks(learn.model, append_stats) as hooks:
run.fit(1, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks[:-1]:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
plt.legend(range(6));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks[:-1]:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
learn,run = get_learn_run(nfs, data, 1.0, conv_layer, cbs=cbfs)
%time run.fit(3, learn)
Explanation: We can then use it in training and see how it helps keep the activations means to 0 and the std to 1.
End of explanation
#export
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(nn.BatchNorm2d(nf, eps=1e-5, momentum=0.1))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs)
%time run.fit(3, learn)
Explanation: Builtin batchnorm
Jump_to lesson 10 video
End of explanation
sched = combine_scheds([0.3, 0.7], [sched_lin(0.6, 2.), sched_lin(2., 0.1)])
learn,run = get_learn_run(nfs, data, 0.9, conv_layer, cbs=cbfs
+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
Explanation: With scheduler
Now let's add the usual warm-up/annealing.
End of explanation
class LayerNorm(nn.Module):
__constants__ = ['eps']
def __init__(self, eps=1e-5):
super().__init__()
self.eps = eps
self.mult = nn.Parameter(tensor(1.))
self.add = nn.Parameter(tensor(0.))
def forward(self, x):
m = x.mean((1,2,3), keepdim=True)
v = x.var ((1,2,3), keepdim=True)
x = (x-m) / ((v+self.eps).sqrt())
return x*self.mult + self.add
def conv_ln(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True),
GeneralRelu(**kwargs)]
if bn: layers.append(LayerNorm())
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.8, conv_ln, cbs=cbfs)
%time run.fit(3, learn)
Explanation: More norms
Layer norm
From the paper: "batch normalization cannot be applied to online learning tasks or to extremely large distributed models where the minibatches have to be small".
General equation for a norm layer with learnable affine:
$$y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta$$
The difference with BatchNorm is
1. we don't keep a moving average
2. we don't average over the batches dimension but over the hidden dimension, so it's independent of the batch size
Jump_to lesson 10 video
End of explanation
class InstanceNorm(nn.Module):
__constants__ = ['eps']
def __init__(self, nf, eps=1e-0):
super().__init__()
self.eps = eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
def forward(self, x):
m = x.mean((2,3), keepdim=True)
v = x.var ((2,3), keepdim=True)
res = (x-m) / ((v+self.eps).sqrt())
return res*self.mults + self.adds
def conv_in(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True),
GeneralRelu(**kwargs)]
if bn: layers.append(InstanceNorm(nf))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.1, conv_in, cbs=cbfs)
%time run.fit(3, learn)
Explanation: Thought experiment: can this distinguish foggy days from sunny days (assuming you're using it before the first conv)?
Instance norm
From the paper:
The key difference between contrast and batch normalization is that the latter applies the normalization to a whole batch of images instead for single ones:
\begin{equation}\label{eq:bnorm}
y_{tijk} = \frac{x_{tijk} - \mu_{i}}{\sqrt{\sigma_i^2 + \epsilon}},
\quad
\mu_i = \frac{1}{HWT}\sum_{t=1}^T\sum_{l=1}^W \sum_{m=1}^H x_{tilm},
\quad
\sigma_i^2 = \frac{1}{HWT}\sum_{t=1}^T\sum_{l=1}^W \sum_{m=1}^H (x_{tilm} - \mu_i)^2.
\end{equation}
In order to combine the effects of instance-specific normalization and batch normalization, we propose to replace the latter by the instance normalization (also known as contrast normalization) layer:
\begin{equation}\label{eq:inorm}
y_{tijk} = \frac{x_{tijk} - \mu_{ti}}{\sqrt{\sigma_{ti}^2 + \epsilon}},
\quad
\mu_{ti} = \frac{1}{HW}\sum_{l=1}^W \sum_{m=1}^H x_{tilm},
\quad
\sigma_{ti}^2 = \frac{1}{HW}\sum_{l=1}^W \sum_{m=1}^H (x_{tilm} - \mu_{ti})^2.
\end{equation}
Jump_to lesson 10 video
End of explanation
data = DataBunch(*get_dls(train_ds, valid_ds, 2), c)
def conv_layer(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(nn.BatchNorm2d(nf, eps=1e-5, momentum=0.1))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs)
%time run.fit(1, learn)
Explanation: Question: why can't this classify anything?
Lost in all those norms? The authors from the group norm paper have you covered:
Group norm
Jump_to lesson 10 video
From the PyTorch docs:
GroupNorm(num_groups, num_channels, eps=1e-5, affine=True)
The input channels are separated into num_groups groups, each containing
num_channels / num_groups channels. The mean and standard-deviation are calculated
separately over the each group. $\gamma$ and $\beta$ are learnable
per-channel affine transform parameter vectorss of size num_channels if
affine is True.
This layer uses statistics computed from input data in both training and
evaluation modes.
Args:
- num_groups (int): number of groups to separate the channels into
- num_channels (int): number of channels expected in input
- eps: a value added to the denominator for numerical stability. Default: 1e-5
- affine: a boolean value that when set to True, this module
has learnable per-channel affine parameters initialized to ones (for weights)
and zeros (for biases). Default: True.
Shape:
- Input: (N, num_channels, *)
- Output: (N, num_channels, *) (same shape as input)
Examples::
>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent with LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
Fix small batch sizes
What's the problem?
When we compute the statistics (mean and std) for a BatchNorm layer on a small batch, it is possible that we get a standard deviation very close to 0 because there aren't many samples (the variance of a single value is 0, since it equals its mean).
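A tiny illustration of the issue (not part of the original notebook):
tensor([1.0]).var()          # nan: a single observation has no (unbiased) spread
tensor([1.0, 1.0001]).var()  # ~5e-9: dividing by sqrt(var + eps) then amplifies any noise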
Jump_to lesson 10 video
End of explanation
class RunningBatchNorm(nn.Module):
def __init__(self, nf, mom=0.1, eps=1e-5):
super().__init__()
self.mom,self.eps = mom,eps
self.mults = nn.Parameter(torch.ones (nf,1,1))
self.adds = nn.Parameter(torch.zeros(nf,1,1))
self.register_buffer('sums', torch.zeros(1,nf,1,1))
self.register_buffer('sqrs', torch.zeros(1,nf,1,1))
self.register_buffer('batch', tensor(0.))
self.register_buffer('count', tensor(0.))
self.register_buffer('step', tensor(0.))
self.register_buffer('dbias', tensor(0.))
def update_stats(self, x):
bs,nc,*_ = x.shape
self.sums.detach_()
self.sqrs.detach_()
dims = (0,2,3)
s = x.sum(dims, keepdim=True)
ss = (x*x).sum(dims, keepdim=True)
c = self.count.new_tensor(x.numel()/nc)
mom1 = 1 - (1-self.mom)/math.sqrt(bs-1)
self.mom1 = self.dbias.new_tensor(mom1)
self.sums.lerp_(s, self.mom1)
self.sqrs.lerp_(ss, self.mom1)
self.count.lerp_(c, self.mom1)
self.dbias = self.dbias*(1-self.mom1) + self.mom1
self.batch += bs
self.step += 1
def forward(self, x):
if self.training: self.update_stats(x)
sums = self.sums
sqrs = self.sqrs
c = self.count
if self.step<100:
sums = sums / self.dbias
sqrs = sqrs / self.dbias
c = c / self.dbias
means = sums/c
vars = (sqrs/c).sub_(means*means)
if bool(self.batch < 20): vars.clamp_min_(0.01)
x = (x-means).div_((vars.add_(self.eps)).sqrt())
return x.mul_(self.mults).add_(self.adds)
def conv_rbn(ni, nf, ks=3, stride=2, bn=True, **kwargs):
layers = [nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=not bn),
GeneralRelu(**kwargs)]
if bn: layers.append(RunningBatchNorm(nf))
return nn.Sequential(*layers)
learn,run = get_learn_run(nfs, data, 0.4, conv_rbn, cbs=cbfs)
%time run.fit(1, learn)
Explanation: Running Batch Norm
To solve this problem we introduce a Running BatchNorm that uses smoother running mean and variance for the mean and std.
Jump_to lesson 10 video
End of explanation
data = DataBunch(*get_dls(train_ds, valid_ds, 32), c)
learn,run = get_learn_run(nfs, data, 0.9, conv_rbn, cbs=cbfs
+[partial(ParamScheduler,'lr', sched_lin(1., 0.2))])
%time run.fit(1, learn)
Explanation: This solves the small batch size issue!
What can we do in a single epoch?
Now let's see with a decent batch size what result we can get.
Jump_to lesson 10 video
End of explanation
nb_auto_export()
Explanation: Export
End of explanation |
4,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Employee scheduling
pyschedule can be used for employee scheduling. The following example is motivated by instances from
Step1: Solving without shift requests
First build the scenario without any shift requests
Step2: Solving with shift requests
To include shift requests, we use the following heuristic | Python Code:
employee_names = ['A','B','C','D','E','F','G','H']
n_days = 14 # number of days
days = list(range(n_days))
max_seq = 5 # max number of consecutive shifts
min_seq = 2 # min sequence without gaps
max_work = 10 # max total number of shifts
min_work = 7 # min total number of shifts
max_weekend = 3 # max number of weekend shifts
# number of required shifts for each day
shift_requirements =\
{
0: 5,
1: 7,
2: 6,
3: 4,
4: 5,
5: 5,
6: 5,
7: 6,
8: 7,
9: 4,
10: 2,
11: 5,
12: 6,
13: 4
}
# specific shift requests by employees for days
shift_requests =\
[
('A',0),
('B',5),
('C',8),
('D',2),
('E',9),
('F',5),
('G',1),
('H',7),
('A',3),
('B',4),
('C',4),
('D',9),
('F',1),
('F',2),
('F',3),
('F',5),
('F',7),
('H',13)
]
Explanation: Employee scheduling
pyschedule can be used for employee scheduling. The following example is motivated by instances from:
http://www.cs.nott.ac.uk/~tec/NRP/#new_instances
We simplified these instances a little bit for the sake of exposition. First load some instance:
End of explanation
from pyschedule import Scenario, solvers, plotters, alt
# Create the employee scheduling scenario
S = Scenario('employee_scheduling',horizon=n_days)
# Create employees as resources indexed by name
employees = { name : S.Resource(name) for name in employee_names }
# Create shifts as tasks
shifts = { (day,i) : S.Task('S_%s_%s'%(str(day),str(i)))
for day in shift_requirements if day in days
for i in range(shift_requirements[day]) }
# distribute shifts to days
for day,i in shifts:
# Assign shift to its day
S += shifts[day,i] >= day
# The shifts on each day are interchangeable, so add them to the same group
shifts[day,i].group = day
# Weekend shifts get attribute week_end
if day % 7 in {5,6}:
shifts[day,i].week_end = 1
# There are no restrictions, any shift can be done by any employee
for day,i in shifts:
shifts[day,i] += alt( S.resources() )
# Capacity restrictions
for name in employees:
# Maximal number of shifts
S += employees[name] <= max_work
# Minimal number of shifts
S += employees[name] >= min_work
# Maximal number of weekend shifts using attribute week_end
S += employees[name]['week_end'] <= max_weekend
# Max number of consecutive shifts
for name in employees:
for day in range(n_days):
S += employees[name][day:day+max_seq+1] <= max_seq
# Min sequence without gaps
for name in employees:
# No increase in last periods
S += employees[name][n_days-min_seq:].inc <= 0
# No decrease in first periods
S += employees[name][:min_seq].dec <= 0
# No diff during time horizon
for day in days[:-min_seq]:
S += employees[name][day:day+min_seq+1].diff <= 1
# Solve and plot scenario
if solvers.mip.solve(S,kind='CBC',msg=1,random_seed=6):
%matplotlib inline
plotters.matplotlib.plot(S,fig_size=(12,5))
else:
print('no solution found')
Explanation: Solving without shift requests
First build the scenario without any shift requests:
End of explanation
import random
import time
time_limit = 10 # time limit for each run
repeats = 5 # repeated random runs because CBC might get stuck
# Iteratively add shift requests until no solution exists
for name,day in shift_requests:
S += employees[name][day] >= 1
for i in range(repeats):
random_seed = random.randint(0,10000)
start_time = time.time()
status = solvers.mip.solve(S,kind='CBC',time_limit=time_limit,
random_seed=random_seed,msg=0)
# Break when solution found
if status:
break
print(name,day,'compute time:', time.time()-start_time)
# Break if all computed solution runs fail
if not status:
S -= employees[name][day] >= 1
print('cant fit last shift request')
# Plot the last computed solution
%matplotlib inline
plotters.matplotlib.plot(S,fig_size=(12,5))
Explanation: Solving with shift requests
To include shift requests, we use the following heuristic: iteratively add requests in the given order. If one shift request does not fit, remove it again and proceed with the next one. In case CBC gets stuck, repeat each computation several times with a random seed:
End of explanation |
4,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.1 Reading data from a csv file
You can read data from a CSV file using the read_csv function. By default, it assumes that the fields are comma-separated.
We're going to be looking some cyclist data from Montréal. Here's the original page (in French), but it's already included in this repository. We're using the data from 2012.
This dataset is a list of how many people were on 7 different bike paths in Montreal, each day.
Step1: You'll notice that this is totally broken! read_csv has a bunch of options that will let us fix that, though. Here we'll
change the column separator to a ;
Set the encoding to 'latin1' (the default is 'utf8')
Parse the dates in the 'Date' column
Tell it that our dates have the date first instead of the month first
Set the index to be the 'Date' column
Step2: 1.2 Selecting a column
When you read a CSV, you get a kind of object called a DataFrame, which is made up of rows and columns. You get columns out of a DataFrame the same way you get elements out of a dictionary.
Here's an example
Step3: 1.3 Plotting a column
Just add .plot() to the end! How could it be easier? =)
We can see that, unsurprisingly, not many people are biking in January, February, and March,
Step4: We can also plot all the columns just as easily. We'll make it a little bigger, too.
You can see that it's more squished together, but all the bike paths behave basically the same -- if it's a bad day for cyclists, it's a bad day everywhere.
Step5: 1.4 Putting all that together
Here's the code we needed to write do draw that graph, all together | Python Code:
import pandas as pd

broken_df = pd.read_csv('../data/bikes.csv')
# Look at the first 3 rows
broken_df[:3]
Explanation: 1.1 Reading data from a csv file
You can read data from a CSV file using the read_csv function. By default, it assumes that the fields are comma-separated.
We're going to be looking at some cyclist data from Montréal. Here's the original page (in French), but it's already included in this repository. We're using the data from 2012.
This dataset is a list of how many people were on 7 different bike paths in Montreal, each day.
End of explanation
fixed_df = pd.read_csv('../data/bikes.csv', sep=';', encoding='latin1', parse_dates=['Date'], dayfirst=True, index_col='Date')
fixed_df[:3]
Explanation: You'll notice that this is totally broken! read_csv has a bunch of options that will let us fix that, though. Here we'll
change the column separator to a ;
Set the encoding to 'latin1' (the default is 'utf8')
Parse the dates in the 'Date' column
Tell it that our dates have the date first instead of the month first
Set the index to be the 'Date' column
End of explanation
fixed_df['Berri 1']
Explanation: 1.2 Selecting a column
When you read a CSV, you get a kind of object called a DataFrame, which is made up of rows and columns. You get columns out of a DataFrame the same way you get elements out of a dictionary.
Here's an example:
End of explanation
fixed_df['Berri 1'].plot()
Explanation: 1.3 Plotting a column
Just add .plot() to the end! How could it be easier? =)
We can see that, unsurprisingly, not many people are biking in January, February, and March.
End of explanation
fixed_df.plot(figsize=(15, 10))
Explanation: We can also plot all the columns just as easily. We'll make it a little bigger, too.
You can see that it's more squished together, but all the bike paths behave basically the same -- if it's a bad day for cyclists, it's a bad day everywhere.
End of explanation
df = pd.read_csv('../data/bikes.csv', sep=';', encoding='latin1', parse_dates=['Date'], dayfirst=True, index_col='Date')
df['Berri 1'].plot()
Explanation: 1.4 Putting all that together
Here's the code we needed to write to draw that graph, all together:
End of explanation |
4,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Abstract
This paper introduces PyEDA, a Python library for electronic design automation (EDA). PyEDA provides both a high level interface to the representation of Boolean functions,
and blazingly-fast C extensions for fundamental algorithms where performance is essential.
PyEDA is a hobby project which has the simple but audacious goal of improving the state of digital design by using Python.
Introduction
Chip design and verification is a complicated undertaking.
You must assemble a large team of engineers with many different specialties
Step1: By overloading Python's logical operators,
you can build expression algebraically
Step2: Use methods from the Function base class to explore the function's
basic properties
Step3: There are also several factory functions that offer more power than Python's
built-in binary operators.
For example, operators such as Or, And, and Xor allow you to
construct N-ary expressions
Step4: Also, functions such as OneHot, and Majority
implement powerful, higher order functions
Step5: Simplification
The laws of Boolean Algebra can be used to simplify expressions.
For example, this table enumerates a partial list of Boolean identities
for the Or and And operators.
| Name | OR | AND |
|
Step6: Performing simplification can dramatically reduce the size and depth of
your logic expressions.
Transformation
PyEDA also supports a growing list of expression transformations.
Since expressions are not a canonical form,
transformations can help explore tradeoffs in time and space,
as well as convert an expression to a form suitable for a particular algorithm.
For example,
in addition to the primary operators Not, Or, and And,
expressions also natively support the secondary Xor, Equal,
Implies, and ITE (if-then-else) operators.
By transforming all secondary operators into primary operators,
and pushing all Not operators down towards the leaf nodes,
you arrive at what is known as "negation normal form".
Step7: Currently, expressions also support conversion to the following forms
Step8: Expression Parsing
The expr function is a factory function that attempts to transform any
input into a logic expression.
It does the obvious thing when converting inputs that look like Boolean values
Step9: But it also implements a full top-down parser of expressions.
For example
Step10: See the documentation
for a complete list of supported operators accepted by the expr function.
Satisfiability
One of the most interesting questions in computer science is whether a given
Boolean function is satisfiable, or SAT.
That is, for a given function $F$,
is there a set of input assignments that will produce an output of $1$?
PyEDA Boolean functions implement two functions for this purpose,
satisfy_one, and satisfy_all.
The former answers the question in a yes/no fashion,
returning a satisfying input point if the function is satisfiable,
and None otherwise.
The latter returns a generator that will iterate through all satisfying
input points.
SAT has all kinds of applications in both digital design and verification.
In digital design, it can be used in equivalence checking,
test pattern generation, model checking, formal verification,
and constrained-random verification, among others.
SAT finds its way into other areas as well.
For example, modern package management systems such as apt and yum
might use SAT to guarantee that certain dependencies are satisfied
for a given configuration.
The pyeda.boolalg.picosat module provides an interface to the modern
SAT solver PicoSAT.
When a logic expression is in conjunctive normal form (CNF),
calling the satisfy_* methods will invoke PicoSAT transparently.
For example
Step11: When an expression is not a CNF,
PyEDA will resort to a standard, backtracking algorithm.
The worst-case performance of this implementation is exponential,
but is acceptable for many real-world scenarios.
Tseitin Transformation
The worst case memory consumption when converting to CNF is exponential.
This is due to the fact that distribution of $M$ Or clauses over
$N$ And clauses (or vice-versa) requires $M \times N$ clauses.
Step12: Logic expressions support the tseitin method,
which perform's Tseitin's transformation on the input expression.
For more information about this transformation, see (ref needed).
The Tseitin transformation does not produce an equivalent expression,
but rather an equisatisfiable CNF,
with the addition of auxiliary variables.
The important feature is that it can convert any expression into a CNF,
which can be solved using PicoSAT.
Step13: You can safely discard the aux variables to get the solution
Step14: Truth Tables
The most straightforward way to represent a Boolean function is to simply
enumerate all possible mappings from input assignment to output values.
This is known as a truth table,
It is implemented as a packed list,
where the index of the output value corresponds to the assignment of the
input variables.
The nature of this data structure implies an exponential size.
For $N$ input variables, the table will be size $2^N$.
It is therefore mostly useful for manual definition and inspection of
functions of reasonable size.
To construct a truth table from scratch,
use the truthtable factory function.
For example, to represent the And function
Step15: You can also convert expressions to truth tables using the expr2truthtable function
Step16: Partial Definitions
Another use for truth tables is the representation of partially defined functions.
Logic expressions and binary decision diagrams are completely defined,
meaning that their implementation imposes a complete mapping from all points
in the domain to ${0, 1}$.
Truth tables allow you to specify some function outputs as "don't care".
You can accomplish this by using either "-" or "X" with the truthtable function.
For example, a seven segment display is used to display decimal numbers.
The codes "0000" through "1001" are used for 0-9,
but codes "1010" through "1111" are not important, and therefore can be
labeled as "don't care".
Step17: To convert a table to a two-level,
disjunctive normal form (DNF) expression,
use the truthtable2expr function
Step18: Two-Level Logic Minimization
When choosing a physical implementation for a Boolean function,
the size of the logic network is proportional to its cost,
in terms of area and power.
Therefore it is desirable to reduce the size of that network.
Logic minimization of two-level forms is an NP-complete problem.
It is equivalent to finding a minimal-cost set of subsets of a
set $S$ that covers $S$.
This is sometimes called the "paving problem",
because it is conceptually similar to finding the cheapest configuration of
tiles that cover a floor.
Due to the complexity of this operation,
PyEDA uses a C extension to the Berkeley Espresso library.
After calling the espresso_tts function on the F1 and F2
truth tables from above,
observe how much smaller (and therefore cheaper) the resulting DNF expression is
Step19: Binary Decision Diagrams
A binary decision diagram is a directed acyclic graph used to represent a
Boolean function.
They were originally introduced by Lee,
and later by Akers.
In 1986, Randal Bryant introduced the reduced, ordered BDD (ROBDD).
The ROBDD is a canonical form,
which means that given an identical ordering of input variables,
equivalent Boolean functions will always reduce to the same ROBDD.
This is a desirable property for determining formal equivalence.
Also, it means that unsatisfiable functions will be reduced to zero,
making SAT/UNSAT calculations trivial.
Due to these auspicious properties,
the term BDD almost always refers to some minor variation of the ROBDD
devised by Bryant.
The downside of BDDs is that certain functions,
no matter how cleverly you order their input variables,
will result in an exponentially-sized graph data structure.
Construction
Like logic expressions,
you can construct a BDD by starting with symbolic variables
and combining them with operators.
For example
Step20: The expr2bdd function can also be used to convert any expression into
an equivalent BDD
Step21: Equivalence
As we mentioned before,
BDDs are a canonical form.
This makes checking for SAT, UNSAT, and formal equivalence trivial.
Step22: PyEDA's BDD implementation uses a unique table,
so F and G from the previous example are actually just two different
names for the same object.
Visualization
Like expressions,
binary decision diagrams also support a to_dot() method,
which can be used to convert the graph structure to DOT format
for consumption by Graphviz.
For example, this figure shows the Graphviz output on the
majority function in three variables
Step23: Function Arrays
When dealing with several related Boolean functions,
it is usually convenient to index the inputs and outputs.
For this purpose, PyEDA includes a multi-dimensional array (MDA) data type,
called an farray (function array).
The most pervasive example is computation involving any numeric data type.
If these numbers are 32-bit integers, there are 64 total inputs,
not including a carry-in.
The conventional way of labeling the input variables is
$a_0, a_1, \ldots, a_{31}$, and $b_0, b_1, \ldots, b_{31}$.
Furthermore, you can extend the symbolic algebra of Boolean functions to arrays.
For example, the element-wise XOR of A and B is also an array.
In this section, we will briefly discuss farray construction,
slicing operations, and algebraic operators.
Function arrays can be constructed using any Function implementation,
but for simplicity we will restrict the discussion to logic expressions.
Construction
The farray constructor can be used to create an array of arbitrary expressions.
Step24: As you can see, this produces a one-dimensional array of size 4.
The shape of the previous array uses Python's conventional,
exclusive indexing scheme in one dimension.
The farray constructor also supports multi-dimensional arrays
Step25: Though arrays can be constructed from arbitrary functions in arbitrary shapes,
it is far more useful to start with arrays of variables and constants,
and build more complex arrays from them using operators.
To construct arrays of expression variables,
use the exprvars factory function
Step26: Use the uint2exprs and int2exprs function to convert integers to their
binary encoding in unsigned, and twos-complement, respectively.
Step27: Note that the bits are in order from LSB to MSB,
so the conventional bitstring representation of $-42$ in eight bits
would be "11010110".
Slicing
PyEDA's function arrays support numpy-style slicing operators
Step28: A special feature of PyEDA farray slicing that is useful for digital logic
is the ability to multiplex (mux) array items over a select input.
For example, to create a simple, 4
Step29: Algebraic Operations
Function arrays are algebraic data types,
which support the following symbolic operators | Python Code:
a, b, c, d = map(exprvar, 'abcd')
Explanation: Abstract
This paper introduces PyEDA, a Python library for electronic design automation (EDA). PyEDA provides both a high level interface to the representation of Boolean functions,
and blazingly-fast C extensions for fundamental algorithms where performance is essential.
PyEDA is a hobby project which has the simple but audacious goal of improving the state of digital design by using Python.
Introduction
Chip design and verification is a complicated undertaking.
You must assemble a large team of engineers with many different specialties:
front-end design entry, logic verification, power optimization, synthesis,
place and route, physical verification, and so on.
Unfortunately, the tools, languages,
and work flows offered by the electronic design automation (EDA) industry are,
in this author's opinion, largely a pit of despair.
The languages most familiar to chip design and verification engineers are
Verilog (now SystemVerilog), C/C++, TCL, and Perl.
Flows are patched together from several proprietary tools with incompatible
data representations.
Even with Python's strength in scientific computing,
it has largely failed to penetrate this space.
In short, EDA needs more Python!
This paper surveys some of the features and applications of
PyEDA,
a Python library for electronic design automation.
PyEDA provides both a high level interface to the representation of Boolean functions,
and blazingly-fast C extensions for fundamental algorithms where
performance is essential.
PyEDA is a hobby project,
but in the past year it has seen some interesting adoption from
University students.
For example,
students at Vanderbilt University used it to model system reliability,
and students at Saarland University used it as part of a fast DQBF Refutation tool.
Even though the name "PyEDA" implies that the library is specific to EDA,
it is actually general in nature.
Some of the techniques used for designing and verifying digital logic are
fundamental to computer science.
For example, we will discuss applications of Boolean satisfiability (SAT),
the definitive NP-complete problem.
PyEDA's repository is hosted at https://github.com/cjdrake/pyeda,
and its documentation is hosted at http://pyeda.rtfd.org.
Boolean Variables and Functions
At its core, PyEDA provides a powerful API for creating and
manipulating Boolean functions.
First, let us provide the standard definitions.
A Boolean variable is an abstract numerical quantity that can take any
value in the set ${0, 1}$.
A Boolean function is a rule that maps points in an $N$-dimensional
Boolean space to an element in ${0, 1}$.
Formally, $f: B^N \Rightarrow B$,
where $B^N$ means the Cartesian product of $N$ sets of type ${0, 1}$.
For example, if you have three input variables, $a, b, c$,
each defined on ${0, 1}$,
then $B^3 = \{0, 1\}^3 = \{(0, 0, 0), (0, 0, 1), \ldots, (1, 1, 1)\}$.
$B^3$ is the domain of the function (the input part),
and $B = {0, 1}$ is the range of the function (the output part).
The set of all input variables a function depends on is called its support.
There are several ways to represent a Boolean function,
and different data structures have different tradeoffs.
In the following sections,
we will give a brief overview of PyEDA's API for logic expressions,
truth tables, and binary decision diagrams.
In addition,
we will provide implementation notes for several useful applications.
Logic Expressions
Logic expressions are a powerful and flexible way to represent Boolean functions.
They are implemented as a graph,
with atoms at the branches, and operators at the leaves.
Atomic elements are literals (variables and complemented variables),
and constants (zero and one).
The supported algebraic operators are Not, Or, And, Xor,
Equal, Implies, and ITE (if-then-else).
For general purpose use,
symbolic logic expressions are PyEDA's central data type.
Since release 0.27,
they have been implemented using a high performance C library.
Expressions are fast, and reasonably compact.
On the other hand, they are generally not canonical,
and determining expression equivalence is NP-complete.
Conversion to a canonical expression form can result in exponential size.
Construction
To construct a logic expression, first start by defining some symbolic
variables of type Expression:
End of explanation
F = a | ~b & c ^ ~d
Explanation: By overloading Python's logical operators,
you can build expressions algebraically:
End of explanation
F.support
list (F.iter_relation())
Explanation: Use methods from the Function base class to explore the function's
basic properties:
End of explanation
a ^ b ^ c
Xor(a, b, c)
Explanation: There are also several factory functions that offer more power than Python's
built-in binary operators.
For example, operators such as Or, And, and Xor allow you to
construct N-ary expressions:
End of explanation
OneHot(a, b, c)
Majority(a, b, c)
Explanation: Also, functions such as OneHot, and Majority
implement powerful, higher order functions:
End of explanation
F = ~a | a
F
F.simplify()
Xor(a, ~b, Xnor(~a, b), c)
Explanation: Simplification
The laws of Boolean Algebra can be used to simplify expressions.
For example, this table enumerates a partial list of Boolean identities
for the Or and And operators.
| Name | OR | AND |
|:-------------:|:---------------:|:-----------------------:|
| Commutativity | $x + y = y + x$ | $x \cdot y = y \cdot x$ |
| Associativity | $x + (y + z) = (x + y) + z$ | $x \cdot (y \cdot z) = (x \cdot y) \cdot z$ |
| Identity | $x + 0 = x$ | $x \cdot 1 = x$ |
| Domination | $x + 1 = 1$ | $x \cdot 0 = 0$ |
| Idempotence | $x + x = x$ | $x \cdot x = x$ |
| Inverse | $x + x' = 1$ | $x \cdot x' = 0$ |
Most laws are computationally easy to apply.
PyEDA allows you to construct unsimplified Boolean expressions,
and provides the simplify method to perform such inexpensive
transformations.
For example:
End of explanation
F = Xor(a >> b, c.eq(d))
F.to_nnf()
Explanation: Performing simplification can dramatically reduce the size and depth of
your logic expressions.
Transformation
PyEDA also supports a growing list of expression transformations.
Since expressions are not a canonical form,
transformations can help explore tradeoffs in time and space,
as well as convert an expression to a form suitable for a particular algorithm.
For example,
in addition to the primary operators Not, Or, and And,
expressions also natively support the secondary Xor, Equal,
Implies, and ITE (if-then-else) operators.
By transforming all secondary operators into primary operators,
and pushing all Not operators down towards the leaf nodes,
you arrive at what is known as "negation normal form".
End of explanation
F = Majority(a, b, c, d)
%dotobj F
Explanation: Currently, expressions also support conversion to the following forms:
Binary operator (only two args per Or, And, etc)
Disjunctive Normal Form (DNF)
Conjunctive Normal Form (CNF)
DNF and CNF expressions are "two-level" forms.
That is, the entire expression is either an Or of And clauses (DNF),
or an And of Or clauses (CNF).
DNF expressions are also called "covers",
and are important in both two-level and multi-level logic minimization.
CNF expressions play an important role in satisfiability.
We will briefly cover both of these topics in subsequent sections.
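For instance, a quick illustrative round trip through both two-level forms uses the to_dnf and to_cnf methods:
Xor(a, b).to_dnf()
Xor(a, b).to_cnf()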
Visualization
Boolean expressions support a to_dot() method,
which can be used to convert the graph structure to DOT format
for consumption by Graphviz.
For example, this figure shows the Graphviz output on the
majority function in four variables:
End of explanation
expr(False)
expr(1)
expr("0")
Explanation: Expression Parsing
The expr function is a factory function that attempts to transform any
input into a logic expression.
It does the obvious thing when converting inputs that look like Boolean values:
End of explanation
expr("a | b ^ c & d")
expr("s ? x[0] ^ x[1] : y[0] <=> y[1]")
expr("a[0,1] & a[1,0] => y[0,1] | y[1,0]")
Explanation: But it also implements a full top-down parser of expressions.
For example:
End of explanation
F = OneHot(a, b, c)
F.is_cnf()
F.satisfy_one()
list(F.satisfy_all())
Explanation: See the documentation
for a complete list of supported operators accepted by the expr function.
Satisfiability
One of the most interesting questions in computer science is whether a given
Boolean function is satisfiable, or SAT.
That is, for a given function $F$,
is there a set of input assignments that will produce an output of $1$?
PyEDA Boolean functions implement two functions for this purpose,
satisfy_one, and satisfy_all.
The former answers the question in a yes/no fashion,
returning a satisfying input point if the function is satisfiable,
and None otherwise.
The latter returns a generator that will iterate through all satisfying
input points.
SAT has all kinds of applications in both digital design and verification.
In digital design, it can be used in equivalence checking,
test pattern generation, model checking, formal verification,
and constrained-random verification, among others.
SAT finds its way into other areas as well.
For example, modern package management systems such as apt and yum
might use SAT to guarantee that certain dependencies are satisfied
for a given configuration.
The pyeda.boolalg.picosat module provides an interface to the modern
SAT solver PicoSAT.
When a logic expression is in conjunctive normal form (CNF),
calling the satisfy_* methods will invoke PicoSAT transparently.
For example:
End of explanation
Or(And(a, b), And(c, d)).to_cnf()
Explanation: When an expression is not a CNF,
PyEDA will resort to a standard, backtracking algorithm.
The worst-case performance of this implementation is exponential,
but is acceptable for many real-world scenarios.
Tseitin Transformation
The worst case memory consumption when converting to CNF is exponential.
This is due to the fact that distribution of $M$ Or clauses over
$N$ And clauses (or vice-versa) requires $M \times N$ clauses.
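A small example makes the growth visible (illustrative only):
xs = exprvars('x', 8)
f = Or(*[And(xs[2*i], xs[2*i + 1]) for i in range(4)])
f.to_cnf()  # an And of 2**4 = 16 Or clauses, produced from only 4 two-literal And terms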
End of explanation
F = Xor(a, b, c, d)
soln = F.tseitin().satisfy_one()
soln
Explanation: Logic expressions support the tseitin method,
which perform's Tseitin's transformation on the input expression.
For more information about this transformation, see (ref needed).
The Tseitin transformation does not produce an equivalent expression,
but rather an equisatisfiable CNF,
with the addition of auxiliary variables.
The important feature is that it can convert any expression into a CNF,
which can be solved using PicoSAT.
End of explanation
{k: v for k, v in soln.items() if k.name != 'aux'}
Explanation: You can safely discard the aux variables to get the solution:
End of explanation
truthtable([a, b], [False, False, False, True])
# This also works
truthtable([a, b], "0001")
Explanation: Truth Tables
The most straightforward way to represent a Boolean function is to simply
enumerate all possible mappings from input assignment to output values.
This is known as a truth table,
It is implemented as a packed list,
where the index of the output value corresponds to the assignment of the
input variables.
The nature of this data structure implies an exponential size.
For $N$ input variables, the table will be size $2^N$.
It is therefore mostly useful for manual definition and inspection of
functions of reasonable size.
To construct a truth table from scratch,
use the truthtable factory function.
For example, to represent the And function:
End of explanation
expr2truthtable(OneHot0(a, b, c))
Explanation: You can also convert expressions to truth tables using the expr2truthtable function:
End of explanation
X = ttvars('x', 4)
F1 = truthtable(X, "0000011111------")
F2 = truthtable(X, "0001111100------")
Explanation: Partial Definitions
Another use for truth tables is the representation of partially defined functions.
Logic expressions and binary decision diagrams are completely defined,
meaning that their implementation imposes a complete mapping from all points
in the domain to ${0, 1}$.
Truth tables allow you to specify some function outputs as "don't care".
You can accomplish this by using either "-" or "X" with the truthtable function.
For example, a seven segment display is used to display decimal numbers.
The codes "0000" through "1001" are used for 0-9,
but codes "1010" through "1111" are not important, and therefore can be
labeled as "don't care".
End of explanation
truthtable2expr(F1)
Explanation: To convert a table to a two-level,
disjunctive normal form (DNF) expression,
use the truthtable2expr function:
End of explanation
F1M, F2M = espresso_tts(F1, F2)
F1M
F2M
Explanation: Two-Level Logic Minimization
When choosing a physical implementation for a Boolean function,
the size of the logic network is proportional to its cost,
in terms of area and power.
Therefore it is desirable to reduce the size of that network.
Logic minimization of two-level forms is an NP-complete problem.
It is equivalent to finding a minimal-cost set of subsets of a
set $S$ that covers $S$.
This is sometimes called the "paving problem",
because it is conceptually similar to finding the cheapest configuration of
tiles that cover a floor.
Due to the complexity of this operation,
PyEDA uses a C extension to the Berkeley Espresso library.
After calling the espresso_tts function on the F1 and F2
truth tables from above,
observe how much smaller (and therefore cheaper) the resulting DNF expression is:
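As an aside, when starting from logic expressions rather than truth tables, pyeda also exposes an espresso_exprs function; a possible usage sketch (variable names here are illustrative, and the input must first be converted to DNF):
f = Xor(a, b, c)
fm, = espresso_exprs(f.to_dnf())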
End of explanation
a, b, c = map(bddvar, 'abc')
F = a & b & c
F.support
F.restrict({a: 1, b: 1})
F & 0
Explanation: Binary Decision Diagrams
A binary decision diagram is a directed acyclic graph used to represent a
Boolean function.
They were originally introduced by Lee,
and later by Akers.
In 1986, Randal Bryant introduced the reduced, ordered BDD (ROBDD).
The ROBDD is a canonical form,
which means that given an identical ordering of input variables,
equivalent Boolean functions will always reduce to the same ROBDD.
This is a desirable property for determining formal equivalence.
Also, it means that unsatisfiable functions will be reduced to zero,
making SAT/UNSAT calculations trivial.
Due to these auspicious properties,
the term BDD almost always refers to some minor variation of the ROBDD
devised by Bryant.
The downside of BDDs is that certain functions,
no matter how cleverly you order their input variables,
will result in an exponentially-sized graph data structure.
Construction
Like logic expressions,
you can construct a BDD by starting with symbolic variables
and combining them with operators.
For example:
End of explanation
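Once the graph is built, the SAT queries mentioned above are cheap; a sketch using the bddvars from the cell above (satisfy_one / satisfy_all are the generic PyEDA Function methods):
F.satisfy_one()            # e.g. {a: 1, b: 1, c: 1}
(~a & a).satisfy_one()     # None -- an unsatisfiable function reduces to zero
list(F.satisfy_all())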
expr2bdd(expr("(s ? d1 : d0) <=> (s & d1 | ~s & d0)"))
Explanation: The expr2bdd function can also be used to convert any expression into
an equivalent BDD:
End of explanation
~a & a
~a & ~b | ~a & b | a & ~b | a & b
F = a ^ b
G = ~a & b | a & ~b
F.equivalent(G)
F is G
Explanation: Equivalence
As we mentioned before,
BDDs are a canonical form.
This makes checking for SAT, UNSAT, and formal equivalence trivial.
End of explanation
%dotobj expr2bdd(expr("Majority(a, b, c)"))
Explanation: PyEDA's BDD implementation uses a unique table,
so F and G from the previous example are actually just two different
names for the same object.
Visualization
Like expressions,
binary decision diagrams also support a to_dot() method,
which can be used to convert the graph structure to DOT format
for consumption by Graphviz.
For example, this figure shows the Graphviz output on the
majority function in three variables:
End of explanation
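A sketch of the DOT hand-off described above (to_dot() returns a plain string; the shell command is the usual Graphviz invocation):
f = expr2bdd(expr("Majority(a, b, c)"))
with open('majority.dot', 'w') as fout:
    fout.write(f.to_dot())
# then, outside Python:  dot -Tpng -o majority.png majority.dot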
a, b, c, d = map(exprvar, 'abcd')
F = farray([a, b, And(a, c), Or(b, d)])
F.ndim
F.size
F.shape
Explanation: Function Arrays
When dealing with several related Boolean functions,
it is usually convenient to index the inputs and outputs.
For this purpose, PyEDA includes a multi-dimensional array (MDA) data type,
called an farray (function array).
The most pervasive example is computation involving any numeric data type.
If these numbers are 32-bit integers, there are 64 total inputs,
not including a carry-in.
The conventional way of labeling the input variables is
$a_0, a_1, \ldots, a_{31}$, and $b_0, b_1, \ldots, b_{31}$.
Furthermore, you can extend the symbolic algebra of Boolean functions to arrays.
For example, the element-wise XOR of A and B is also an array.
In this section, we will briefly discuss farray construction,
slicing operations, and algebraic operators.
Function arrays can be constructed using any Function implementation,
but for simplicity we will restrict the discussion to logic expressions.
Construction
The farray constructor can be used to create an array of arbitrary expressions.
End of explanation
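A small sketch of the earlier claim that element-wise operators also return arrays (exprvars is the factory introduced just below; the names p and q are only for illustration):
P = exprvars('p', 4)
Q = exprvars('q', 4)
R = P ^ Q          # an farray of four Xor expressions
R.size, R[0]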
G = farray([ [a, b],
[And(a, c), Or(b, d)],
[Xor(b, c), Equal(c, d)] ])
G.ndim
G.size
G.shape
Explanation: As you can see, this produces a one-dimensional array of size 4.
The shape of the previous array uses Python's conventional,
exclusive indexing scheme in one dimension.
The farray constructor also supports multi-dimensional arrays:
End of explanation
xs = exprvars('x', 8)
xs
ys = exprvars('y', 4, 4)
ys
Explanation: Though arrays can be constructed from arbitrary functions in arbitrary shapes,
it is far more useful to start with arrays of variables and constants,
and build more complex arrays from them using operators.
To construct arrays of expression variables,
use the exprvars factory function:
End of explanation
uint2exprs(42, 8)
int2exprs(-42, 8)
Explanation: Use the uint2exprs and int2exprs function to convert integers to their
binary encoding in unsigned, and twos-complement, respectively.
End of explanation
xs = exprvars('x', 4, 4, 4)
xs[1,2,3]
xs[2,:,2]
xs[...,1]
Explanation: Note that the bits are in order from LSB to MSB,
so the conventional bitstring representation of $-42$ in eight bits
would be "11010110".
Slicing
PyEDA's function arrays support numpy-style slicing operators:
End of explanation
X = exprvars('x', 4)
S = exprvars('s', 2)
X[S].simplify()
Explanation: A special feature of PyEDA farray slicing that is useful for digital logic
is the ability to multiplex (mux) array items over a select input.
For example, to create a simple, 4:1 mux:
End of explanation
from pyeda.logic.addition import kogge_stone_add
A = exprvars('a', 8)
B = exprvars('b', 8)
S, C = kogge_stone_add(A, B)
S.vrestrict({A: "01000000", B: "01000000"})
Explanation: Algebraic Operations
Function arrays are algebraic data types,
which support the following symbolic operators:
unary reductions (uor, uand, uxor, ...)
bitwise logic (~ | & ^)
shifts (<< >>)
concatenation (+)
repetition (*)
Combining function and array operators allows us to implement a reasonably
complete domain-specific language (DSL) for symbolic Boolean algebra in Python.
Consider, for example, the implementation of the xtime function,
which is an integral part of the AES algorithm.
The Verilog implementation, as a function:
verilog
function automatic logic [7:0]
xtime(logic [7:0] b, int n);
xtime = b;
for (int i = 0; i < n; i++)
xtime = {xtime[6:0], 1'b0}
^ (8'h1b & {8{xtime[7]}});
endfunction
And the PyEDA implementation:
python
def xtime(b, n):
for _ in range(n):
b = (exprzeros(1) + b[:7]
^ uint2exprs(0x1b, 8) & b[7]*8)
return b
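A quick usage sketch for the Python version above (symbolic input; exprvars is the same helper used elsewhere in this section):
B = exprvars('b', 8)   # a symbolic byte
Y = xtime(B, 3)        # still an 8-wide farray of expressions over b[0..7]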
Practical Applications
Arrays of functions have many practical applications.
For example,
the pyeda.logic.addition module contains implementations of
ripple-carry, brent-kung, and kogge-stone addition logic.
Here is the digital logic implementation of $2 + 2 = 4$:
End of explanation |
4,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing Evoked data
This tutorial shows the different visualization methods for
Step1: Instead of creating the ~mne.Evoked object from an ~mne.Epochs object,
we'll load an existing ~mne.Evoked object from disk. Remember, the
Step2: To make our life easier, let's convert that list of
Step3: Plotting signal traces
.. sidebar
Step4: Notice the completely flat EEG channel and the noisy gradiometer channel
plotted in red color. Like many MNE-Python plotting functions,
Step5: Plotting scalp topographies
In an interactive session, the butterfly plots seen above can be
click-dragged to select a time region, which will pop up a map of the average
field distribution over the scalp for the selected time span. You can also
generate scalp topographies at specific times or time spans using the
Step6: Additional examples of plotting scalp topographies can be found in
ex-evoked-topomap.
Arrow maps
Scalp topographies at a given time point can be augmented with arrows to show
the estimated magnitude and direction of the magnetic field, using the
function
Step7: Joint plots
Joint plots combine butterfly plots with scalp topographies, and provide an
excellent first-look at evoked data; by default, topographies will be
automatically placed based on peak finding. Here we plot the
right-visual-field condition; if no picks are specified we get a separate
figure for each channel type
Step8: Like
Step9: One nice feature of
Step10: Image plots
Like
Step11: Topographical subplots
For sensor-level analyses it can be useful to plot the response at each
sensor in a topographical layout. The
Step12: For larger numbers of sensors, the method
Step13: By default,
Step14: By default, MEG sensors will be used to estimate the field on the helmet
surface, while EEG sensors will be used to estimate the field on the scalp.
Once the maps are computed, you can plot them with
Step15: You can also use MEG sensors to estimate the scalp field by passing
meg_surf='head'. By selecting each sensor type in turn, you can compare
the scalp field estimates from each. | Python Code:
import os
import numpy as np
import mne
Explanation: Visualizing Evoked data
This tutorial shows the different visualization methods for
:class:~mne.Evoked objects.
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, baseline=(None, 0),
proj=True, verbose=False)
# show the condition names
for e in evokeds_list:
print(e.comment)
Explanation: Instead of creating the ~mne.Evoked object from an ~mne.Epochs object,
we'll load an existing ~mne.Evoked object from disk. Remember, the
:file:.fif format can store multiple ~mne.Evoked objects, so we'll end up
with a list of ~mne.Evoked objects after loading. Recall also from the
tut-section-load-evk section of the introductory Evoked tutorial
<tut-evoked-class> that the sample ~mne.Evoked objects have not been
baseline-corrected and have unapplied projectors, so we'll take care of that
when loading:
End of explanation
conds = ('aud/left', 'aud/right', 'vis/left', 'vis/right')
evks = dict(zip(conds, evokeds_list))
# ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾ this is equivalent to:
# {'aud/left': evokeds_list[0], 'aud/right': evokeds_list[1],
# 'vis/left': evokeds_list[2], 'vis/right': evokeds_list[3]}
Explanation: To make our life easier, let's convert that list of :class:~mne.Evoked
objects into a :class:dictionary <dict>. We'll use /-separated
dictionary keys to encode the conditions (like is often done when epoching)
because some of the plotting methods can take advantage of that style of
coding.
End of explanation
evks['aud/left'].plot(exclude=[])
Explanation: Plotting signal traces
.. sidebar:: Butterfly plots
Plots of superimposed sensor timeseries are called "butterfly plots"
because the positive- and negative-going traces can resemble
butterfly wings.
The most basic plot of :class:~mne.Evoked objects is a butterfly plot of
each channel type, generated by the :meth:evoked.plot() <mne.Evoked.plot>
method. By default, channels marked as "bad" are suppressed, but you can
control this by passing an empty :class:list to the exclude parameter
(default is exclude='bads'):
End of explanation
evks['aud/left'].plot(picks='mag', spatial_colors=True, gfp=True)
Explanation: Notice the completely flat EEG channel and the noisy gradiometer channel
plotted in red color. Like many MNE-Python plotting functions,
:meth:evoked.plot() <mne.Evoked.plot> has a picks parameter that can
select channels to plot by name, index, or type. In the next plot we'll show
only magnetometer channels, and also color-code the channel traces by their
location by passing spatial_colors=True. Finally, we'll superimpose a
trace of the :term:global field power <GFP> across channels:
End of explanation
times = np.linspace(0.05, 0.13, 5)
evks['aud/left'].plot_topomap(ch_type='mag', times=times, colorbar=True)
fig = evks['aud/left'].plot_topomap(ch_type='mag', times=0.09, average=0.1)
fig.text(0.5, 0.05, 'average from 40-140 ms', ha='center')
Explanation: Plotting scalp topographies
In an interactive session, the butterfly plots seen above can be
click-dragged to select a time region, which will pop up a map of the average
field distribution over the scalp for the selected time span. You can also
generate scalp topographies at specific times or time spans using the
:meth:~mne.Evoked.plot_topomap method:
End of explanation
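As a hedged extra (method name assumed from the Evoked API in recent MNE versions), the same topographies can also be animated over a window instead of sampled at fixed instants:
fig, anim = evks['aud/left'].animate_topomap(
    ch_type='mag', times=np.linspace(0.05, 0.15, 20), frame_rate=10)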
mags = evks['aud/left'].copy().pick_types(meg='mag')
mne.viz.plot_arrowmap(mags.data[:, 175], mags.info, extrapolate='local')
Explanation: Additional examples of plotting scalp topographies can be found in
ex-evoked-topomap.
Arrow maps
Scalp topographies at a given time point can be augmented with arrows to show
the estimated magnitude and direction of the magnetic field, using the
function :func:mne.viz.plot_arrowmap:
End of explanation
evks['vis/right'].plot_joint()
Explanation: Joint plots
Joint plots combine butterfly plots with scalp topographies, and provide an
excellent first-look at evoked data; by default, topographies will be
automatically placed based on peak finding. Here we plot the
right-visual-field condition; if no picks are specified we get a separate
figure for each channel type:
End of explanation
def custom_func(x):
return x.max(axis=1)
for combine in ('mean', 'median', 'gfp', custom_func):
mne.viz.plot_compare_evokeds(evks, picks='eeg', combine=combine)
Explanation: Like :meth:~mne.Evoked.plot_topomap you can specify the times at which
you want the scalp topographies calculated, and you can customize the plot in
various other ways as well. See :meth:mne.Evoked.plot_joint for details.
Comparing Evoked objects
To compare :class:~mne.Evoked objects from different experimental
conditions, the function :func:mne.viz.plot_compare_evokeds can take a
:class:list or :class:dict of :class:~mne.Evoked objects and plot them
all on the same axes. Like most MNE-Python visualization functions, it has a
picks parameter for selecting channels, but by default will generate one
figure for each channel type, and combine information across channels of the
same type by calculating the :term:global field power <GFP>. Information
may be combined across channels in other ways too; support for combining via
mean, median, or standard deviation are built-in, and custom callable
functions may also be used, as shown here:
End of explanation
mne.viz.plot_compare_evokeds(evks, picks='MEG 1811', colors=dict(aud=0, vis=1),
linestyles=dict(left='solid', right='dashed'))
Explanation: One nice feature of :func:~mne.viz.plot_compare_evokeds is that when
passing evokeds in a dictionary, it allows specifying plot styles based on
/-separated substrings of the dictionary keys (similar to epoch
selection; see tut-section-subselect-epochs). Here, we specify colors
for "aud" and "vis" conditions, and linestyles for "left" and "right"
conditions, and the traces and legend are styled accordingly.
End of explanation
evks['vis/right'].plot_image(picks='meg')
Explanation: Image plots
Like :class:~mne.Epochs, :class:~mne.Evoked objects also have a
:meth:~mne.Evoked.plot_image method, but unlike :meth:epochs.plot_image()
<mne.Epochs.plot_image>, :meth:evoked.plot_image() <mne.Evoked.plot_image>
shows one channel per row instead of one epoch per row. Again, a
picks parameter is available, as well as several other customization
options; see :meth:~mne.Evoked.plot_image for details.
End of explanation
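A sketch of that customization (argument names assumed from the Evoked.plot_image docstring): restrict to one channel type and pick an explicit colormap.
evks['vis/right'].plot_image(picks='eeg', cmap='viridis')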
mne.viz.plot_compare_evokeds(evks, picks='eeg', colors=dict(aud=0, vis=1),
linestyles=dict(left='solid', right='dashed'),
axes='topo', styles=dict(aud=dict(linewidth=1),
vis=dict(linewidth=1)))
Explanation: Topographical subplots
For sensor-level analyses it can be useful to plot the response at each
sensor in a topographical layout. The :func:~mne.viz.plot_compare_evokeds
function can do this if you pass axes='topo', but it can be quite slow
if the number of sensors is too large, so here we'll plot only the EEG
channels:
End of explanation
mne.viz.plot_evoked_topo(evokeds_list)
Explanation: For larger numbers of sensors, the method :meth:evoked.plot_topo()
<mne.Evoked.plot_topo> and the function :func:mne.viz.plot_evoked_topo
can both be used. The :meth:~mne.Evoked.plot_topo method will plot only a
single condition, while the :func:~mne.viz.plot_evoked_topo function can
plot one or more conditions on the same axes, if passed a list of
:class:~mne.Evoked objects. The legend entries will be automatically drawn
from the :class:~mne.Evoked objects' comment attribute:
End of explanation
subjects_dir = os.path.join(sample_data_folder, 'subjects')
sample_data_trans_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
Explanation: By default, :func:~mne.viz.plot_evoked_topo will plot all MEG sensors (if
present), so to get EEG sensors you would need to modify the evoked objects
first (e.g., using :func:mne.pick_types).
Note: In interactive sessions, both approaches to topographical plotting allow
you to click one of the sensor subplots to pop open a larger version of
the evoked plot at that sensor.
3D Field Maps
The scalp topographies above were all projected into 2-dimensional overhead
views of the field, but it is also possible to plot field maps in 3D. To do
this requires a :term:trans file to transform locations between the
coordinate systems of the MEG device and the head surface (based on the MRI).
You can compute 3D field maps without a trans file, but it will only
work for calculating the field on the MEG helmet from the MEG sensors.
End of explanation
maps = mne.make_field_map(evks['aud/left'], trans=sample_data_trans_file,
subject='sample', subjects_dir=subjects_dir)
evks['aud/left'].plot_field(maps, time=0.1)
Explanation: By default, MEG sensors will be used to estimate the field on the helmet
surface, while EEG sensors will be used to estimate the field on the scalp.
Once the maps are computed, you can plot them with :meth:evoked.plot_field()
<mne.Evoked.plot_field>:
End of explanation
for ch_type in ('mag', 'grad', 'eeg'):
evk = evks['aud/right'].copy().pick(ch_type)
_map = mne.make_field_map(evk, trans=sample_data_trans_file,
subject='sample', subjects_dir=subjects_dir,
meg_surf='head')
fig = evk.plot_field(_map, time=0.1)
mne.viz.set_3d_title(fig, ch_type, size=20)
Explanation: You can also use MEG sensors to estimate the scalp field by passing
meg_surf='head'. By selecting each sensor type in turn, you can compare
the scalp field estimates from each.
End of explanation |
4,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactions and ANOVA
Note
Step1: Take a look at the data
Step2: Fit a linear model
Step3: Have a look at the created design matrix
Step4: Or since we initially passed in a DataFrame, we have a DataFrame available in
Step5: We keep a reference to the original untouched data in
Step6: Influence statistics
Step7: or get a dataframe
Step8: Now plot the residuals within the groups separately
Step9: Now we will test some interactions using anova or f_test
Step10: Do an ANOVA check
Step11: The design matrix as a DataFrame
Step12: The design matrix as an ndarray
Step13: Looks like one observation is an outlier.
Step14: Replot the residuals
Step15: Plot the fitted values
Step16: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
Step17: Minority Employment Data
Step18: One-way ANOVA
Step19: Two-way ANOVA
Step20: Explore the dataset
Step21: Balanced panel
Step22: Things available in the calling namespace are also available in the formula evaluation namespace
Step23: Sum of squares
Illustrates the use of different types of sums of squares (I, II, III)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Don't use Type III with non-orthogonal contrast - i.e., Treatment | Python Code:
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import urlopen
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import statsmodels.api as sm
import pandas as pd
pd.set_option("display.width", 100)
import matplotlib.pyplot as plt
from statsmodels.formula.api import ols
from statsmodels.graphics.api import interaction_plot, abline_plot
from statsmodels.stats.anova import anova_lm
try:
salary_table = pd.read_csv('salary.table')
except: # recent pandas can read URL without urlopen
url = 'http://stats191.stanford.edu/data/salary.table'
fh = urlopen(url)
salary_table = pd.read_table(fh)
salary_table.to_csv('salary.table')
E = salary_table.E
M = salary_table.M
X = salary_table.X
S = salary_table.S
Explanation: Interactions and ANOVA
Note: This script is based heavily on Jonathan Taylor's class notes http://www.stanford.edu/class/stats191/interactions.html
Download and format data:
End of explanation
plt.figure(figsize=(6,6))
symbols = ['D', '^']
colors = ['r', 'g', 'blue']
factor_groups = salary_table.groupby(['E','M'])
for values, group in factor_groups:
i,j = values
plt.scatter(group['X'], group['S'], marker=symbols[j], color=colors[i-1],
s=144)
plt.xlabel('Experience');
plt.ylabel('Salary');
Explanation: Take a look at the data:
End of explanation
formula = 'S ~ C(E) + C(M) + X'
lm = ols(formula, salary_table).fit()
print(lm.summary())
Explanation: Fit a linear model:
End of explanation
lm.model.exog[:5]
Explanation: Have a look at the created design matrix:
End of explanation
lm.model.data.orig_exog[:5]
Explanation: Or since we initially passed in a DataFrame, we have a DataFrame available in
End of explanation
lm.model.data.frame[:5]
Explanation: We keep a reference to the original untouched data in
End of explanation
infl = lm.get_influence()
print(infl.summary_table())
Explanation: Influence statistics
End of explanation
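For a quicker visual read on the same influence measures, statsmodels also ships a plot (sm is the statsmodels.api alias imported at the top of this example):
fig = sm.graphics.influence_plot(lm, criterion="cooks")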
df_infl = infl.summary_frame()
df_infl[:5]
Explanation: or get a dataframe
End of explanation
resid = lm.resid
plt.figure(figsize=(6,6));
for values, group in factor_groups:
i,j = values
group_num = i*2 + j - 1 # for plotting purposes
x = [group_num] * len(group)
plt.scatter(x, resid[group.index], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('Group');
plt.ylabel('Residuals');
Explanation: Now plot the residuals within the groups separately:
End of explanation
interX_lm = ols("S ~ C(E) * X + C(M)", salary_table).fit()
print(interX_lm.summary())
Explanation: Now we will test some interactions using anova or f_test
End of explanation
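The f_test route mentioned here can be sketched with the results API's compare_f_test, which performs the same restricted-versus-full comparison as the anova_lm call below:
f_value, p_value, df_diff = interX_lm.compare_f_test(lm)
print(f_value, p_value, df_diff)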
from statsmodels.stats.api import anova_lm
table1 = anova_lm(lm, interX_lm)
print(table1)
interM_lm = ols("S ~ X + C(E)*C(M)", data=salary_table).fit()
print(interM_lm.summary())
table2 = anova_lm(lm, interM_lm)
print(table2)
Explanation: Do an ANOVA check
End of explanation
interM_lm.model.data.orig_exog[:5]
Explanation: The design matrix as a DataFrame
End of explanation
interM_lm.model.exog
interM_lm.model.exog_names
infl = interM_lm.get_influence()
resid = infl.resid_studentized_internal
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X');
plt.ylabel('standardized resids');
Explanation: The design matrix as an ndarray
End of explanation
drop_idx = abs(resid).argmax()
print(drop_idx) # zero-based index
idx = salary_table.index.drop(drop_idx)
lm32 = ols('S ~ C(E) + X + C(M)', data=salary_table, subset=idx).fit()
print(lm32.summary())
print('\n')
interX_lm32 = ols('S ~ C(E) * X + C(M)', data=salary_table, subset=idx).fit()
print(interX_lm32.summary())
print('\n')
table3 = anova_lm(lm32, interX_lm32)
print(table3)
print('\n')
interM_lm32 = ols('S ~ X + C(E) * C(M)', data=salary_table, subset=idx).fit()
table4 = anova_lm(lm32, interM_lm32)
print(table4)
print('\n')
Explanation: Looks like one observation is an outlier.
End of explanation
resid = interM_lm32.get_influence().summary_frame()['standard_resid']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X[~[32]]');
plt.ylabel('standardized resids');
Explanation: Replot the residuals
End of explanation
lm_final = ols('S ~ X + C(E)*C(M)', data = salary_table.drop([drop_idx])).fit()
mf = lm_final.model.data.orig_exog
lstyle = ['-','--']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], S[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
# drop NA because there is no idx 32 in the final model
plt.plot(mf.X[idx].dropna(), lm_final.fittedvalues[idx].dropna(),
ls=lstyle[j], color=colors[i-1])
plt.xlabel('Experience');
plt.ylabel('Salary');
Explanation: Plot the fitted values
End of explanation
U = S - X * interX_lm32.params['X']
plt.figure(figsize=(6,6))
interaction_plot(E, M, U, colors=['red','blue'], markers=['^','D'],
markersize=10, ax=plt.gca())
Explanation: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
End of explanation
try:
jobtest_table = pd.read_table('jobtest.table')
except: # don't have data already
url = 'http://stats191.stanford.edu/data/jobtest.table'
jobtest_table = pd.read_table(url)
factor_group = jobtest_table.groupby(['ETHN'])
fig, ax = plt.subplots(figsize=(6,6))
colors = ['purple', 'green']
markers = ['o', 'v']
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST');
ax.set_ylabel('JPERF');
min_lm = ols('JPERF ~ TEST', data=jobtest_table).fit()
print(min_lm.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST')
ax.set_ylabel('JPERF')
fig = abline_plot(model_results = min_lm, ax=ax)
min_lm2 = ols('JPERF ~ TEST + TEST:ETHN',
data=jobtest_table).fit()
print(min_lm2.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'] + min_lm2.params['TEST:ETHN'],
ax=ax, color='green');
min_lm3 = ols('JPERF ~ TEST + ETHN', data = jobtest_table).fit()
print(min_lm3.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm3.params['Intercept'],
slope = min_lm3.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm3.params['Intercept'] + min_lm3.params['ETHN'],
slope = min_lm3.params['TEST'], ax=ax, color='green');
min_lm4 = ols('JPERF ~ TEST * ETHN', data = jobtest_table).fit()
print(min_lm4.summary())
fig, ax = plt.subplots(figsize=(8,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm4.params['Intercept'],
slope = min_lm4.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm4.params['Intercept'] + min_lm4.params['ETHN'],
slope = min_lm4.params['TEST'] + min_lm4.params['TEST:ETHN'],
ax=ax, color='green');
# is there any effect of ETHN on slope or intercept?
table5 = anova_lm(min_lm, min_lm4)
print(table5)
# is there any effect of ETHN on intercept
table6 = anova_lm(min_lm, min_lm3)
print(table6)
# is there any effect of ETHN on slope
table7 = anova_lm(min_lm, min_lm2)
print(table7)
# is it just the slope or both?
table8 = anova_lm(min_lm2, min_lm4)
print(table8)
Explanation: Minority Employment Data
End of explanation
try:
rehab_table = pd.read_csv('rehab.table')
except:
url = 'http://stats191.stanford.edu/data/rehab.csv'
rehab_table = pd.read_table(url, delimiter=",")
rehab_table.to_csv('rehab.table')
fig, ax = plt.subplots(figsize=(8,6))
fig = rehab_table.boxplot('Time', 'Fitness', ax=ax, grid=False)
rehab_lm = ols('Time ~ C(Fitness)', data=rehab_table).fit()
table9 = anova_lm(rehab_lm)
print(table9)
print(rehab_lm.model.data.orig_exog)
print(rehab_lm.summary())
Explanation: One-way ANOVA
End of explanation
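As a follow-up sketch (import path assumed from statsmodels.stats.multicomp): once the one-way ANOVA is significant, Tukey's HSD gives the pairwise group comparisons.
from statsmodels.stats.multicomp import pairwise_tukeyhsd
print(pairwise_tukeyhsd(rehab_table['Time'], rehab_table['Fitness']))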
try:
kidney_table = pd.read_table('./kidney.table')
except:
url = 'http://stats191.stanford.edu/data/kidney.table'
kidney_table = pd.read_table(url, delimiter=" *")
Explanation: Two-way ANOVA
End of explanation
kidney_table.groupby(['Weight', 'Duration']).size()
Explanation: Explore the dataset
End of explanation
kt = kidney_table
plt.figure(figsize=(8,6))
fig = interaction_plot(kt['Weight'], kt['Duration'], np.log(kt['Days']+1),
colors=['red', 'blue'], markers=['D','^'], ms=10, ax=plt.gca())
Explanation: Balanced panel
End of explanation
kidney_lm = ols('np.log(Days+1) ~ C(Duration) * C(Weight)', data=kt).fit()
table10 = anova_lm(kidney_lm)
print(anova_lm(ols('np.log(Days+1) ~ C(Duration) + C(Weight)',
data=kt).fit(), kidney_lm))
print(anova_lm(ols('np.log(Days+1) ~ C(Duration)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
print(anova_lm(ols('np.log(Days+1) ~ C(Weight)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
Explanation: Things available in the calling namespace are also available in the formula evaluation namespace
End of explanation
sum_lm = ols('np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)',
data=kt).fit()
print(anova_lm(sum_lm))
print(anova_lm(sum_lm, typ=2))
print(anova_lm(sum_lm, typ=3))
nosum_lm = ols('np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)',
data=kt).fit()
print(anova_lm(nosum_lm))
print(anova_lm(nosum_lm, typ=2))
print(anova_lm(nosum_lm, typ=3))
Explanation: Sum of squares
Illustrates the use of different types of sums of squares (I, II, III)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Don't use Type III with non-orthogonal contrast - i.e., Treatment
End of explanation |
4,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: How to build a simple text classifier with TF-Hub
Note: This tutorial uses deprecated TensorFlow 1 functionality. For the new way to approach this task, see the TensorFlow 2 version.
Step2: More detailed information about installing Tensorflow can be found at https
Step3: Getting started
Data
We will try to solve the Large Movie Review Dataset v1.0 task (Maas et al., 2011). The dataset consists of IMDB movie reviews labeled by positivity from 1 to 10. The task is to label the reviews as negative or positive.
Step4: Model
Input functions
The Estimator framework provides input functions that wrap Pandas dataframes.
Step5: Feature columns
TF-Hub provides a feature column that applies a module on the given text feature and passes on the outputs of the module. In this tutorial we will be using the nnlm-en-dim128 module. For the purposes of this tutorial, the most important facts are:
The module takes a batch of sentences in a 1-D tensor of strings as input.
The module is responsible for preprocessing of sentences (e.g. removal of punctuation and splitting on spaces).
The module works with any input (e.g. nnlm-en-dim128 hashes words not present in the vocabulary into ~20,000 buckets).
Step6: Estimator
For classification we can use a DNN Classifier (note the remarks at the end of the tutorial about different modelling of the label function).
Step7: Training
Train the Estimator for a reasonable number of steps.
Step8: Prediction
Run predictions for both the training set and the test set.
Step9: Confusion matrix
We can visually check the confusion matrix to understand the distribution of misclassifications.
Step10: Further improvements
Regression on sentiment: we used a classifier to assign each example to a polarity class. But we actually have another categorical feature at our disposal - sentiment. Here the classes actually represent a scale, and the underlying value (positive/negative) maps well onto a continuous range. We could make use of this property by computing a regression (DNN Regressor) instead of a classification (DNN Classifier).
Larger module: for the purposes of this tutorial we used a small module to restrict memory use. There are modules with larger vocabularies and larger embedding spaces that could give additional accuracy points.
Parameter tuning: we can improve the accuracy by tuning meta-parameters such as the learning rate or the number of steps, especially if we use a different module. A validation set is very important if we want any reasonable results, because it is very easy to build a model that learns to predict the training data without generalizing well to the test set.
More complex model: we used a module that computes a sentence embedding by embedding each individual word and then combining them with an average. One could also use a sequential module (e.g. the Universal Sentence Encoder module) to better capture the nature of sentences, or an ensemble of two or more TF-Hub modules.
Regularization: to prevent overfitting, we could try an optimizer that does some sort of regularization, for example the Proximal Adagrad optimizer.
Advanced: transfer learning analysis
Transfer learning makes it possible to save training resources and to achieve good model generalization even when training on a small dataset. In this part, we will demonstrate this by training with two different TF-Hub modules:
nnlm-en-dim128 - a pretrained text embedding module;
random-nnlm-en-dim128 - a text embedding module that has the same vocabulary and network as nnlm-en-dim128, but whose weights were just randomly initialized and never trained on real data.
And by training in two modes:
training only the classifier (i.e. freezing the module), and
training the classifier together with the module.
We run a couple of trainings and evaluations to see how using various modules can affect the accuracy.
Step11: Let's look at the results.
Step12: We can already see some patterns, but first we should establish the baseline accuracy of the test set - the lower bound that can be achieved by outputting only the label of the most represented class: | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# Install seaborn (used below to plot the confusion matrix).
!pip install seaborn
Explanation: How to build a simple text classifier with TF-Hub
Note: This tutorial uses deprecated TensorFlow 1 functionality. For the new way to approach this task, see the TensorFlow 2 version.
Run in Google Colab: https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/text_classification_with_tf_hub.ipynb
View source on GitHub: https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/text_classification_with_tf_hub.ipynb
See the TF Hub model: https://tfhub.dev/google/nnlm-en-dim128/1
TF-Hub is a platform for sharing machine learning expertise packaged in reusable resources, notably pre-trained modules. This tutorial is organized into two main parts.
Getting started: training a text classifier with TF-Hub
We will use a TF-Hub text embedding module to train a simple sentiment classifier with a reasonable baseline accuracy. We will then analyze the predictions to make sure our model is reasonable and propose improvements to increase the accuracy.
Advanced: transfer learning analysis
In this section, we will use various TF-Hub modules to compare their effect on the accuracy of the Estimator and demonstrate the advantages and pitfalls of transfer learning.
Optional prerequisites
Basic understanding of the Tensorflow premade Estimator framework.
Familiarity with the Pandas library.
Setup
End of explanation
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
Explanation: More detailed information about installing Tensorflow can be found at https://tensorflow.google.cn/install/.
End of explanation
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.io.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
train_df, test_df = download_and_load_datasets()
train_df.head()
Explanation: Getting started
Data
We will try to solve the Large Movie Review Dataset v1.0 task (Maas et al., 2011). The dataset consists of IMDB movie reviews labeled by positivity from 1 to 10. The task is to label the reviews as negative or positive.
End of explanation
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], num_epochs=None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
train_df, train_df["polarity"], shuffle=False)
# Prediction on the test set.
predict_test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
test_df, test_df["polarity"], shuffle=False)
Explanation: Model
Input functions
The Estimator framework provides input functions that wrap Pandas dataframes.
End of explanation
embedded_text_feature_column = hub.text_embedding_column(
key="sentence",
module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")
Explanation: Feature columns
TF-Hub provides a feature column that applies a module on the given text feature and passes on the outputs of the module. In this tutorial we will be using the nnlm-en-dim128 module. For the purposes of this tutorial, the most important facts are:
The module takes a batch of sentences in a 1-D tensor of strings as input.
The module is responsible for preprocessing of sentences (e.g. removal of punctuation and splitting on spaces).
The module works with any input (e.g. nnlm-en-dim128 hashes words not present in the vocabulary into ~20,000 buckets).
End of explanation
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.keras.optimizers.Adagrad(lr=0.003))
Explanation: Estimator
For classification we can use a DNN Classifier (note the remarks at the end of the tutorial about different modelling of the label function).
End of explanation
# Training for 5,000 steps means 640,000 training examples with the default
# batch size. This is roughly equivalent to 25 epochs since the training dataset
# contains 25,000 examples.
estimator.train(input_fn=train_input_fn, steps=5000);
Explanation: Training
Train the Estimator for a reasonable number of steps.
End of explanation
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
print("Training set accuracy: {accuracy}".format(**train_eval_result))
print("Test set accuracy: {accuracy}".format(**test_eval_result))
Explanation: Prediction
Run predictions for both the training set and the test set.
End of explanation
def get_predictions(estimator, input_fn):
return [x["class_ids"][0] for x in estimator.predict(input_fn=input_fn)]
LABELS = [
"negative", "positive"
]
# Create a confusion matrix on training data.
cm = tf.math.confusion_matrix(train_df["polarity"],
get_predictions(estimator, predict_train_input_fn))
# Normalize the confusion matrix so that each row sums to 1.
cm = tf.cast(cm, dtype=tf.float32)
cm = cm / tf.math.reduce_sum(cm, axis=1)[:, np.newaxis]
sns.heatmap(cm, annot=True, xticklabels=LABELS, yticklabels=LABELS);
plt.xlabel("Predicted");
plt.ylabel("True");
Explanation: Confusion matrix
We can visually check the confusion matrix to understand the distribution of misclassifications.
End of explanation
def train_and_evaluate_with_module(hub_module, train_module=False):
embedded_text_feature_column = hub.text_embedding_column(
key="sentence", module_spec=hub_module, trainable=train_module)
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.003))
estimator.train(input_fn=train_input_fn, steps=1000)
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
training_set_accuracy = train_eval_result["accuracy"]
test_set_accuracy = test_eval_result["accuracy"]
return {
"Training accuracy": training_set_accuracy,
"Test accuracy": test_set_accuracy
}
results = {}
results["nnlm-en-dim128"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/nnlm-en-dim128/1")
results["nnlm-en-dim128-with-module-training"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/nnlm-en-dim128/1", True)
results["random-nnlm-en-dim128"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/random-nnlm-en-dim128/1")
results["random-nnlm-en-dim128-with-module-training"] = train_and_evaluate_with_module(
"https://tfhub.dev/google/random-nnlm-en-dim128/1", True)
Explanation: Further improvements
Regression on sentiment: we used a classifier to assign each example to a polarity class. But we actually have another categorical feature at our disposal - sentiment. Here the classes actually represent a scale, and the underlying value (positive/negative) maps well onto a continuous range. We could make use of this property by computing a regression (DNN Regressor) instead of a classification (DNN Classifier).
Larger module: for the purposes of this tutorial we used a small module to restrict memory use. There are modules with larger vocabularies and larger embedding spaces that could give additional accuracy points.
Parameter tuning: we can improve the accuracy by tuning meta-parameters such as the learning rate or the number of steps, especially if we use a different module. A validation set is very important if we want any reasonable results, because it is very easy to build a model that learns to predict the training data without generalizing well to the test set.
More complex model: we used a module that computes a sentence embedding by embedding each individual word and then combining them with an average. One could also use a sequential module (e.g. the Universal Sentence Encoder module) to better capture the nature of sentences, or an ensemble of two or more TF-Hub modules.
Regularization: to prevent overfitting, we could try an optimizer that does some sort of regularization, for example the Proximal Adagrad optimizer.
Advanced: transfer learning analysis
Transfer learning makes it possible to save training resources and to achieve good model generalization even when training on a small dataset. In this part, we will demonstrate this by training with two different TF-Hub modules:
nnlm-en-dim128 - a pretrained text embedding module;
random-nnlm-en-dim128 - a text embedding module that has the same vocabulary and network as nnlm-en-dim128, but whose weights were just randomly initialized and never trained on real data.
And by training in two modes:
training only the classifier (i.e. freezing the module), and
training the classifier together with the module.
We run a couple of trainings and evaluations to see how using various modules can affect the accuracy.
End of explanation
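A sketch of the regularization suggestion above, assuming the TF1-style optimizer class in tf.compat.v1.train; it slots into the same canned estimator used earlier:
regularized_estimator = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
        learning_rate=0.003, l2_regularization_strength=0.001))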
pd.DataFrame.from_dict(results, orient="index")
Explanation: Let's look at the results.
End of explanation
estimator.evaluate(input_fn=predict_test_input_fn)["accuracy_baseline"]
Explanation: We can already see some patterns, but first we should establish the baseline accuracy of the test set - the lower bound that can be achieved by outputting only the label of the most represented class:
End of explanation |
4,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 1 Exploratory data analysis
Anecdotal evidence usually fails, because
Step1: DataFrames
DataFrame is the fundamental data structure provided by pandas. A DataFrame contains a row for each record.
In addition to the data, a DataFrame also contains the variable names and their types, and it provides methods for accessing and modifying the data.
We can easily access the data frame and its columns with scripts in the https
Step2: Exercise 1
Print value counts for <tt>birthord</tt> and compare to results published in the codebook
Step3: Print value counts for <tt>prglngth</tt> and compare to results published in the codebook
Step4: Compute the mean birthweight.
Step5: Create a new column named <tt>totalwgt_kg</tt> that contains birth weight in kilograms. Compute its mean. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.
Step6: One important note
Step7: Use a boolean Series to select the records for the pregnancies that ended in live birth.
Step8: Count the number of live births with <tt>birthwgt_lb</tt> between 0 and 5 pounds (including both). The result should be 1125.
Step9: Count the number of live births with <tt>birthwgt_lb</tt> between 9 and 95 pounds (including both). The result should be 798
Step10: Use <tt>birthord</tt> to select the records for first babies and others. How many are there of each?
Step11: Compute the mean weight for first babies and others.
Step12: Compute the mean <tt>prglngth</tt> for first babies and others. Compute the difference in means, expressed in hours.
Step13: Exercise 2 | Python Code:
import matplotlib
import pandas as pd
%matplotlib inline
Explanation: Chapter 1 Exploratory data analysis
Anecdotal evidence usually fails, because:
- Small number of observations
- Selection bias
- Confirmation bias
- Inaccuracy
To address the limitations of anecdotes, we will use the tools of statistics, which include:
- Data collection
- large data
- valid data
- Descriptive statistics
- summary statistics
- visualization
- Exploratory data analysis
- patterns
- differences
- inconsistencies & limitations
- Estimation
- sample, population
- Hypothesis testing
- group
End of explanation
import nsfg
df = nsfg.ReadFemPreg()
df.head()
pregordr = df['pregordr']
pregordr[2:5]
Explanation: DataFrames
DataFrame is the fundamental data structure provided by pandas. A DataFrame contains a row for each record.
In addition to the data, a DataFrame also contains the variable names and their types, and it provides methods for accessing and modifying the data.
We can easily access the data frame and its columns with scripts in the https://github.com/AllenDowney/ThinkStats2 repo.
End of explanation
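A couple of plain-pandas access patterns the text alludes to (nothing NSFG-specific is assumed here):
df.columns              # the variable names, as a pandas Index
df.columns[1]           # a single name
df['prglngth']          # a column, returned as a Series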
birthord_counts = df.birthord.value_counts().sort_index()
birthord_counts
birthord_counts.plot(kind='bar')
Explanation: Exercise 1
Print value counts for <tt>birthord</tt> and compare to results published in the codebook
End of explanation
df['prglngth_cut'] = pd.cut(df.prglngth,bins=[0,13,26,50])
df.prglngth_cut.value_counts().sort_index()
Explanation: Print value counts for <tt>prglngth</tt> and compare to results published in the codebook
End of explanation
df.totalwgt_lb.mean()
Explanation: Compute the mean birthweight.
End of explanation
df['totalwgt_kg'] = 0.45359237 * df.totalwgt_lb
df.totalwgt_kg.mean()
Explanation: Create a new column named <tt>totalwgt_kg</tt> that contains birth weight in kilograms. Compute its mean. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.
End of explanation
live_birth = df.outcome == 1
live_birth.tail()
Explanation: One important note: when you add a new column to a DataFrame, you must use dictionary syntax, like this
```python
# CORRECT
df['totalwgt_lb'] = df.birthwgt_lb + df.birthwgt_oz / 16.0
```
Not dot notation, like this:
```python
# WRONG!
df.totalwgt_lb = df.birthwgt_lb + df.birthwgt_oz / 16.0
```
The version with dot notation adds an attribute to the DataFrame object, but that attribute is not treated as a new column.
Create a boolean Series.
End of explanation
live = df[df.outcome == 1]
len(live)
Explanation: Use a boolean Series to select the records for the pregnancies that ended in live birth.
End of explanation
len(live[(0<=live.birthwgt_lb) & (live.birthwgt_lb<=5)])
Explanation: Count the number of live births with <tt>birthwgt_lb</tt> between 0 and 5 pounds (including both). The result should be 1125.
End of explanation
len(live[(9<=live.birthwgt_lb) & (live.birthwgt_lb<=95)])
Explanation: Count the number of live births with <tt>birthwgt_lb</tt> between 9 and 95 pounds (including both). The result should be 798
End of explanation
firsts = df[df.birthord==1]
others = df[df.birthord>1]
len(firsts), len(others)
Explanation: Use <tt>birthord</tt> to select the records for first babies and others. How many are there of each?
End of explanation
firsts.totalwgt_lb.mean(), others.totalwgt_lb.mean()
Explanation: Compute the mean weight for first babies and others.
End of explanation
firsts.prglngth.mean(), others.prglngth.mean()
Explanation: Compute the mean <tt>prglngth</tt> for first babies and others. Compute the difference in means, expressed in hours.
End of explanation
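The cell above stops at the two means; the last part of the step (the difference expressed in hours, with prglngth recorded in weeks) is a one-liner:
diff_weeks = firsts.prglngth.mean() - others.prglngth.mean()
diff_weeks * 7 * 24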
import thinkstats2
resp = thinkstats2.ReadStataDct('2002FemResp.dct').ReadFixedWidth('2002FemResp.dat.gz', compression='gzip')
preg = nsfg.ReadFemPreg()
preg_map = nsfg.MakePregMap(preg)
for index, pregnum in resp.pregnum.iteritems():
caseid = resp.caseid[index]
indices = preg_map[caseid]
# check that pregnum from the respondent file equals
# the number of records in the pregnancy file
if len(indices) != pregnum:
print(caseid, len(indices), pregnum)
break
Explanation: Exercise 2
End of explanation |
4,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AXON is eXtended Object Notation. It's a simple notation of objects,
documents and data. It is, first of all, a text-based serialization format.
It tries to combine the best of JSON, XML and YAML.
pyaxon is the reference implementation of the library for processing AXON with python.
<!-- TEASER_END -->
Step1: There are two API functions loads/dumps for loading/dumping from/to unicode strings.
By default loading and dumping are safe. By the word "safe" we mean that no user code is executed while loading and dumping. Unicode strings are converted only into python objects of the given types. There is an "unsafe" mode too. It allows transforming unicode strings into user-defined objects and dumping objects into unicode strings under user control. But this is the topic of another post.
Simple example
Step2: Here vals is always list of objects.
Step3: We see that the message is converted to an instance of class Element. The attribute __attrs__ is a dictionary containing the object's attributes
Step4: Attributes of the object are accessible by methods get/set
Step5: Element objects have content - a list of values. They are accessible by python's sequence protocol. In our case the first value is the message body of the note.
Step6: For dumping objects there are three modes. First mode is compact
Step7: Second mode is pretty dumping mode with indentations and without braces
Step8: Third mode is pretty dumping mode with indentation and braces
Step10: At the end let's consider JSON-like representation too
Step11: It has converted into python dicts
Step12: Compact dump is pretty small in size.
Step13: Dumping in pretty mode is also pretty formatted.
Step14: JSON-like objects are pretty dumps only in indented form with braces.
JSON-like example
Let's consider now JSON-like example with crossreferences and datetimes
Step15: It's easy to see that crossreference links just works
Step16: Pretty dump looks like this one
Step17: Note that sorted parameter defines whether to sort keys in dict.
XML-like example
Step18: Let's examine the value
Step19: Dataset example
Let's consider simple tabular dataset | Python Code:
from __future__ import print_function
from axon import loads, dumps
from pprint import pprint
Explanation: AXON is eXtended Object Notation. It's a simple notation of objects,
documents and data. It is, first of all, a text-based serialization format.
It tries to combine the best of JSON, XML and YAML.
pyaxon is the reference implementation of the library for processing AXON with python.
<!-- TEASER_END -->
End of explanation
text = '''\
note {
from: "Pooh"
to: "Bee"
posted: 2006-08-15T17:30
heading: "Honey"
"Don't forget to get me honey!" }
'''
vals = loads(text)
Explanation: There are two API functions loads/dumps for loading/dumping from/to unicode string.
By default loading and dumping are safe. By the word "safe" we mean that no user code is executed while loading and dumping. Unicode strings are converted only into python objects of the given types. There is an "unsafe" mode too. It allows transforming unicode strings into user-defined objects and dumping objects into unicode strings under user control. But this is the topic of another post.
Simple example
End of explanation
ob = vals[0]
type(ob)
Explanation: Here vals is always list of objects.
End of explanation
print(type(ob.__attrs__))
print(ob.__attrs__)
Explanation: We see that the message is converted to an instance of class Element. The attribute __attrs__ is a dictionary containing the object's attributes:
End of explanation
[(attr, getattr(ob, attr)) for attr in ob.__attrs__]
Explanation: Attributes of the object are accessible by methods get/set:
End of explanation
print(ob[0])
Explanation: Element objects have content - a list of values. They are accessible by python's sequence protocol. In our case the first value is the message body of the note.
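Spelling that sequence protocol out, as a sketch (it assumes Element supports len() and iteration, as the text states):
len(ob)        # number of content values
list(ob)       # all of them
for v in ob:
    print(v)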
End of explanation
print(dumps([ob]))
Explanation: For dumping objects there are three modes. First mode is compact:
End of explanation
print(dumps([ob], pretty=1))
Explanation: Second mode is pretty dumping mode with indentations and without braces:
End of explanation
print(dumps([ob], pretty=1, braces=1))
Explanation: Third mode is pretty dumping mode with indentation and braces:
End of explanation
text = \
{note: {
from: "Pooh"
to: "Bee"
posted: 2006-08-15T17:30
heading: "Honey"
body: "Don't forget to get me honey!"
}}
vals = loads(text)
Explanation: At the end let's consider JSON-like representation too:
End of explanation
pprint(vals)
Explanation: It has converted into python dicts:
End of explanation
print(dumps(vals))
Explanation: Compact dump is pretty small in size.
End of explanation
print(dumps(vals, pretty=1))
Explanation: Dumping in pretty mode is also pretty formatted.
End of explanation
text = '''\
{
topic: [
&1 {python: "Python related"}
&2 {axon: "AXON related"}
&3 {json: "JSON related"}
]
posts: [
{ id: 1
topic: *1
date: 2012-01-02T12:15+03
body:"..." }
{ id: 2
topic: *2
date: 2012-01-12T09:25+03
body:"..." }
{ id: 3
topic: *3
date: 2012-02-08T10:35+03
body:"..." }
]
}
'''
vals = loads(text)
pprint(vals)
Explanation: JSON-like objects are pretty dumps only in indented form with braces.
JSON-like example
Let's consider now JSON-like example with crossreferences and datetimes:
End of explanation
assert vals[0]['topic'][0] is vals[0]['posts'][0]['topic']
assert vals[0]['topic'][1] is vals[0]['posts'][1]['topic']
assert vals[0]['topic'][2] is vals[0]['posts'][2]['topic']
Explanation: It's easy to see that crossreference links just works:
End of explanation
print(dumps(vals, pretty=1, crossref=1, sorted=0))
Explanation: Pretty dump looks like this one:
End of explanation
text = '''\
html {
xmlns:"http://www.w3.org/1999/xhtml"
head {
title {"Form Example"}
link {
rel:"stylesheet"
href: "formstyle.css"
type: "text/css" }}
body {
h1 {"Form Example"}
form {
action: "sample.py"
div {
class: "formin"
"(a)"
input {type:"text" name:"text1" value:"A textbox"}}
div {
class: "formin"
"(b)"
input {type:"text" size:6 maxlength:10 name:"text2"}}
div {
class: "formb"
"(c)"
input {type:"submit" value:"Go!"}}
}
}
}
'''
vals = loads(text)
val = vals[0]
print(val)
Explanation: Note that the sorted parameter defines whether to sort keys in dicts.
XML-like example
End of explanation
print(type(val))
print(val.__attrs__)
head, body = val[0], val[1]
print(type(head))
title, link = head
print(title)
print(link)
h1, form = body
print(h1)
print(form.__vals__)
div1, div2, div3 = form
print(div1.__attrs__)
label1, input1 = div1
print(label1)
print(input1)
print(div2.__attrs__)
label2, input2 = div2
print(label2)
print(input2)
print(type(div3))
print(div3.__attrs__)
label3, input3 = div3
print(label3)
print(input3)
print(dumps([val], pretty=1))
print(dumps([val], pretty=1, braces=1))
Explanation: Let's examine the value:
End of explanation
text = '''\
dataset {
fields: ("id" "date" "time" "territory_id" "A" "B" "C")
(1 2012-01-10 12:35 17 3.14 22 33500)
(2 2012-01-11 13:05 27 1.25 32 11500)
(3 2012-01-12 10:45 -17 -2.26 -12 44700)
}
'''
ob = loads(text)[0]
print(ob.__tag__)
pprint(ob.__attrs__)
pprint(ob.__vals__, width=132)
print("\nPretty form of dataset:")
print(dumps([ob], pretty=1, hsize=10))
from collections import namedtuple
Datarow = namedtuple("Datarow", ob.fields)
rows = []
for line in ob:
print(type(line), line)
rows.append(Datarow(*line))
print("\n")
for row in rows:
print(type(row), row)
Explanation: Dataset example
Let's consider simple tabular dataset:
End of explanation |
4,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
UTSC Machine Learning Workshop
Introduction to Linear Regression
Adapted from Chapter 3 of An Introduction to Statistical Learning
Motivation
Regression problems are supervised learning problems in which the response is continuous. Classification problems are supervised learning problems in which the response is categorical. Linear regression is a technique that is useful for regression problems.
So, why are we learning linear regression?
widely used
runs fast
easy to use (not a lot of tuning required)
highly interpretable
basis for many other methods
Libraries
We'll be using scikit-learn since it provides significantly more useful functionality for machine learning in general.
Step1: Example
Step2: What are the features?
- TV
Step3: There are 200 observations, and thus 200 markets in the dataset.
Step4: Questions About the Advertising Data
Let's pretend you work for the company that manufactures and markets this widget. The company might ask you the following
Step5: Interpreting Model Coefficients
How do we interpret the TV coefficient ($\beta_1$)?
- A "unit" increase in TV ad spending is associated with a 0.047537 "unit" increase in Sales.
- Or more clearly
Step6: Thus, we would predict Sales of 9,409 widgets in that market.
Plotting the Least Squares Line
Let's plot the least squares line for Sales versus each of the features
Step7: Multiple Linear Regression
Simple linear regression can easily be extended to include multiple features. This is called multiple linear regression
Step8: How do we interpret these coefficients? For a given amount of Radio and Newspaper ad spending, an increase of $1000 in TV ad spending is associated with an increase in Sales of 45.765 widgets.
A lot of the information we have been reviewing piece-by-piece is available in the Statsmodels model summary output
Step9: For scikit-learn, we need to represent all data numerically. If the feature only has two categories, we can simply create a dummy variable that represents the categories as a binary value
Step10: Let's redo the multiple linear regression and include the Size_large feature
Step11: How do we interpret the Size_large coefficient? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average increase in Sales of 57.42 widgets (as compared to a small market, which is called the baseline level).
What if we had reversed the 0/1 coding and created the feature 'Size_small' instead? The coefficient would be the same, except it would be negative instead of positive. As such, your choice of category for the baseline does not matter, all that changes is your interpretation of the coefficient.
Handling Categorical Features with More than Two Categories
Let's create a new feature called Area, and randomly assign observations to be rural, suburban, or urban
Step12: We have to represent Area numerically, but we can't simply code it as 0=rural, 1=suburban, 2=urban because that would imply an ordered relationship between suburban and urban, and thus urban is somehow "twice" the suburban category. Note that if you do have ordered categories (i.e., strongly disagree, disagree, neutral, agree, strongly agree), you can use a single dummy variable and represent the categories numerically (such as 1, 2, 3, 4, 5).
Anyway, our Area feature is unordered, so we have to create additional dummy variables. Let's explore how to do this using pandas
Step13: However, we actually only need two dummy variables, not three. Why? Because two dummies capture all of the "information" about the Area feature, and implicitly define rural as the "baseline level".
Let's see what that looks like
Step14: Here is how we interpret the coding
Step15: How do we interpret the coefficients?
- Holding all other variables fixed, being a suburban area is associated with an average decrease in Sales of 106.56 widgets (as compared to the baseline level, which is rural).
- Being an urban area is associated with an average increase in Sales of 268.13 widgets (as compared to rural).
Linear Regression with nonLinear Terms
Let's look at another example of linear regression with nonlinear terms inside.
We will use the trees dataset from the pydataset package.
Step16: The trees dataset has two features, Girth and Height. We want to use them to predict the Volume of the trees.
Step17: Let's examine the result of the fitting.
Step18: Can we do better than this? Let us add in non linear features | Python Code:
# imports
import pandas as pd
import seaborn as sns
#import statsmodels.formula.api as smf
from sklearn.linear_model import LinearRegression
from sklearn import metrics
import numpy as np
# allow plots to appear directly in the notebook
%matplotlib inline
Explanation: UTSC Machine Learning Workshop
Introduction to Linear Regression
Adapted from Chapter 3 of An Introduction to Statistical Learning
Motivation
Regression problems are supervised learning problems in which the response is continuous. Classification problems are supervised learning problems in which the response is categorical. Linear regression is a technique that is useful for regression problems.
So, why are we learning linear regression?
widely used
runs fast
easy to use (not a lot of tuning required)
highly interpretable
basis for many other methods
Libraries
We'll be using scikit-learn since it provides significantly more useful functionality for machine learning in general.
End of explanation
# read data into a DataFrame
data = pd.read_csv('data/Advertising.csv', index_col=0)
data.head()
Explanation: Example: Advertising Data
Let's take a look at some data, ask some questions about that data, and then use linear regression to answer those questions!
End of explanation
# print the shape of the DataFrame
data.shape
Explanation: What are the features?
- TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
- Radio: advertising dollars spent on Radio
- Newspaper: advertising dollars spent on Newspaper
What is the response?
- Sales: sales of a single product in a given market (in thousands of widgets)
End of explanation
# visualize the relationship between the features and the response using scatterplots
sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.7)
Explanation: There are 200 observations, and thus 200 markets in the dataset.
End of explanation
### SCIKIT-LEARN ###
# create X and y
feature_cols = ['TV']
X = data[feature_cols]
y = data.Sales
# instantiate and fit
lm = LinearRegression()
lm.fit(X, y)
# print the coefficients
print lm.intercept_
print lm.coef_
Explanation: Questions About the Advertising Data
Let's pretend you work for the company that manufactures and markets this widget. The company might ask you the following: On the basis of this data, how should we spend our advertising money in the future?
This general question might lead you to more specific questions:
1. Is there a relationship between ads and sales?
2. How strong is that relationship?
3. Which ad types contribute to sales?
4. What is the effect of each ad type on sales?
5. Given ad spending in a particular market, can sales be predicted?
We will explore these questions below!
Simple Linear Regression
Simple linear regression is an approach for predicting a quantitative response using a single feature (or "predictor" or "input variable"). It takes the following form:
$y = \beta_0 + \beta_1x$
What does each term represent?
- $y$ is the response
- $x$ is the feature
- $\beta_0$ is the intercept
- $\beta_1$ is the coefficient for x
Together, $\beta_0$ and $\beta_1$ are called the model coefficients. To create your model, you must "learn" the values of these coefficients. And once we've learned these coefficients, we can use the model to predict Sales!
Estimating ("Learning") Model Coefficients
Generally speaking, coefficients are estimated using the least squares criterion, which means we find the line (mathematically) that minimizes the sum of squared residuals (or "sum of squared errors"):
What elements are present in the diagram?
- The black dots are the observed values of x and y.
- The blue line is our least squares line.
- The red lines are the residuals, which are the distances between the observed values and the least squares line.
How do the model coefficients relate to the least squares line?
- $\beta_0$ is the intercept (the value of $y$ when $x$=0)
- $\beta_1$ is the slope (the change in $y$ divided by change in $x$)
Here is a graphical depiction of those calculations:
Let's estimate the model coefficients for the advertising data:
End of explanation
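As a quick cross-check of the code above, here is a minimal by-hand sketch of the same two coefficients using the slope and intercept formulas; x_tv, y_sales, beta_0 and beta_1 are illustrative names introduced only for this sketch, and the results should agree with the scikit-learn estimates.
# by-hand least squares for Sales ~ TV (a minimal sketch)
x_tv = data.TV
y_sales = data.Sales
beta_1 = ((x_tv - x_tv.mean()) * (y_sales - y_sales.mean())).sum() / ((x_tv - x_tv.mean()) ** 2).sum()
beta_0 = y_sales.mean() - beta_1 * x_tv.mean()
beta_0, beta_1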
# manually calculate the prediction
7.032594 + 0.047537*50
### SCIKIT-LEARN ###
# predict for a new observation
lm.predict(50)
Explanation: Interpreting Model Coefficients
How do we interpret the TV coefficient ($\beta_1$)?
- A "unit" increase in TV ad spending is associated with a 0.047537 "unit" increase in Sales.
- Or more clearly: An additional $1,000 spent on TV ads is associated with an increase in sales of 47.537 widgets.
Note that if an increase in TV ad spending was associated with a decrease in sales, $\beta_1$ would be negative.
Using the Model for Prediction
Let's say that there was a new market where the TV advertising spend was $50,000. What would we predict for the Sales in that market?
$$y = \beta_0 + \beta_1x$$
$$y = 7.032594 + 0.047537 \times 50$$
End of explanation
sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.7, kind='reg')
Explanation: Thus, we would predict Sales of 9,409 widgets in that market.
Plotting the Least Squares Line
Let's plot the least squares line for Sales versus each of the features:
End of explanation
### SCIKIT-LEARN ###
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper']
X = data[feature_cols]
y = data.Sales
# instantiate and fit
lm = LinearRegression()
lm.fit(X, y)
# print the coefficients
print lm.intercept_
print lm.coef_
# pair the feature names with the coefficients
zip(feature_cols, lm.coef_)
Explanation: Multiple Linear Regression
Simple linear regression can easily be extended to include multiple features. This is called multiple linear regression:
$y = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$
Each $x$ represents a different feature, and each feature has its own coefficient. In this case:
$y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$
Let's estimate these coefficients:
End of explanation
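As a rough goodness-of-fit check, here is a minimal sketch that computes the RMSE of the fitted three-feature model; np and metrics come from the imports cell, and lm, X, y come from the cell above.
# RMSE of the three-feature model on the data it was fit on (a sketch)
fitted = lm.predict(X)
np.sqrt(metrics.mean_squared_error(y, fitted))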
# set a seed for reproducibility
np.random.seed(12345)
# create a Series of booleans in which roughly half are True
nums = np.random.rand(len(data))
mask_large = nums > 0.5
# initially set Size to small, then change roughly half to be large
data['Size'] = 'small'
data.loc[mask_large, 'Size'] = 'large'
data.head()
Explanation: How do we interpret these coefficients? For a given amount of Radio and Newspaper ad spending, an increase of $1000 in TV ad spending is associated with an increase in Sales of 45.765 widgets.
A lot of the information we have been reviewing piece-by-piece is available in the Statsmodels model summary output:
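For reference, here is a minimal Statsmodels sketch of that summary; it assumes the statsmodels package is installed (note that the smf import is commented out in the imports cell above).
# a minimal sketch of the Statsmodels summary for the same model
import statsmodels.formula.api as smf
lm_sm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()
lm_sm.summary()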
Feature Selection
How do I decide which features to include in a linear model? The answer will be covered in the next session.
Handling Categorical Features with Two Categories
Up to now, all of our features have been numeric. What if one of our features was categorical?
Let's create a new feature called Size, and randomly assign observations to be small or large:
End of explanation
# create a new Series called Size_large
data['Size_large'] = data.Size.map({'small':0, 'large':1})
data.head()
Explanation: For scikit-learn, we need to represent all data numerically. If the feature only has two categories, we can simply create a dummy variable that represents the categories as a binary value:
End of explanation
# create X and y
feature_cols = ['TV', 'Radio', 'Newspaper', 'Size_large']
X = data[feature_cols]
y = data.Sales
# instantiate, fit
lm = LinearRegression()
lm.fit(X, y)
# print coefficients
zip(feature_cols, lm.coef_)
Explanation: Let's redo the multiple linear regression and include the Size_large feature:
End of explanation
# set a seed for reproducibility
np.random.seed(123456)
# assign roughly one third of observations to each group
nums = np.random.rand(len(data))
mask_suburban = (nums > 0.33) & (nums < 0.66)
mask_urban = nums > 0.66
data['Area'] = 'rural'
data.loc[mask_suburban, 'Area'] = 'suburban'
data.loc[mask_urban, 'Area'] = 'urban'
data.head()
Explanation: How do we interpret the Size_large coefficient? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average increase in Sales of 57.42 widgets (as compared to a small market, which is called the baseline level).
What if we had reversed the 0/1 coding and created the feature 'Size_small' instead? The coefficient would be the same, except it would be negative instead of positive. As such, your choice of category for the baseline does not matter, all that changes is your interpretation of the coefficient.
Handling Categorical Features with More than Two Categories
Let's create a new feature called Area, and randomly assign observations to be rural, suburban, or urban:
End of explanation
# create three dummy variables using get_dummies
x = pd.get_dummies(data.Area, prefix='Area')
x.tail()
data = pd.concat([data, x], axis=1)
data.tail()
Explanation: We have to represent Area numerically, but we can't simply code it as 0=rural, 1=suburban, 2=urban because that would imply an ordered relationship between suburban and urban, and thus urban is somehow "twice" the suburban category. Note that if you do have ordered categories (i.e., strongly disagree, disagree, neutral, agree, strongly agree), you can use a single dummy variable and represent the categories numerically (such as 1, 2, 3, 4, 5).
Anyway, our Area feature is unordered, so we have to create additional dummy variables. Let's explore how to do this using pandas:
End of explanation
# create three dummy variables using get_dummies, then exclude the first dummy column
area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:]
area_dummies.head()
Explanation: However, we actually only need two dummy variables, not three. Why? Because two dummies capture all of the "information" about the Area feature, and implicitly define rural as the "baseline level".
Let's see what that looks like:
End of explanation
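An equivalent shortcut, shown as a small sketch (the drop_first option assumes pandas 0.18 or later):
# same two dummies without the iloc slicing (a sketch)
pd.get_dummies(data.Area, prefix='Area', drop_first=True).head()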
# concatenate the dummy variable columns onto the DataFrame (axis=0 means rows, axis=1 means columns)
data = pd.concat([data, area_dummies], axis=1)
data.head()
data.tail()
# create X and y
# note: Area_rural is left out, since rural is the baseline level captured by the other two dummies
feature_cols = ['TV', 'Radio', 'Newspaper', 'Size_large', 'Area_suburban', 'Area_urban']
X = data[feature_cols]
y = data.Sales
# instantiate and fit
lm = LinearRegression()
lm.fit(X, y)
# print the coefficients
zip(feature_cols, lm.coef_)
Explanation: Here is how we interpret the coding:
- rural is coded as Area_suburban=0 and Area_urban=0
- suburban is coded as Area_suburban=1 and Area_urban=0
- urban is coded as Area_suburban=0 and Area_urban=1
If this is confusing, think about why we only needed one dummy variable for Size (Size_large), not two dummy variables (Size_small and Size_large). In general, if you have a categorical feature with k "levels", you create k-1 dummy variables.
Anyway, let's add these two new dummy variables onto the original DataFrame, and then include them in the linear regression model:
End of explanation
import pydataset
from pydataset import data
trees=data('trees')
#can use the below line to examine the detailed data description
#data('trees',show_doc=True)
trees.head()
Explanation: How do we interpret the coefficients?
- Holding all other variables fixed, being a suburban area is associated with an average decrease in Sales of 106.56 widgets (as compared to the baseline level, which is rural).
- Being an urban area is associated with an average increase in Sales of 268.13 widgets (as compared to rural).
Linear Regression with nonLinear Terms
Let's look at another example of linear regression with nonlinear terms inside.
We will use the trees dataset from the pydataset package.
End of explanation
#set up features and aimed result
feature_cols=["Girth", "Height"]
X=trees[feature_cols]
Y=trees.Volume
# fit with LinearRegression
lm=LinearRegression()
lm.fit(X,Y)
#print out result
zip(feature_cols, lm.coef_)
Explanation: The trees dataset has two features, Girth and Height. We want to use them to predict the Volume of the trees.
End of explanation
Ypredict=lm.predict(X)
print "MSE",np.sqrt(metrics.mean_squared_error(Y, Ypredict))
#print type(X)
from matplotlib import pyplot
pyplot.plot(X["Girth"],Ypredict)
pyplot.scatter(X["Girth"],Y)
Explanation: Let's examine the result of the fitting.
End of explanation
#since we are interested in the Volume of trees
#it's natural to add the square of Girth to our features
#add in a new feature
X["GirthSquare"]=trees["Girth"]**2.
feature_cols=["Girth", "Height","GirthSquare"]
# fit with LinearRegression
lm=LinearRegression()
lm.fit(X,Y)
#print out result
zip(feature_cols, lm.coef_)
Ypredict=lm.predict(X)
#print "MSE",np.sqrt(metrics.mean_squared_error(Y, Ypredict))
from matplotlib import pyplot
pyplot.plot(X["Girth"],Ypredict)
pyplot.scatter(X["Girth"],Y)
#We can keep trying even higher-order nonlinear features
X["GirthCube"]=trees["Girth"]**3.
X["GirthFouth"]=trees["Girth"]**4.
print X.shape
feature_cols=["Girth", "Height","GirthSquare","GirthCube","GirthFouth"]
# fit with LinearRegression
lm=LinearRegression()
lm.fit(X,Y)
#print out result
zip(feature_cols, lm.coef_)
Ypredict=lm.predict(X)
#print "MSE",np.sqrt(metrics.mean_squared_error(Y, Ypredict))
#print type(X)
from matplotlib import pyplot
pyplot.plot(X["Girth"],Ypredict)
pyplot.scatter(X["Girth"],Y)
Explanation: Can we do better than this? Let us add in non linear features
End of explanation |
4,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A brief tour of Redis
A one-hour or less tour of Redis.
tl
Step1: Note
Step2: Remember we need to connect to the server, using Python as the client, just like we would connect to a database server. This will connect using the default port and host, which the Redis server on our VMs uses.
Step3: The simplest use of Redis is as a key-value store. We can use the get and set commands to stash values for arbitrary keys.
Step4: Not particularly fancy, but useful.
Why is this different from just using Python variables? For one thing, it's a server, so you can have multiple clients connecting.
Step5: r and r2 could be different programs, or different users, or different languages. Much like a full RDBMS environment, the server backend supports multiple concurrent users. Unlike an RDBMS, though, Redis doesn't have the same sophisticated notion of access controls, so any connecting client can access, change, or delete data.
More than just keys and values - basic data structures
Just storing keys and values on a server still isn't terribly exciting. Keep in mind that Redis is a data structure server. With that in mind, it's more interesting to look at some of its data structures, such as counters, which (unsurprisingly) track and update counts of things.
Step6: Internally, Redis stores strings, so keep in mind that you'll have to cast values before doing math.
Step7: Counters are just the beginning. Next, we have sets
Step8: And it's python, so we can do obvious things like
Step9: See what's going on here? Redis stores data structures as a server, but you can still manipulate those structures as if there were any other python variable. The differences are that they live on the server, so can be shared, and that this requires communication overhead between the client and the server.
So doesn't that slow things down? Doesn't python already have a set() built-in type? (Yes, it does.) Why is it worth the overhead?
More interesting data structures
More interesting, perhaps, are sorted sets.
Step10: Here we've created a set that stores scores and automatically sorts the set members by scores. You can add new items or update the scores at any time, and fetch the rank order as well. Think "top ten anything".
A note on keys
The keys we used are named as you wish. So, for example, you can define key naming conventions that, for example, add identifiers to the keys for easy programmatic use. Let's say you're churning through a log of product sales orders and want to count the top sales for a given hour.
Step11: csvkit alone won't do the math for you, though csvsql could help. You could load your orders into R and do it, but perhaps you don't remember R and dplyr commands. In a little loop of python, you can throw all this data at Redis and it will return answers to useful questions.
Step12: Starting to get pretty cool, right?
A practical example
Let's look at something more concrete, using a familiar source | Python Code:
import redis
Explanation: A brief tour of Redis
A one-hour or less tour of Redis.
tl:dr version:
If you don't have time to read/run this, go to Try Redis and try it yourself.
Redis is a data structure server. Not quite a database, not quite a key-value store.
It is very fast and is a great tool for rapid analysis and other cases when you need something more than "just python" or "just R" but don't want to take the time to define and implement an RDBMS schema, etc.
You have it on your VM.
Redis stands for REmote DIctionary Server. The Try Redis app is easy for a quick tour; for a few more details, read the introduction to data types.
Getting started
Redis is a server process that you connect to with a client. On your VM, you can start it with the redis-server command, but it's best to run it in its own terminal, or to start it with the server.
For our VM, the server is already running. You just need to connect to it with a client. You can do this in the shell using redis-cli, at which point you can send and receive commands directly to Redis.
Here, though, let's use it with Python. We'll probably want some of Python's other facilities to read files, control flow, manage variables, etc.
Note: this is a python 2 notebook.
End of explanation
import redis
Explanation: Note: If this happens to you, just do this in the shell:
% sudo apt-get install python-redis
(Your password is "vagrant".)
End of explanation
r = redis.StrictRedis()
Explanation: Remember we need to connect to the server, using Python as the client, just like we would connect to a database server. This will connect using the default port and host, which the Redis server on our VMs uses.
End of explanation
r.set('hi', 5)
r.get('hi')
r.get('bye')
r.set('bye', 500)
r.get('bye')
Explanation: The simplest use of Redis is as a key-value store. We can use the get and set commands to stash values for arbitrary keys.
End of explanation
r2 = redis.StrictRedis()
r2.get('bye')
r2.set('new key', 10)
r.get('new key')
Explanation: Not particularly fancy, but useful.
Why is this different from just using Python variables? For one thing, it's a server, so you can have multiple clients connecting.
End of explanation
r.get('hi')
# increment the key 'hi'
r.incr('hi')
r.incr('hi')
r.incr('hi', 20)
r.decr('hi')
r.decr('hi', 3)
r.get('hi')
Explanation: r and r2 could be different programs, or different users, or different languages. Much like a full RDBMS environment, the server backend supports multiple concurrent users. Unlike an RDBMS, though, Redis doesn't have the same sophisticated notion of access controls, so any connecting client can access, change, or delete data.
More than just keys and values - basic data structures
Just storing keys and values on a server still isn't terribly exciting. Keep in mind that Redis is a data structure server. With that in mind, it's more interesting to look at some of its data structures, such as counters, which (unsurprisingly) track and update counts of things.
End of explanation
r.get('hi') * 5
int(r.get('hi')) * 5
Explanation: Internally, Redis stores strings, so keep in mind that you'll have to cast values before doing math.
End of explanation
r.sadd('my set', 'thing one')
r.sadd('my set', 'thing two', 'thing three', 'something else')
r.smembers('my set')
r.sadd('another set', 'thing two', 'thing three', 55, 'thing six')
r.smembers('another set')
r.sinter('my set', 'another set')
r.sunion('my set', 'another set')
Explanation: Counters are just the beginning. Next, we have sets:
End of explanation
len(r.smembers('my set'))
[x.upper() for x in r.smembers('my set')]
Explanation: And it's python, so we can do obvious things like:
End of explanation
r.zadd('sorted', 5, 'blue')
r.zadd('sorted', 3, 'red')
r.zadd('sorted', 7, 'purple')
r.zadd('sorted', 10, 'pink')
r.zadd('sorted', 6, 'grey')
r.zrangebyscore('sorted', 0, 10)
r.zrevrangebyscore('sorted', 100, 0, withscores=True)
r.zrank('sorted', 'red')
r.zincrby('sorted', 'red', 5)
r.zrevrangebyscore('sorted', 100, 0, withscores=True)
r.zrank('sorted', 'red')
Explanation: See what's going on here? Redis stores data structures as a server, but you can still manipulate those structures as if there were any other python variable. The differences are that they live on the server, so can be shared, and that this requires communication overhead between the client and the server.
So doesn't that slow things down? Doesn't python already have a set() built-in type? (Yes, it does.) Why is it worth the overhead?
More interesting data structures
More interesting, perhaps, are sorted sets.
End of explanation
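One way to amortize that client-server round-trip overhead is a pipeline, which batches several commands into a single request. A minimal sketch (pipeline() is part of the redis-py client used above; the key name 'pipelined counter' is just an example):
# batch many commands into one round trip (a sketch)
pipe = r.pipeline()
for i in range(1000):
    pipe.incr('pipelined counter')
pipe.execute()
r.get('pipelined counter')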
r.zadd('sales:10pm', 3, 'p1')
r.zadd('sales:10pm', 1, 'p3')
r.zadd('sales:10pm', 12, 'p1')
r.zadd('sales:10pm', 5, 'p2')
r.zadd('sales:11pm', 4, 'p1')
r.zadd('sales:11pm', 8, 'p2')
r.zadd('sales:11pm', 5, 'p2')
r.zadd('sales:11pm', 2, 'p1')
r.zadd('sales:11pm', 7, 'p1')
Explanation: Here we've created a set that stores scores and automatically sorts the set members by scores. You can add new items or update the scores at any time, and fetch the rank order as well. Think "top ten anything".
A note on keys
The keys we used are named as you wish. So, for example, you can define key naming conventions that, for example, add identifiers to the keys for easy programmatic use. Let's say you're churning through a log of product sales orders and want to count the top sales for a given hour.
End of explanation
r.zrevrangebyscore('sales:10pm', 100, 0, withscores=True)
r.zrevrangebyscore('sales:11pm', 100, 0, withscores=True)
r.zunionstore('sales:combined', ['sales:10pm', 'sales:11pm'])
r.zrevrangebyscore('sales:combined', 100, 0, withscores=True)
Explanation: csvkit alone won't do the math for you, though csvsql could help. You could load your orders into R and do it, but perhaps you don't remember R and dplyr commands. In a little loop of python, you can throw all this data at Redis and it will return answers to useful questions.
End of explanation
import csv
MAX_COUNT = 10000
count = 0
fp = open('bikeshare-q1.csv', 'rb')
reader = csv.DictReader(fp)
while count < MAX_COUNT:
ride = reader.next()
r.zincrby('start_station', ride['start_station'], 1)
r.zincrby('end_station', ride['end_station'], 1)
r.rpush('bike:%s' % ride['bike_id'], ride['end_station'])
count += 1
r.zrevrangebyscore('start_station', 10000, 0, start=0, num=10, withscores=True, score_cast_func=int)
print 'last bike seen:', ride['bike_id']
r.lrange('bike:%s' % ride['bike_id'], 0, 50)
Explanation: Starting to get pretty cool, right?
A practical example
Let's look at something more concrete, using a familiar source: bikeshare data. What if we want to count station use and track bike movements?
End of explanation |
4,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Migration Examples
Step2: prepare some simple data for demonstration from the standard Titanic dataset,
Step3: and create a method to instantiate a simplistic sample optimizer to use with our various TensorFlow 1 Estimator and TensorFlow 2 Keras models.
Step4: Example 1
Step5: TF2
Step6: Example 2
Step7: TF2
Step8: Example 3
Step9: TF2
Step10: Example 4
Step11: Create a TensorFlow dataset. Note that Decision Forests support natively many types of features and do not need pre-processing.
Step12: Train the model on the train_dataset dataset.
Step13: Evaluate the quality of the model on the eval_dataset dataset.
Step14: Gradient Boosted Trees is just one of the many decision forest algorithms available in TensorFlow Decision Forests. For example, Random Forests (available as tfdf.keras.RandomForestModel) are very resistant to overfitting, while CART (available as tfdf.keras.CartModel) is great for model interpretation.
In the next example, we train and evaluate a Random Forest model.
Step15: Finally, in the next example, we train and evaluate a CART model. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip install tensorflow_decision_forests
import keras
import pandas as pd
import tensorflow as tf
import tensorflow.compat.v1 as tf1
import tensorflow_decision_forests as tfdf
Explanation: Migration Examples: Canned Estimators
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/canned_estimators"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/canned_estimators.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/canned_estimators.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/canned_estimators.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Canned (or Premade) Estimators have traditionally been used in TensorFlow 1 as quick and easy ways to train models for a variety of typical use cases. TensorFlow 2 provides straightforward approximate substitutes for a number of them by way of Keras models. For those canned estimators that do not have built-in TensorFlow 2 substitutes, you can still build your own replacement fairly easily.
This guide walks through a few examples of direct equivalents and custom substitutions to demonstrate how TensorFlow 1's tf.estimator-derived models can be migrated to TF2 with Keras.
Namely, this guide includes examples for migrating:
* From tf.estimator's LinearEstimator, Classifier or Regressor in TensorFlow 1 to Keras tf.compat.v1.keras.models.LinearModel in TensorFlow 2
* From tf.estimator's DNNEstimator, Classifier or Regressor in TensorFlow 1 to a custom Keras DNN Model in TensorFlow 2
* From tf.estimator's DNNLinearCombinedEstimator, Classifier or Regressor in TensorFlow 1 to tf.compat.v1.keras.models.WideDeepModel in TensorFlow 2
* From tf.estimator's BoostedTreesEstimator, Classifier or Regressor in TensorFlow 1 to tf.compat.v1.keras.models.WideDeepModel in TensorFlow 2
A common precursor to the training of a model is feature preprocessing, which is done for TensorFlow 1 Estimator models with tf.feature_column. For more information on feature preprocessing in TensorFlow 2, see this guide on migrating feature columns.
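As a small, hedged illustration of that shift, a numeric tf.feature_column can be replaced by a Keras preprocessing layer. This is only a sketch: in some TensorFlow releases Normalization lives under tf.keras.layers.experimental.preprocessing, and the adapt values below are illustrative rather than taken from the Titanic data.
# a sketch of a TF2-style replacement for a numeric feature column
import numpy as np
age_normalizer = tf.keras.layers.Normalization(axis=None)
age_normalizer.adapt(np.array([22.0, 38.0, 54.0]))  # illustrative values; normally adapt on the real training column
age_normalizer(np.array([38.0]))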
Setup
Start with a couple of necessary TensorFlow imports,
End of explanation
x_train = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
x_eval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
x_train['sex'].replace(('male', 'female'), (0, 1), inplace=True)
x_eval['sex'].replace(('male', 'female'), (0, 1), inplace=True)
x_train['alone'].replace(('n', 'y'), (0, 1), inplace=True)
x_eval['alone'].replace(('n', 'y'), (0, 1), inplace=True)
x_train['class'].replace(('First', 'Second', 'Third'), (1, 2, 3), inplace=True)
x_eval['class'].replace(('First', 'Second', 'Third'), (1, 2, 3), inplace=True)
x_train.drop(['embark_town', 'deck'], axis=1, inplace=True)
x_eval.drop(['embark_town', 'deck'], axis=1, inplace=True)
y_train = x_train.pop('survived')
y_eval = x_eval.pop('survived')
# Data setup for TensorFlow 1 with `tf.estimator`
def _input_fn():
return tf1.data.Dataset.from_tensor_slices((dict(x_train), y_train)).batch(32)
def _eval_input_fn():
return tf1.data.Dataset.from_tensor_slices((dict(x_eval), y_eval)).batch(32)
FEATURE_NAMES = [
'age', 'fare', 'sex', 'n_siblings_spouses', 'parch', 'class', 'alone'
]
feature_columns = []
for fn in FEATURE_NAMES:
feat_col = tf1.feature_column.numeric_column(fn, dtype=tf.float32)
feature_columns.append(feat_col)
Explanation: prepare some simple data for demonstration from the standard Titanic dataset,
End of explanation
def create_sample_optimizer(tf_version):
if tf_version == 'tf1':
optimizer = lambda: tf.keras.optimizers.Ftrl(
l1_regularization_strength=0.001,
learning_rate=tf1.train.exponential_decay(
learning_rate=0.1,
global_step=tf1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.9))
elif tf_version == 'tf2':
optimizer = tf.keras.optimizers.Ftrl(
l1_regularization_strength=0.001,
learning_rate=tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.1, decay_steps=10000, decay_rate=0.9))
return optimizer
Explanation: and create a method to instantiate a simplistic sample optimizer to use with our various TensorFlow 1 Estimator and TensorFlow 2 Keras models.
End of explanation
linear_estimator = tf.estimator.LinearEstimator(
head=tf.estimator.BinaryClassHead(),
feature_columns=feature_columns,
optimizer=create_sample_optimizer('tf1'))
linear_estimator.train(input_fn=_input_fn, steps=100)
linear_estimator.evaluate(input_fn=_eval_input_fn, steps=10)
Explanation: Example 1: Migrating from LinearEstimator
TF1: Using LinearEstimator
In TensorFlow 1, you can use tf.estimator.LinearEstimator to create a baseline linear model for regression and classification problems.
End of explanation
linear_model = tf.compat.v1.keras.experimental.LinearModel()
linear_model.compile(loss='mse', optimizer=create_sample_optimizer('tf2'), metrics=['accuracy'])
linear_model.fit(x_train, y_train, epochs=10)
linear_model.evaluate(x_eval, y_eval, return_dict=True)
Explanation: TF2: Using Keras LinearModel
In TensorFlow 2, you can create an instance of the Keras tf.compat.v1.keras.models.LinearModel which is the substitute to the tf.estimator.LinearEstimator. The tf.compat.v1.keras path is used to signify that the pre-made model exists for compatibility.
End of explanation
dnn_estimator = tf.estimator.DNNEstimator(
head=tf.estimator.BinaryClassHead(),
feature_columns=feature_columns,
hidden_units=[128],
activation_fn=tf.nn.relu,
optimizer=create_sample_optimizer('tf1'))
dnn_estimator.train(input_fn=_input_fn, steps=100)
dnn_estimator.evaluate(input_fn=_eval_input_fn, steps=10)
Explanation: Example 2: Migrating from DNNEstimator
TF1: Using DNNEstimator
In TensorFlow 1, you can use tf.estimator.DNNEstimator to create a baseline DNN model for regression and classification problems.
End of explanation
dnn_model = tf.keras.models.Sequential(
[tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1)])
dnn_model.compile(loss='mse', optimizer=create_sample_optimizer('tf2'), metrics=['accuracy'])
dnn_model.fit(x_train, y_train, epochs=10)
dnn_model.evaluate(x_eval, y_eval, return_dict=True)
Explanation: TF2: Using Keras to Create a Custom DNN Model
In TensorFlow 2, you can create a custom DNN model to substitute for one generated by tf.estimator.DNNEstimator, with similar levels of user-specified customization (for instance, as in the previous example, the ability to customize a chosen model optimizer).
A similar workflow can be used to replace tf.estimator.experimental.RNNEstimator with a Keras RNN Model. Keras provides a number of built-in, customizable choices by way of tf.keras.layers.RNN, tf.keras.layers.LSTM, and tf.keras.layers.GRU - see here for more details.
End of explanation
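A minimal sketch of that RNN-flavored replacement (the layer sizes and input shape are illustrative assumptions, not tied to the Titanic features used above):
# a sketch of a Keras RNN model as a stand-in for RNNEstimator
rnn_model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(None, 1)),  # (timesteps, features)
    tf.keras.layers.Dense(1)])
rnn_model.compile(loss='mse', optimizer=create_sample_optimizer('tf2'))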
optimizer = create_sample_optimizer('tf1')
combined_estimator = tf.estimator.DNNLinearCombinedEstimator(
head=tf.estimator.BinaryClassHead(),
# Wide settings
linear_feature_columns=feature_columns,
linear_optimizer=optimizer,
# Deep settings
dnn_feature_columns=feature_columns,
dnn_hidden_units=[128],
dnn_optimizer=optimizer)
combined_estimator.train(input_fn=_input_fn, steps=100)
combined_estimator.evaluate(input_fn=_eval_input_fn, steps=10)
Explanation: Example 3: Migrating from DNNLinearCombinedEstimator
TF1: Using DNNLinearCombinedEstimator
In TensorFlow 1, you can use tf.estimator.DNNLinearCombinedEstimator to create a baseline combined model for regression and classification problems with customization capacity for both its linear and DNN components.
End of explanation
# Create LinearModel and DNN Model as in Examples 1 and 2
optimizer = create_sample_optimizer('tf2')
linear_model = tf.compat.v1.keras.experimental.LinearModel()
linear_model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
linear_model.fit(x_train, y_train, epochs=10, verbose=0)
dnn_model = tf.keras.models.Sequential(
[tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1)])
dnn_model.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
combined_model = tf.compat.v1.keras.experimental.WideDeepModel(linear_model,
dnn_model)
combined_model.compile(
optimizer=[optimizer, optimizer], loss='mse', metrics=['accuracy'])
combined_model.fit([x_train, x_train], y_train, epochs=10)
combined_model.evaluate(x_eval, y_eval, return_dict=True)
Explanation: TF2: Using Keras WideDeepModel
In TensorFlow 2, you can create an instance of the Keras tf.compat.v1.keras.models.WideDeepModel to substitute for one generated by tf.estimator.DNNLinearCombinedEstimator, with similar levels of user-specified customization (for instance, as in the previous example, the ability to customize a chosen model optimizer).
This WideDeepModel is constructed on the basis of a constituent LinearModel and a custom DNN Model, both of which are discussed in the preceding two examples. A custom linear model can also be used in place of the built-in Keras LinearModel if desired.
If you would like to build your own model instead of a canned estimator, check out how to build a keras.Sequential model. For more information on custom training and optimizers, you can also check out this guide.
End of explanation
!pip install tensorflow_decision_forests
Explanation: Example 4: Migrating from BoostedTreesEstimator
TF1: Using BoostedTreesEstimator
In TensorFlow 1, you could use tf.estimator.BoostedTreesEstimator to create a baseline Gradient Boosting model using an ensemble of decision trees for regression and classification problems. This functionality is no longer included in TensorFlow 2.
bt_estimator = tf1.estimator.BoostedTreesEstimator(
head=tf.estimator.BinaryClassHead(),
n_batches_per_layer=1,
max_depth=10,
n_trees=1000,
feature_columns=feature_columns)
bt_estimator.train(input_fn=_input_fn, steps=1000)
bt_estimator.evaluate(input_fn=_eval_input_fn, steps=100)
TF2: Using TensorFlow Decision Forests
In TensorFlow 2, tf.estimator.BoostedTreesEstimator is replaced by tfdf.keras.GradientBoostedTreesModel from the TensorFlow Decision Forests package.
TensorFlow Decision Forests provides various advantages over the tf.estimator.BoostedTreesEstimator, notably regarding quality, speed, ease of use and flexibility. To learn about TensorFlow Decision Forests, start with the beginner colab.
The following example shows how to train a Gradient Boosted Trees model using TensorFlow 2:
Install TensorFlow Decision Forests.
End of explanation
train_dataframe = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
eval_dataframe = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
# Convert the Pandas Dataframes into TensorFlow datasets.
train_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(train_dataframe, label="survived")
eval_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(eval_dataframe, label="survived")
Explanation: Create a TensorFlow dataset. Note that Decision Forests support natively many types of features and do not need pre-processing.
End of explanation
# Use the default hyper-parameters of the model.
gbt_model = tfdf.keras.GradientBoostedTreesModel()
gbt_model.fit(train_dataset)
Explanation: Train the model on the train_dataset dataset.
End of explanation
gbt_model.compile(metrics=['accuracy'])
gbt_evaluation = gbt_model.evaluate(eval_dataset, return_dict=True)
print(gbt_evaluation)
Explanation: Evaluate the quality of the model on the eval_dataset dataset.
End of explanation
# Train a Random Forest model
rf_model = tfdf.keras.RandomForestModel()
rf_model.fit(train_dataset)
# Evaluate the Random Forest model
rf_model.compile(metrics=['accuracy'])
rf_evaluation = rf_model.evaluate(eval_dataset, return_dict=True)
print(rf_evaluation)
Explanation: Gradient Boosted Trees is just one of the many decision forest algorithms available in TensorFlow Decision Forests. For example, Random Forests (available as tfdf.keras.RandomForestModel) are very resistant to overfitting, while CART (available as tfdf.keras.CartModel) is great for model interpretation.
In the next example, we train and evaluate a Random Forest model.
End of explanation
# Train a CART model
cart_model = tfdf.keras.CartModel()
cart_model.fit(train_dataset)
# Plot the CART model
tfdf.model_plotter.plot_model_in_colab(cart_model, max_depth=2)
Explanation: Finally, in the next example, we train and plot a CART model.
End of explanation |
4,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - Sorting faster than expected
In the general case, the cost of a sorting algorithm is $O(n \ln n)$. But there are special cases for which we can do better. For example, suppose the collection to sort contains many copies of the same elements.
Step1: Sorting a smaller set
Step2: We can compute the distribution of these elements.
Step3: Rather than sorting the initial array, we can sort the histogram, which contains fewer elements.
Step4: Then we rebuild the initial array, but sorted
Step5: We create a function that puts all the operations together. The cost of the new sort is $O(d \ln d + n)$, where $d$ is the number of distinct elements of the initial set.
Step6: The execution times are not very conclusive, because the sort function is implemented in C and uses the timsort algorithm. Timsort is an adaptive algorithm, like smoothsort: its cost depends on the data being sorted. It first identifies the runs that are already sorted, sorts the remaining parts, and merges everything. Sorting an already-sorted array amounts to detecting that it is already sorted, so the cost is then linear, $O(n)$. This explains the comment The slowest run took 19.47 times longer than the fastest. below, where the first sort takes much longer than the following ones, which operate on an already-sorted array. In any case, it is not easy to compare the two implementations in terms of runtime.
Step7: Evolution as a function of n
To validate the initial idea, we look at how the two algorithms behave as the number of observations grows.
Step8: Python's sorting algorithm is quite efficient, since its cost looks linear at first glance.
Step9: We add a logarithm.
Step10: We need to make the difference more pronounced. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: 1A.algo - Sorting faster than expected
In the general case, the cost of a sorting algorithm is $O(n \ln n)$. But there are special cases for which we can do better. For example, suppose the collection to sort contains many copies of the same elements.
End of explanation
import random
ens = [random.randint(0,99) for i in range(10000)]
Explanation: Sorting a smaller set
End of explanation
def histogram(ens):
hist = {}
for e in ens:
hist[e] = hist.get(e, 0) + 1
return hist
hist = histogram(ens)
list(hist.items())[:5]
Explanation: We can compute the distribution of these elements.
End of explanation
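The standard library builds the same histogram in one line; a small sketch using collections.Counter (hist_counter is an illustrative name):
# the same histogram via the standard library (a sketch)
from collections import Counter

hist_counter = Counter(ens)
list(hist_counter.items())[:5]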
sorted_hist = list(hist.items())
sorted_hist.sort()
Explanation: Rather than sorting the initial array, we can sort the histogram, which contains fewer elements.
End of explanation
def tableau(sorted_hist):
res = []
for k, v in sorted_hist:
for i in range(v):
res.append(k)
return res
sorted_ens = tableau(sorted_hist)
sorted_ens[:5]
Explanation: Then we rebuild the initial array, but sorted:
End of explanation
def sort_with_hist(ens):
hist = histogram(ens)
sorted_hist = list(hist.items())
sorted_hist.sort()
return tableau(sorted_hist)
from random import shuffle
shuffle(ens)
%timeit sort_with_hist(ens)
def sort_with_nohist(ens):
return list(sorted(ens))
shuffle(ens)
%timeit sort_with_nohist(ens)
Explanation: We create a function that puts all the operations together. The cost of the new sort is $O(d \ln d + n)$, where $d$ is the number of distinct elements of the initial set.
End of explanation
def sort_with_nohist_nocopy(ens):
ens.sort()
return ens
shuffle(ens)
%timeit sort_with_nohist_nocopy(ens)
Explanation: The execution times are not very conclusive, because the sort function is implemented in C and uses the timsort algorithm. Timsort is an adaptive algorithm, like smoothsort: its cost depends on the data being sorted. It first identifies the runs that are already sorted, sorts the remaining parts, and merges everything. Sorting an already-sorted array amounts to detecting that it is already sorted, so the cost is then linear, $O(n)$. This explains the comment The slowest run took 19.47 times longer than the fastest. below, where the first sort takes much longer than the following ones, which operate on an already-sorted array. In any case, it is not easy to compare the two implementations in terms of runtime.
End of explanation
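To see timsort's adaptivity directly, here is a small sketch that times sorting a shuffled list against an already-sorted copy (the exact numbers depend on the machine):
# timsort is much faster on already-sorted input (a sketch)
import timeit

shuffled = [random.randint(0, 99) for i in range(100000)]
already_sorted = sorted(shuffled)

print(timeit.timeit(lambda: sorted(shuffled), number=20))
print(timeit.timeit(lambda: sorted(already_sorted), number=20))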
def tableaux_aleatoires(ns, d):
for n in ns:
yield [random.randint(0,d-1) for i in range(n)]
import pandas
import time
def mesure(enss, fonc):
res = []
for ens in enss:
cl = time.perf_counter()
fonc(ens)
diff = time.perf_counter() - cl
res.append(dict(n=len(ens), time=diff))
return pandas.DataFrame(res)
df = mesure(tableaux_aleatoires(range(100, 30000, 100), 100), sort_with_nohist)
df.plot(x="n", y="time")
df = mesure(tableaux_aleatoires(range(100, 30000, 100), 100), sort_with_hist)
df.plot(x="n", y="time")
Explanation: Evolution as a function of n
To validate the initial idea, we look at how the two algorithms behave as the number of observations grows.
End of explanation
df = mesure(tableaux_aleatoires(range(100, 30000, 200), int(1e10)), sort_with_nohist)
df.plot(x="n", y="time")
Explanation: Python's sorting algorithm is quite efficient, since its cost looks linear at first glance.
End of explanation
from math import log
df["nlnn"] = df["n"] * df["n"].apply(log) * 4.6e-8
df.plot(x="n", y=["time", "nlnn"])
Explanation: We add a logarithm.
End of explanation
from math import exp
list(map(int, map(exp, range(5, 14))))
df100 = mesure(tableaux_aleatoires(map(int, map(exp, range(5, 14))), 100), sort_with_nohist)
dfM = mesure(tableaux_aleatoires(map(int, map(exp, range(5, 14))), 1e9), sort_with_nohist)
df = df100.copy()
df.columns = ["n", "d=100"]
df["d=1e9"] = dfM["time"]
df.plot(x="n", y=["d=100", "d=1e9"])
Explanation: We need to make the difference more pronounced.
End of explanation |
4,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planet Analytics API Tutorial
Getting Analytic Feed Results
This notebook shows how to paginate through Planet Analytic Feed Results for an existing analytics Subscription to construct a combined geojson feature collection that can be imported into geospatial analysis tools.
Setup
To use this notebook, you need an api key for a Planet account with access to the Analytics API.
API Key and Test Connection
Set API_KEY below if it is not already in your notebook as an environment variable.
See the Analytics API Docs for more details on authentication.
Step1: Specify Analytics Subscription of Interest
Below we will list your available subscription ids and some metadata in a dataframe and then select a subscription of interest.
Step2: Pick a subscription from which to pull results, and replace the ID below.
Step3: Getting subscription results
In this section, we will make sure that we can get data from the subscription of interest by fetching the latest page of results.
Step4: Pagination
The response json above will only include the most recent 250 detections by default. For subscriptions with many results, you can page through
Step5: More results can be fetched by following the next link. Let's look at the links section of the response
Step7: To get more results, we will want the link with a rel of next
Step8: Using this url, we can fetch the next page of results
Step9: Aggregating results
Each page of results comes as one feature collection. We can combine the features from different pages of results into one big feature collection. Below we will page through all results in the subscription from the past 3 months and make a combined feature collection.
Results in the API are ordered by a created timestamp. This corresponds the time that the feature was published to a Feed and does not necessarily match the observed timestamp in the feature's properties, which corresponds to when the source imagery for a feature was collected.
Step10: Saving Results
We can now save the combined geojson feature collection to a file.
Step11: After downloading the aggregated geojson file with the file link above, try importing the data into a geojson-compatible tool for visualization and exploration | Python Code:
import os
import requests
# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')
# alternatively, you can just set your API key directly as a string variable:
# API_KEY = "YOUR_PLANET_API_KEY_HERE"
# construct auth tuple for use in the requests library
BASIC_AUTH = (API_KEY, '')
BASE_URL = "https://api.planet.com/analytics/"
subscriptions_list_url = BASE_URL + 'subscriptions' + '?limit=1000'
resp = requests.get(subscriptions_list_url, auth=BASIC_AUTH)
if resp.status_code == 200:
print('Yay, you can access the Analytics API')
subscriptions = resp.json()['data']
print('Available subscriptions:', len(subscriptions))
else:
print('Something is wrong:', resp.content)
Explanation: Planet Analytics API Tutorial
Getting Analytic Feed Results
This notebook shows how to paginate through Planet Analytic Feed Results for an existing analytics Subscription to construct a combined geojson feature collection that can be imported into geospatial analysis tools.
Setup
To use this notebook, you need an api key for a Planet account with access to the Analytics API.
API Key and Test Connection
Set API_KEY below if it is not already in your notebook as an environment variable.
See the Analytics API Docs for more details on authentication.
End of explanation
import pandas as pd
pd.options.display.max_rows = 1000
df = pd.DataFrame(subscriptions)
df['start'] = pd.to_datetime(df['startTime']).dt.date
df['end'] = pd.to_datetime(df['endTime']).dt.date
df[['id', 'title', 'description', 'start', 'end']]
Explanation: Specify Analytics Subscription of Interest
Below we will list your available subscription ids and some metadata in a dataframe and then select a subscription of interest.
End of explanation
# This example ID is for a subscription of ship detections in the Port of Oakland
# You can replace this ID with your own subscription ID
SUBSCRIPTION_ID = '9db92275-1d89-4d3b-a0b6-68abd2e94142'
Explanation: Pick a subscription from which to pull results, and replace the ID below.
End of explanation
import json
# Construct the url for the subscription's results collection
subscription_results_url = BASE_URL + 'collections/' + SUBSCRIPTION_ID + '/items'
print("Request URL: {}".format(subscription_results_url))
# Get subscription results collection
resp = requests.get(subscription_results_url, auth=BASIC_AUTH)
if resp.status_code == 200:
print('Yay, you can access analytic feed results!')
subscription_results = resp.json()
print(json.dumps(subscription_results, sort_keys=True, indent=4))
else:
print('Something is wrong:', resp.content)
Explanation: Getting subscription results
In this section, we will make sure that we can get data from the subscription of interest by fetching the latest page of results.
End of explanation
print(len(subscription_results['features']))
Explanation: Pagination
The response json above will only include the most recent 250 detections by default. For subscriptions with many results, you can page through
End of explanation
subscription_results['links']
Explanation: More results can be fetched by following the next link. Let's look at the links section of the response:
End of explanation
def get_next_link(results_json):
"""Given a response json from one page of subscription results, get the url for the next page of results."""
for link in results_json['links']:
if link['rel'] == 'next':
return link['href']
return None
next_link = get_next_link(subscription_results)
print('next page url: ' + next_link)
Explanation: To get more results, we will want the link with a rel of next
End of explanation
next_results = requests.get(next_link, auth=BASIC_AUTH).json()
print(json.dumps(next_results, sort_keys=True, indent=4))
Explanation: Using this url, we can fetch the next page of results
End of explanation
latest_feature = subscription_results['features'][0]
creation_datestring = latest_feature['created']
print('latest feature creation date:', creation_datestring)
from dateutil.parser import parse
# this date string can be parsed as a datetime and converted to a date
latest_date = parse(creation_datestring).date()
latest_date
from datetime import timedelta
min_date = latest_date - timedelta(days=90)
print('Aggregate all detections from after this date:', min_date)
feature_collection = {'type': 'FeatureCollection', 'features': []}
next_link = subscription_results_url
while next_link:
results = requests.get(next_link, auth=BASIC_AUTH).json()
next_features = results['features']
if next_features:
latest_feature_creation = parse(next_features[0]['created']).date()
earliest_feature_creation = parse(next_features[-1]['created']).date()
print('Fetched {} features fetched ({}, {})'.format(
len(next_features), earliest_feature_creation, latest_feature_creation))
feature_collection['features'].extend(next_features)
next_link = get_next_link(results)
else:
next_link = None
print('Total features: {}'.format(len(feature_collection['features'])))
Explanation: Aggregating results
Each page of results comes as one feature collection. We can combine the features from different pages of results into one big feature collection. Below we will page through all results in the subscription from the past 3 months and make a combined feature collection.
Results in the API are ordered by a created timestamp. This corresponds the time that the feature was published to a Feed and does not necessarily match the observed timestamp in the feature's properties, which corresponds to when the source imagery for a feature was collected.
End of explanation
from IPython.display import FileLink, FileLinks
os.makedirs('data', exist_ok=True)
filename = 'data/collection_{}.geojson'.format(SUBSCRIPTION_ID)
with open(filename, 'w') as file:
json.dump(feature_collection, file)
FileLink(filename)
Explanation: Saving Results
We can now save the combined geojson feature collection to a file.
End of explanation
import geopandas as gpd
gpd.read_file(filename)
Explanation: After downloading the aggregated geojson file with the file link above, try importing the data into a geojson-compatible tool for visualization and exploration:
- geojson.io
- kepler gl
The saved geojson file can also be used to make a geopandas dataframe.
End of explanation |
4,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Unsupervised Anomaly Detection based on Forecasts
Anomaly detection detects data points in data that does not fit well with the rest of data. In this notebook we demonstrate how to do anomaly detection using Chronos's built-in model MTNet
For demonstration, we use the publicly available cluster trace data cluster-trace-v2018 of Alibaba Open Cluster Trace Program. You can find the dataset introduction <a href="https
Step3: Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe.Steps are as below
Step4: Below are some example records of the data
Step5: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset.
For the machine_usage data, the pre-processing convert the time step in seconds to timestamp starting from 2018-01-01.
Step6: Feature Engineering & Data Preperation
For feature engineering, we use hour as feature in addition to the target cpu usage.
For data preperation, we resample the average of cpu_usage in minutes, impute the data to handle missing data and scale the data. At last we generate the sample in numpy ndarray for Forecaster to use.
We generate a built-in TSDataset to complete the whole processing.
Step7: Time series forecasting
Step8: First, we initialize a mtnet_forecaster according to input data shape. Specifcally, look_back should equal (long_series_num+1)*series_length . Details refer to chronos docs <a href="https
Step9: MTNet needs to preprocess the X into another format, so we call MTNetForecaster.preprocess_input on train_x and test_x.
Step10: Now we train the model and wait till it finished.
Step11: Use the model for prediction and inverse the scaling of the prediction results.
Step12: Calculate the symetric mean absolute percentage error.
Step13: Anomaly detection
Step14: Get a new dataframe which contains y_true,y_pred,anomalies value.
Step15: Draw anomalies in line chart. | Python Code:
def get_result_df(y_true_unscale, y_pred_unscale, ano_index, look_back,target_col='cpu_usage'):
"""Add prediction and anomaly value to dataframe."""
result_df = pd.DataFrame({"y_true": y_true_unscale.squeeze(), "y_pred": y_pred_unscale.squeeze()})
result_df['anomalies'] = 0
result_df.loc[result_df.index[ano_index], 'anomalies'] = 1
result_df['anomalies'] = result_df['anomalies'] > 0
return result_df
def plot_anomalies_value(date, y_true, y_pred, anomalies):
"""Plot the anomalies value."""
fig, axs = plt.subplots(figsize=(16,6))
axs.plot(date, y_true,color='blue', label='y_true')
axs.plot(date, y_pred,color='orange', label='y_pred')
axs.scatter(date[anomalies].tolist(), y_true[anomalies], color='red', label='anomalies value')
axs.set_title('the anomalies value')
plt.xlabel('datetime')
plt.legend(loc='upper left')
plt.show()
Explanation: Unsupervised Anomaly Detection based on Forecasts
Anomaly detection detects data points in data that does not fit well with the rest of data. In this notebook we demonstrate how to do anomaly detection using Chronos's built-in model MTNet
For demonstration, we use the publicly available cluster trace data cluster-trace-v2018 of Alibaba Open Cluster Trace Program. You can find the dataset introduction <a href="https://github.com/alibaba/clusterdata/blob/master/cluster-trace-v2018/trace_2018.md" target="_blank">here</a>. In particular, we use machine usage data to demonstrate anomaly detection, you can download the separate data file directly with <a href="http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz" target="_blank">machine_usage</a>.
Helper functions
This section defines some helper functions to be used in the following procedures. You can refer to it later when they're used.
End of explanation
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df_1932 = pd.read_csv("m_1932.csv", header=None, usecols=[1,2,3], names=["time_step", "cpu_usage","mem_usage"])
Explanation: Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. Steps are as below:
* First, download the raw data <a href="http://clusterdata2018pubcn.oss-cn-beijing.aliyuncs.com/machine_usage.tar.gz" target="_blank">machine_usage</a>, or run the script get_data.sh to download the raw data. It will download the resource usage of each machine from m_1932 to m_2085.
* Second, run grep m_1932 machine_usage.csv > m_1932.csv to extract the records of machine 1932, or run extract_data.sh. We use machine 1932 as an example in this notebook; you can choose any machine in a similar way.
* Finally, use pandas to load m_1932.csv into a dataframe as shown below.
End of explanation
df_1932.head()
df_1932.sort_values(by="time_step", inplace=True)
df_1932.reset_index(inplace=True)
df_1932.sort_values(by="time_step").plot(y="cpu_usage", x="time_step", figsize=(16,6),title="cpu_usage of machine 1932")
Explanation: Below are some example records of the data
End of explanation
df_1932.reset_index(inplace=True)
df_1932["time_step"] = pd.to_datetime(df_1932["time_step"], unit='s', origin=pd.Timestamp('2018-01-01'))
Explanation: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different datasets.
For the machine_usage data, the pre-processing converts the time steps in seconds to timestamps starting from 2018-01-01.
End of explanation
from zoo.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
# we look back one hour of data, which is at a frequency of 1 minute.
look_back = 60
horizon = 1
tsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df_1932, dt_col="time_step", target_col="cpu_usage", with_split=True, val_ratio = 0.1, test_ratio=0.1)
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_val, tsdata_test]:
tsdata.resample(interval='1min', merge_mode="mean")\
.impute(mode="last")\
.gen_dt_feature()\
.scale(standard_scaler, fit=(tsdata is tsdata_train))\
.roll(lookback=look_back, horizon=horizon, feature_col=["HOUR"])
x_train, y_train = tsdata_train.to_numpy()
x_val, y_val = tsdata_val.to_numpy()
x_test, y_test = tsdata_test.to_numpy()
y_train, y_val, y_test = y_train[:, 0, :], y_val[:, 0, :], y_test[:, 0, :]
x_train.shape, y_train.shape, x_val.shape, y_val.shape, x_test.shape, y_test.shape
Explanation: Feature Engineering & Data Preparation
For feature engineering, we use hour as feature in addition to the target cpu usage.
For data preparation, we resample the average cpu_usage per minute, impute the data to handle missing values and scale the data. Finally, we generate the samples as numpy ndarrays for the Forecaster to use.
We generate a built-in TSDataset to complete the whole processing.
End of explanation
from zoo.chronos.forecaster.mtnet_forecaster import MTNetForecaster
Explanation: Time series forecasting
End of explanation
mtnet_forecaster = MTNetForecaster(target_dim=horizon,
feature_dim=x_train.shape[-1],
long_series_num=3,
series_length=15
)
Explanation: First, we initialize a mtnet_forecaster according to the input data shape. Specifically, look_back should equal (long_series_num+1)*series_length. For details, refer to the Chronos docs <a href="https://analytics-zoo.github.io/master/#Chronos/overview" target="_blank">here</a>.
End of explanation
# mtnet requires reshape of input x before feeding into model.
x_train_mtnet = mtnet_forecaster.preprocess_input(x_train)
x_val_mtnet = mtnet_forecaster.preprocess_input(x_val)
x_test_mtnet = mtnet_forecaster.preprocess_input(x_test)
Explanation: MTNet needs to preprocess the X into another format, so we call MTNetForecaster.preprocess_input on train_x and test_x.
End of explanation
%%time
hist = mtnet_forecaster.fit(x = x_train_mtnet, y = y_train, batch_size=128, epochs=20)
Explanation: Now we train the model and wait until it finishes.
End of explanation
y_pred_val = mtnet_forecaster.predict(x_val_mtnet)
y_pred_test = mtnet_forecaster.predict(x_test_mtnet)
y_pred_val_unscale = tsdata_val.unscale_numpy(np.expand_dims(y_pred_val, axis=1))[:, 0, :]
y_pred_test_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_pred_test, axis=1))[:, 0, :]
y_val_unscale = tsdata_val.unscale_numpy(np.expand_dims(y_val, axis=1))[:, 0, :]
y_test_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_test, axis=1))[:, 0, :]
Explanation: Use the model for prediction and inverse the scaling of the prediction results.
End of explanation
# evaluate with sMAPE
from zoo.orca.automl.metrics import Evaluator
smape = Evaluator.evaluate("smape", y_test_unscale, y_pred_test_unscale)
print(f"sMAPE is {'%.2f' % smape}")
Explanation: Calculate the symmetric mean absolute percentage error.
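For reference, one common convention for this metric is $\mathrm{sMAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\frac{|y_t-\hat{y}_t|}{(|y_t|+|\hat{y}_t|)/2}$, where $\hat{y}_t$ is the forecast; exact scaling conventions vary between libraries, so treat this as a sketch of the idea rather than the precise formula used by Evaluator.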
End of explanation
from zoo.chronos.detector.anomaly import ThresholdDetector
ratio=0.01
thd=ThresholdDetector()
thd.set_params(ratio=ratio)
thd.fit(y_val_unscale,y_pred_val_unscale)
print("The threshold of validation dataset is:",thd.th)
anomaly_scores_val = thd.score()
val_res_ano_idx = np.where(anomaly_scores_val > 0)[0]
print("The index of anomalies in validation dataset is:",val_res_ano_idx)
anomaly_scores_test = thd.score(y_test_unscale,y_pred_test_unscale)
test_res_ano_idx = np.where(anomaly_scores_test > 0)[0]
print("The index of anoalies in test dataset is:",test_res_ano_idx)
Explanation: Anomaly detection
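Conceptually, a threshold detector of this kind learns a cutoff on the forecast error from the validation set and flags test points whose error exceeds it. A rough illustrative sketch of that idea (not the actual Chronos ThresholdDetector implementation) is:
# illustrative only: pick a threshold as a high quantile of the validation forecast errors
err_val = np.abs(y_val_unscale - y_pred_val_unscale).ravel()
threshold = np.quantile(err_val, 1 - ratio)
err_test = np.abs(y_test_unscale - y_pred_test_unscale).ravel()
anomaly_idx = np.where(err_test > threshold)[0]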
End of explanation
val_result_df = get_result_df(y_val_unscale, y_pred_val_unscale, val_res_ano_idx, look_back)
test_result_df = get_result_df(y_test_unscale, y_pred_test_unscale, test_res_ano_idx, look_back)
Explanation: Get a new dataframe which contains the y_true, y_pred and anomaly values.
End of explanation
plot_anomalies_value(val_result_df.index, val_result_df.y_true, val_result_df.y_pred, val_result_df.anomalies)
plot_anomalies_value(test_result_df.index, test_result_df.y_true, test_result_df.y_pred, test_result_df.anomalies)
Explanation: Draw the anomalies in a line chart.
End of explanation |
4,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
## <p style="text-align
Step1: how to draw samples from a gaussian distribution
Step2: other distributions ...
Step3: $\log_{10}(d) = 1 + \mu /5 $
Step4: 2. plotting
Step5: 3. IO (text files & fits files)
Step6: LAMOST spectra
Step7: CRVAL1 = 3.5682 / Central wavelength (log10) of first pixel
CD1_1 = 0.0001 / Log10 dispersion per pixel
CRPIX1 = 1 / Starting pixel (1-indexed)
CTYPE1 = 'LINEAR ' / | Python Code:
import numpy as np
print(dir(np.random))
Explanation: ## <p style="text-align: center; font-size: 4em;"> Python tutorial 2 </p>
1. random number generators: numpy.random
https://docs.scipy.org/doc/numpy/reference/routines.random.html
End of explanation
%pylab inline
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams.update({'font.size': 20})
rdata = np.random.randn(1000)
fig = plt.figure(figsize=(6, 4))
plt.hist(rdata)
print(np.mean(rdata), np.median(rdata), np.std(rdata))
np.std(np.random.randn(1000) + np.random.randn(1000))
Explanation: how to draw samples from a gaussian distribution
End of explanation
randexp = np.random.exponential(2., size=(1000))
hist(randexp, np.linspace(0,10,50));
randps = np.random.poisson(10, size=(10000,))
hist(randps, np.arange(20));
M = 4.
m = 15.
merr = 0.1
rand_m = np.random.randn(1000)*0.1+m
hist(rand_m);
Explanation: other distributions ...
End of explanation
rand_d = 10.**(1+0.2*(rand_m-M))
hist(rand_d, np.linspace(1300, 1900, 30));
Explanation: $\log_{10}(d) = 1 + \mu /5 $
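This is the distance modulus relation: with $\mu = m - M = 5\log_{10}(d/10\,\mathrm{pc})$, we get $\log_{10}(d) = 1 + \mu/5$ for $d$ in parsecs, which is exactly what the cell above evaluates via 10.**(1+0.2*(rand_m-M)).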
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams.update({'font.size':20})
# fig = plt.figure(figsize=(10,10))
x = np.linspace(0, 6*np.pi, 100)
plt.plot(x, np.cos(x), 'rv--');
plt.plot(x, np.sin(x), 'bs-.', alpha=.1);
plt.scatter(x, np.cos(x)+0.2, s=np.random.rand(*x.shape)*80, c=np.sin(x)+1)
Explanation: 2. plotting
End of explanation
# use numpy.savetxt & numpy.loadtxt
a = np.random.randn(4, 5)
print(a)
np.savetxt('./data/text/rdata.dat', a)
!gedit ./data/text/rdata.dat
b = np.loadtxt('./data/text/rdata.dat')
print(b)
a==b.reshape(4, 5)
impath = "./data/image_data/G178_final.850.fits"
%pylab inline
from matplotlib import rcParams
rcParams.update({'font.size': 20})
from aplpy import FITSFigure
fig = FITSFigure(impath)
fig.show_colorscale()
impath = "./data/wise_image/w1_cut.fits"
%pylab inline
%matplotlib inline
from matplotlib import rcParams
rcParams.update({'font.size': 20})
from aplpy import FITSFigure
fig = FITSFigure(impath)
fig.show_colorscale()
fig.show_grayscale()
Explanation: 3. IO (text files & fits files)
End of explanation
ls ./data/lamost_dr2_spectra/
specpath = "./data/lamost_dr2_spectra/spec-55892-F9205_sp09-174.fits"
from astropy.io import fits
hl = fits.open(specpath)
hl.info()
hl
hl[0]
hl[0].header
Explanation: LAMOST spectra
End of explanation
wave = 10.**(hl[0].header['CRVAL1']+np.arange(hl[0].header['NAXIS1'])*hl[0].header['CD1_1'])
wave
np.log10(wave)
flux = hl[0].data # [flux, ivar, wave, and_mask, or_mask]
flux
%pylab
%matplotlib inline
fig = figure(figsize=(10, 5))
plt.plot(wave, flux[0, :])
# fig.savefig("here goes the file path")
Explanation: CRVAL1 = 3.5682 / Central wavelength (log10) of first pixel
CD1_1 = 0.0001 / Log10 dispersion per pixel
CRPIX1 = 1 / Starting pixel (1-indexed)
CTYPE1 = 'LINEAR ' /
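These keywords define the wavelength grid computed above: pixel $i$ (0-indexed) maps to $\lambda_i = 10^{\mathrm{CRVAL1} + i\,\mathrm{CD1\_1}}$ Å, so the first pixel lies at $10^{3.5682}\approx 3700$ Å and each step is a constant factor of $10^{0.0001}$ in wavelength.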
End of explanation |
4,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multiple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the user wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file containing the hysteretic parameters using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead the default parameters are to be assumed, please set the sdof_hysteresis variable to "Default"
Step2: Load ground motion records
Regarding the ground motions to be used in the Multiple Stripe Analysis, the following inputs are required
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
Step4: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
import MSA_on_SDOF
from rmtk.vulnerability.common import utils
import numpy as np
%matplotlib inline
Explanation: Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multiple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../rmtk_data/capacity_curves_Sd-Sa.csv"
sdof_hysteresis = "Default"
#sdof_hysteresis = "../../../../../rmtk_data/pinching_parameters.csv"
from read_pinching_parameters import read_parameters
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the user wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file containing the hysteretic parameters using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead the default parameters are to be assumed, please set the sdof_hysteresis variable to "Default"
End of explanation
gmrs_folder = "../../../../../rmtk_data/accelerograms"
minT, maxT = 0.1, 2.0
no_bins = 4
no_rec_bin = 4
record_scaled_folder = "../../../../../rmtk_data/Scaled_trial"
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Regarding the ground motions to be used in the Multiple Stripe Analysis, the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder. In this folder there should be a csv file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of Intensity Measure bins.
4. no_rec_bin: number of records per bin
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be un-commented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
End of explanation
damping_ratio = 0.05
degradation = False
msa = {}; msa['n. bins']=no_bins; msa['records per bin']=no_rec_bin; msa['input folder']=record_scaled_folder
PDM, Sds, IML_info = MSA_on_SDOF.calculate_fragility(capacity_curves, hysteresis, msa, gmrs,
damage_model, damping_ratio, degradation)
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_ratio: This parameter defines the damping ratio for the structure.
2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not.
End of explanation
import MSA_post_processing
IMT = "Sa"
T = 0.466
regression_method = "max likelihood"
fragility_model = MSA_post_processing.calculate_fragility_model(PDM,gmrs,IML_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as intensity measure a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
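As a purely illustrative sketch of the least-squares option (this is not the RMTK implementation, and the data below are made up), fitting a lognormal CDF to exceedance fractions could look like:
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def lognormal_cdf(im, theta, beta):
    # theta: median intensity measure, beta: lognormal standard deviation
    return norm.cdf(np.log(im / theta) / beta)

im_levels = np.array([0.1, 0.2, 0.4, 0.8, 1.6])          # hypothetical IM stripes
frac_exceed = np.array([0.05, 0.20, 0.55, 0.85, 0.97])   # hypothetical exceedance fractions
(theta_hat, beta_hat), _ = curve_fit(lognormal_cdf, im_levels, frac_exceed, p0=[0.4, 0.5])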
End of explanation
minIML, maxIML = 0.01, 4
utils.plot_fragility_model(fragility_model, minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 3.00
output_type = "csv"
output_path = "../../../../../phd_thesis/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
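As a purely numerical illustration, if at one intensity level the damage state fractions were [0.5, 0.3, 0.15, 0.05] and the consequence model assigned damage ratios [0.0, 0.1, 0.5, 1.0], the mean loss ratio at that level would be 0.5*0.0 + 0.3*0.1 + 0.15*0.5 + 0.05*1.0 = 0.155.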
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
4,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Save out dataset for Evaluation
Step1: Merge user data with feature data
Step2: Eliminate Rows with viewed items that don't have features
this may break up some trajectories (view1,view2,view3-removed, view4,buy).
Step3: Eliminate Rows with bought items that don't have features
this will eliminate whole trajectories (view1,view2,view3,buy), because each of these rows is labeled with the buy id
Step4: Eliminate Rows <20 minutes before buy
Step5: Eliminate Users with <5 previously viewed items
Step6: Only use First Buy per User
Step7: Remove Features from DF before Saving
Step8: Save Out
Step9: Sub-Sample (save out v1000)
Step10: Create Smaller spu_fea for subsample | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# get data
user_profile = pd.read_csv('../data_user_view_buy/user_profile.csv',sep='\t',header=None)
user_profile.columns = ['user_id','buy_spu','buy_sn','buy_ct3','view_spu','view_sn','view_ct3','time_interval','view_cnt','view_seconds']
user_profile.head()
spu_fea = pd.read_pickle("../data_nn_features/spu_fea.pkl") #takes forever to load
spu_fea.head()
spu_fea = spu_fea.reset_index()
Explanation: Save out dataset for Evaluation: Dataset Eval 1
this saves out a smaller dataset to compare different recommendation algorithms on
it removes rows with viewed items that do not have features
it removes items viewed less than 20 minutes before buying
it then removes users with <5 viewed items before buying.
Versions of Dataset:
- v1: starting point
- v2: removing rows for second items bought by user - I only want one trajectory per user so that I don't mess things up later (calculating similarity etc).
End of explanation
spu_fea['view_spu']=spu_fea['spu_id']
spu_fea['view_spu']=spu_fea['spu_id']
user_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')
print('before merge nrow: {0}').format(len(user_profile))
print('after merge nrows:{0}').format(len(user_profile_w_features))
user_profile_w_features.head(20)
# takes too long
# user_profile_w_features.to_csv('../../data_user_view_buy/user_profile_items_with_features.csv') # I think this takes to long to save.
Explanation: Merge user data with feature data
End of explanation
len(user_profile_w_features)
user_profile_w_features_nonnull = user_profile_w_features.loc[~user_profile_w_features.features.isnull(),]
len(user_profile_w_features_nonnull)
Explanation: Eliminate Rows with viewed items that don't have features
this may break up some trajectories (view1,view2,view3-removed, view4,buy).
End of explanation
spus_with_features =user_profile_w_features_nonnull.spu_id.unique() #
user_profile_w_features_nonnull = user_profile_w_features_nonnull[user_profile_w_features_nonnull['buy_spu'].isin(spus_with_features)]
len(user_profile_w_features_nonnull)
Explanation: Eliminate Rows with bought items that don't have features
this will eliminate whole trajectories (view1,view2,view3,buy), because each of these rows is labeled with the buy id
End of explanation
# remove rows <20 minutes before
user_profile_w_features_nonnull_20 = user_profile_w_features_nonnull.loc[(user_profile_w_features_nonnull.time_interval/60.0)>20.0]
len(user_profile_w_features_nonnull_20)
Explanation: Eliminate Rows <20 minutes before buy
End of explanation
view_counts_per_user = user_profile_w_features_nonnull_20[['user_id','view_spu']].groupby(['user_id']).agg(['count'])
view_counts_per_user.head()
user_profile_w_features_nonnull_20_5 = user_profile_w_features_nonnull_20.join(view_counts_per_user, on='user_id', rsuffix='_r')
columns = user_profile_w_features_nonnull_20_5.columns.values
columns[-1]='view_spu_count'
user_profile_w_features_nonnull_20_5.columns=columns
user_profile_w_features_nonnull_20_5.head()
user_profile_w_features_nonnull_20_5 = user_profile_w_features_nonnull_20_5.loc[user_profile_w_features_nonnull_20_5.view_spu_count>5,]
len(user_profile_w_features_nonnull_20_5)
Explanation: Eliminate Users with <5 previously viewed items
End of explanation
user_profile_w_features_nonnull_20_5.user_id.unique()
# (super slow way of doing it)
user_profile_w_features_nonnull_20_5['drop']=0
for user_id in user_profile_w_features_nonnull_20_5.user_id.unique():
# get bought items per user
buy_spus = user_profile_w_features_nonnull_20_5.loc[user_profile_w_features_nonnull_20_5.user_id==user_id,'buy_spu'].unique()
# eliminate second, third .. purchases
if len(buy_spus)>1:
for buy_spu in buy_spus[1::]:
user_profile_w_features_nonnull_20_5.loc[(user_profile_w_features_nonnull_20_5.user_id==user_id)&(user_profile_w_features_nonnull_20_5.buy_spu==buy_spu),'drop']=1
print(len(user_profile_w_features_nonnull_20_5))
user_profile_w_features_nonnull_20_5 = user_profile_w_features_nonnull_20_5.loc[user_profile_w_features_nonnull_20_5['drop']!=1]
print(len(user_profile_w_features_nonnull_20_5))
Explanation: Only use First Buy per User
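A faster, vectorized alternative to the loop above (a sketch with the same intent: keep only rows matching each user's first purchased item):
first_buy = user_profile_w_features_nonnull_20_5.groupby('user_id')['buy_spu'].transform('first')
user_profile_w_features_nonnull_20_5 = user_profile_w_features_nonnull_20_5[
    user_profile_w_features_nonnull_20_5['buy_spu'] == first_buy]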
End of explanation
user_profile_w_features_nonnull_20_5_nofeatures = user_profile_w_features_nonnull_20_5.drop('features',axis=1)
Explanation: Remove Features from DF before Saving
End of explanation
user_profile_w_features_nonnull_20_5_nofeatures.to_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2.pkl')
Explanation: Save Out
End of explanation
# sample 1000 users
np.random.seed(1000)
users_sample = np.random.choice(user_profile_w_features_nonnull_20_5_nofeatures.user_id.unique(),size=1000)
print(users_sample[0:10])
user_profile_sample = user_profile_w_features_nonnull_20_5_nofeatures.loc[user_profile_w_features_nonnull_20_5_nofeatures.user_id.isin(users_sample),]
print(len(user_profile_sample))
print(len(user_profile_sample.user_id.unique()))
user_profile_sample.to_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl')
Explanation: Sub-Sample (save out v1000)
End of explanation
intersection_of_spus = set(list(user_profile_sample.view_spu.unique())+list(user_profile_sample.buy_spu.unique()))
spu_fea_sample = spu_fea.loc[spu_fea['spu_id'].isin(list(intersection_of_spus))]
len(spu_fea)
len(spu_fea_sample)
spu_fea_sample.to_pickle('../data_nn_features/spu_fea_sample1000.pkl')
Explanation: Create Smaller spu_fea for subsample
End of explanation |
4,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GTEx v8 eQTL tissue-specific all SNP gene associations
Files in gs
Step1: After generating the text files as above, ran the below to get the files bgzipped so we can read them in and create Hail Tables.
paste gtex_eQTL_paths_in.txt gtex_eQTL_paths_out.txt |
while read infile outfile;
do
gsutil -u broad-ctsa cat $infile |
gzip -d |
bgzip -c |
gsutil cp - $outfile
done
Now can generate Hail Tables (do this on a cluster)
Step2: Add entries for eQTL Hail Tables to config
Now can create entries in datasets.json for new tables
Step4: Create schemas for docs for eQTL Hail Tables
Step5: GTEx v8 sQTL tissue-specific all SNP gene associations
Step6: Files were converted from .gz to .bgz in same way as eQTL files were above.
paste gtex_sQTL_paths_in.txt gtex_sQTL_paths_out.txt |
while read infile outfile;
do
gsutil -u broad-ctsa cat $infile |
gzip -d |
bgzip -c |
gsutil cp - $outfile
done
Create GTEx v8 sQTL Hail Tables
Step7: Add entries for sQTL Hail Tables to config
Step9: Create schemas for docs for sQTL Hail Tables | Python Code:
# Generate list of all eQTL all association files in gs://gtex-resources
list_eqtl_files_gz = subprocess.run(["gsutil",
"-u",
"broad-ctsa",
"ls",
"gs://gtex-resources/GTEx_Analysis_v8_QTLs/GTEx_Analysis_v8_eQTL_all_associations/"],
stdout=subprocess.PIPE)
eqtl_files_gz = list_eqtl_files_gz.stdout.decode('utf-8').split()
# Write eQTL file paths to text for input
with open("gtex_eQTL_paths_in.txt", "w") as f:
for eqtl_file in eqtl_files_gz:
f.write(f"{eqtl_file}\n")
# Change bucket to "gs://hail-datasets-tmp" and filename extension to ".bgz" and write to another text file for output
with open("gtex_eQTL_paths_out.txt", "w") as f:
for eqtl_file in eqtl_files_gz:
eqtl_file_out = eqtl_file.replace("gs://gtex-resources", "gs://hail-datasets-tmp").replace(".gz", ".bgz")
f.write(f"{eqtl_file_out}\n")
Explanation: GTEx v8 eQTL tissue-specific all SNP gene associations
Files in gs://gtex-resources/GTEx_Analysis_v8_QTLs/GTEx_Analysis_v8_eQTL_all_associations/ were gzipped, so we need to get them bgzipped and moved over to gs://hail-datasets-tmp. First I generated a text file for the input paths and a text file for desired output paths.
End of explanation
# Generate list of .bgz files in gs://hail-datasets-tmp
with open("gtex_eQTL_paths_out.txt") as f:
eqtl_files = f.read().splitlines()
for eqtl_file in eqtl_files_bgz:
print(eqtl_file)
ht = hl.import_table(eqtl_file,
force_bgz=True,
types = {"gene_id": hl.tstr,
"variant_id": hl.tstr,
"tss_distance": hl.tint32,
"ma_samples": hl.tint32,
"ma_count": hl.tint32,
"maf": hl.tfloat64,
"pval_nominal": hl.tfloat64,
"slope": hl.tfloat64,
"slope_se": hl.tfloat64})
name = "GTEx_eQTL_allpairs_" + eqtl_file.split(".")[0].split("/")[-1]
version = "v8"
build = "GRCh38"
ht2 = ht.annotate(locus = hl.locus(ht.variant_id.split("_")[0],
hl.int(ht.variant_id.split("_")[1]),
reference_genome=build),
alleles = [ht.variant_id.split("_")[2],
ht.variant_id.split("_")[3]])
ht2 = ht2.select("locus", "alleles", "gene_id", "variant_id", "tss_distance",
"ma_samples", "ma_count", "maf", "pval_nominal", "slope", "slope_se")
ht2 = ht2.key_by("locus", "alleles")
n_rows = ht2.count()
n_partitions = ht2.n_partitions()
ht2 = ht2.annotate_globals(metadata=hl.struct(name=name,
version=version,
reference_genome=build,
n_rows=n_rows,
n_partitions=n_partitions))
for region in ["us"]:
output_file = f"gs://hail-datasets-{region}/{name}_{version}_{build}.ht"
ht2.write(output_file, overwrite=False)
print(f"Wrote {name} to Hail Table.\n")
Explanation: After generating the text files as above, ran the below to get the files bgzipped so we can read them in and create Hail Tables.
paste gtex_eQTL_paths_in.txt gtex_eQTL_paths_out.txt |
while read infile outfile;
do
gsutil -u broad-ctsa cat $infile |
gzip -d |
bgzip -c |
gsutil cp - $outfile
done
Now we can generate Hail Tables (do this on a cluster):
Create GTEx v8 eQTL Hail Tables
End of explanation
# Open our datasets config file so we can add our new entries
datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json")
with open(datasets_path, "r") as f:
datasets = json.load(f)
# Get list of GTEx eQTL tables in hail-datasets-us
list_datasets = subprocess.run(["gsutil", "-u", "broad-ctsa", "ls", "gs://hail-datasets-us"], stdout=subprocess.PIPE)
all_datasets = list_datasets.stdout.decode('utf-8').split()
tables = [x.strip("/") for x in all_datasets if "GTEx_eQTL_allpairs_" in x]
for table in tables:
gs_us_url = table
gs_eu_url = table.replace("hail-datasets-us", "hail-datasets-eu")
aws_url = table.replace("gs", "s3", 1).replace("hail-datasets-us", "hail-datasets-us-east-1")
full_table_name = table.split("/")[-1]
build = full_table_name.split("_")[-1].replace(".ht", "")
version = full_table_name.split("_")[-2]
tissue_name = full_table_name.replace("GTEx_eQTL_allpairs_", "").replace(f"_{version}_{build}.ht", "")
json_entry = {
"annotation_db": {
"key_properties": []
},
"description": f"GTEx: {tissue_name} eQTL tissue-specific all SNP gene "
f"associations Hail Table. All variant-gene cis-eQTL associations "
f"tested in each tissue (including non-significant associations).",
"url": "https://gtexportal.org/home/datasets",
"versions": [
{
"reference_genome": build,
"url": {
"aws": {
"us": f"{aws_url}"
},
"gcp": {
"us": f"{gs_us_url}",
"eu": f"{gs_eu_url}"
}
},
"version": version
}
]
}
datasets[f"GTEx_eQTL_allpairs_{tissue_name}"] = json_entry
# Write new entries back to datasets.json config:
with open(datasets_path, "w") as f:
json.dump(datasets, f, sort_keys=True, ensure_ascii=False, indent=2)
Explanation: Add entries for eQTL Hail Tables to config
Now we can create entries in datasets.json for the new tables:
End of explanation
import textwrap
output_dir = os.path.abspath("../../hail/python/hail/docs/datasets/schemas")
datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json")
with open(datasets_path, "r") as f:
datasets = json.load(f)
names = [name for name in list(datasets.keys()) if "GTEx_eQTL_allpairs_" in name]
for name in names:
versions = sorted(set(dataset["version"] for dataset in datasets[name]["versions"]))
if not versions:
versions = [None]
reference_genomes = sorted(set(dataset["reference_genome"] for dataset in datasets[name]["versions"]))
if not reference_genomes:
reference_genomes = [None]
print(name)
print(versions[0])
print(reference_genomes[0] + "\n")
path = [dataset["url"]["gcp"]["us"]
for dataset in datasets[name]["versions"]
if all([dataset["version"] == versions[0],
dataset["reference_genome"] == reference_genomes[0]])]
assert len(path) == 1
path = path[0]
table = hl.methods.read_table(path)
description = table.describe(handler=lambda x: str(x)).split("\n")
description = "\n".join([line.rstrip() for line in description])
if path.endswith(".ht"):
table_class = "hail.Table"
else:
table_class = "hail.MatrixTable"
template = """.. _{dataset}:
{dataset}
{underline1}
* **Versions:** {versions}
* **Reference genome builds:** {ref_genomes}
* **Type:** :class:`{class}`
Schema ({version0}, {ref_genome0})
{underline2}
.. code-block:: text
{schema}
"""
context = {
"dataset": name,
"underline1": len(name) * "=",
"version0": versions[0],
"ref_genome0": reference_genomes[0],
"versions": ", ".join([str(version) for version in versions]),
"ref_genomes": ", ".join([str(reference_genome) for reference_genome in reference_genomes]),
"underline2": len("".join(["Schema (", str(versions[0]), ", ", str(reference_genomes[0]), ")"])) * "~",
"schema": textwrap.indent(description, " "),
"class": table_class
}
with open(output_dir + f"/{name}.rst", "w") as f:
f.write(template.format(**context).strip())
Explanation: Create schemas for docs for eQTL Hail Tables
End of explanation
# Generate list of all sQTL all association files in gs://gtex-resources
list_sqtl_files_gz = subprocess.run(["gsutil",
"-u",
"broad-ctsa",
"ls",
"gs://gtex-resources/GTEx_Analysis_v8_QTLs/GTEx_Analysis_v8_sQTL_all_associations/"],
stdout=subprocess.PIPE)
sqtl_files_gz = list_sqtl_files_gz.stdout.decode('utf-8').split()
# Write sQTL file paths to text for input
with open("gtex_sQTL_paths_in.txt", "w") as f:
for sqtl_file in sqtl_files_gz:
f.write(f"{sqtl_file}\n")
# Change bucket to "gs://hail-datasets-tmp" and filename extension to ".bgz" and write to another text file for output
with open("gtex_sQTL_paths_out.txt", "w") as f:
for sqtl_file in sqtl_files_gz:
sqtl_file_out = sqtl_file.replace("gs://gtex-resources", "gs://hail-datasets-tmp").replace(".gz", ".bgz")
f.write(f"{sqtl_file_out}\n")
Explanation: GTEx v8 sQTL tissue-specific all SNP gene associations
End of explanation
# Generate list of .bgz files in gs://hail-datasets-tmp
with open("gtex_sQTL_paths_out.txt") as f:
sqtl_files = f.read().splitlines()
for sqtl_file in sqtl_files:
print(sqtl_file)
ht = hl.import_table(sqtl_file,
force_bgz=True,
types = {"phenotype_id": hl.tstr,
"variant_id": hl.tstr,
"tss_distance": hl.tint32,
"ma_samples": hl.tint32,
"ma_count": hl.tint32,
"maf": hl.tfloat64,
"pval_nominal": hl.tfloat64,
"slope": hl.tfloat64,
"slope_se": hl.tfloat64})
name = "GTEx_sQTL_allpairs_" + sqtl_file.split(".")[0].split("/")[-1]
version = "v8"
build = "GRCh38"
ht2 = ht.annotate(intron = hl.locus_interval(ht.phenotype_id.split(":")[0],
hl.int32(ht.phenotype_id.split(":")[1]),
hl.int32(ht.phenotype_id.split(":")[2]),
reference_genome="GRCh38"),
cluster = ht.phenotype_id.split(":")[-2],
gene_id = ht.phenotype_id.split(":")[-1],
locus = hl.locus(ht.variant_id.split("_")[0],
hl.int(ht.variant_id.split("_")[1]),
reference_genome=build),
alleles = [ht.variant_id.split("_")[2],
ht.variant_id.split("_")[3]])
ht2 = ht2.annotate(phenotype_id = hl.struct(intron=ht2.intron,
cluster=ht2.cluster,
gene_id=ht2.gene_id))
ht2 = ht2.select("locus", "alleles", "phenotype_id", "tss_distance",
"ma_samples", "ma_count", "maf", "pval_nominal", "slope", "slope_se")
n_rows = ht2.count()
n_partitions = ht2.n_partitions()
ht2 = ht2.annotate_globals(metadata=hl.struct(name=name,
version=version,
reference_genome=build,
n_rows=n_rows,
n_partitions=n_partitions))
ht2 = ht2.key_by("locus", "alleles")
for region in ["us"]:
output_file = f"gs://hail-datasets-{region}/{name}_{version}_{build}.ht"
ht2.write(output_file, overwrite=False)
print(f"Wrote {name} to Hail Table.\n")
Explanation: Files were converted from .gz to .bgz in same way as eQTL files were above.
paste gtex_sQTL_paths_in.txt gtex_sQTL_paths_out.txt |
while read infile outfile;
do
gsutil -u broad-ctsa cat $infile |
gzip -d |
bgzip -c |
gsutil cp - $outfile
done
Create GTEx v8 sQTL Hail Tables
End of explanation
# Open our datasets config file so we can add our new entries
datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json")
with open(datasets_path, "r") as f:
datasets = json.load(f)
# Get list of GTEx sQTL tables in hail-datasets-us
list_datasets = subprocess.run(["gsutil", "-u", "broad-ctsa", "ls", "gs://hail-datasets-us"], stdout=subprocess.PIPE)
all_datasets = list_datasets.stdout.decode('utf-8').split()
tables = [x.strip("/") for x in all_datasets if "GTEx_sQTL_allpairs_" in x]
for table in tables:
gs_us_url = table
gs_eu_url = table.replace("hail-datasets-us", "hail-datasets-eu")
aws_url = table.replace("gs", "s3", 1).replace("hail-datasets-us", "hail-datasets-us-east-1")
full_table_name = table.split("/")[-1]
build = full_table_name.split("_")[-1].replace(".ht", "")
version = full_table_name.split("_")[-2]
tissue_name = full_table_name.replace("GTEx_sQTL_allpairs_", "").replace(f"_{version}_{build}.ht", "")
json_entry = {
"annotation_db": {
"key_properties": []
},
"description": f"GTEx: {tissue_name} sQTL tissue-specific all SNP gene "
f"associations Hail Table. All variant-gene cis-sQTL associations "
f"tested in each tissue (including non-significant associations).",
"url": "https://gtexportal.org/home/datasets",
"versions": [
{
"reference_genome": build,
"url": {
"aws": {
"us": f"{aws_url}"
},
"gcp": {
"us": f"{gs_us_url}",
"eu": f"{gs_eu_url}"
}
},
"version": version
}
]
}
datasets[f"GTEx_sQTL_allpairs_{tissue_name}"] = json_entry
# Write new entries back to datasets.json config:
with open(datasets_path, "w") as f:
json.dump(datasets, f, sort_keys=True, ensure_ascii=False, indent=2)
Explanation: Add entries for sQTL Hail Tables to config
End of explanation
import textwrap
output_dir = os.path.abspath("../../hail/python/hail/docs/datasets/schemas")
datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json")
with open(datasets_path, "r") as f:
datasets = json.load(f)
names = [name for name in list(datasets.keys()) if "GTEx_sQTL_allpairs_" in name]
for name in names:
versions = sorted(set(dataset["version"] for dataset in datasets[name]["versions"]))
if not versions:
versions = [None]
reference_genomes = sorted(set(dataset["reference_genome"] for dataset in datasets[name]["versions"]))
if not reference_genomes:
reference_genomes = [None]
print(name)
print(versions[0])
print(reference_genomes[0] + "\n")
path = [dataset["url"]["gcp"]["us"]
for dataset in datasets[name]["versions"]
if all([dataset["version"] == versions[0],
dataset["reference_genome"] == reference_genomes[0]])]
assert len(path) == 1
path = path[0]
table = hl.methods.read_table(path)
description = table.describe(handler=lambda x: str(x)).split("\n")
description = "\n".join([line.rstrip() for line in description])
if path.endswith(".ht"):
table_class = "hail.Table"
else:
table_class = "hail.MatrixTable"
template = """.. _{dataset}:
{dataset}
{underline1}
* **Versions:** {versions}
* **Reference genome builds:** {ref_genomes}
* **Type:** :class:`{class}`
Schema ({version0}, {ref_genome0})
{underline2}
.. code-block:: text
{schema}
"""
context = {
"dataset": name,
"underline1": len(name) * "=",
"version0": versions[0],
"ref_genome0": reference_genomes[0],
"versions": ", ".join([str(version) for version in versions]),
"ref_genomes": ", ".join([str(reference_genome) for reference_genome in reference_genomes]),
"underline2": len("".join(["Schema (", str(versions[0]), ", ", str(reference_genomes[0]), ")"])) * "~",
"schema": textwrap.indent(description, " "),
"class": table_class
}
with open(output_dir + f"/{name}.rst", "w") as f:
f.write(template.format(**context).strip())
Explanation: Create schemas for docs for sQTL Hail Tables
End of explanation |
4,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
%matplotlib inline
Linear Regression Example
This example uses only the first feature of the diabetes dataset, in
order to illustrate a two-dimensional plot of this regression technique. The
straight line can be seen in the plot, showing how linear regression attempts
to draw a straight line that will best minimize the residual sum of squares
between the observed responses in the dataset, and the responses predicted by
the linear approximation.
The coefficients, the residual sum of squares and the variance score are also
calculated.
Step1: Implement the coefficient of determination and check the score obtained
Step2: Now run the notebook using the aerogerador (wind turbine) dataset from the GitHub folder. | Python Code:
print(__doc__)
# Code source: Jaques Grobler
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The intercept
print('Intercept: \n', regr.intercept_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(diabetes_X_test) - diabetes_y_test) ** 2))
# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, regr.predict(diabetes_X_test), color='blue',
linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
regr.coef_
regr.score(diabetes_X_test, diabetes_y_test)
Explanation: %matplotlib inline
Linear Regression Example
This example uses only the first feature of the diabetes dataset, in
order to illustrate a two-dimensional plot of this regression technique. The
straight line can be seen in the plot, showing how linear regression attempts
to draw a straight line that will best minimize the residual sum of squares
between the observed responses in the dataset, and the responses predicted by
the linear approximation.
The coefficients, the residual sum of squares and the variance score are also
calculated.
End of explanation
def total_sum_of_squares(y):
mean_y = np.mean(y)
return sum((v-mean_y)**2 for v in y)
def r_squared(y,yb):
# y = actual values; yb = predicted values
return 1.0 - sum((y-yb)**2)/total_sum_of_squares(y)
yb = regr.predict(diabetes_X_test)
print r_squared(diabetes_y_test,yb)
# use scikit-learn's score function to check that there are no errors
print regr.score(diabetes_X_test,diabetes_y_test)
Explanation: Implement the coefficient of determination and check the score obtained
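For reference, the coefficient of determination computed above is $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$, where $y_i$ are the observed values, $\hat{y}_i$ the predictions and $\bar{y}$ the mean of the observed values.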
End of explanation
data = np.loadtxt("aerogerador.txt",delimiter=",")
# shuffle the data before splitting into train and test sets
rdata = np.random.permutation(data)
X = rdata[:,0]
y = rdata[:,1]
nt = int(len(X) * 0.8)
X_train = X[:nt]
X_test = X[nt:]
y_train = y[:nt]
y_test = y[nt:]
# when the dataset has only 1 feature, we need to use reshape to
# avoid warnings (or future errors) in scikit-learn
X_train = X_train.reshape(-1,1)
X_test = X_test.reshape(-1,1)
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
# The coefficients
print('Coefficients: \n', [regr.coef_ , regr.intercept_])
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_test) - y_test) ** 2))
print ("R-squared: %.2f" % r_squared(regr.predict(X_test),y_test))
# Plot outputs
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, regr.predict(X_test), color='blue',
linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Now run the notebook using the aerogerador (wind turbine) dataset from the GitHub folder.
End of explanation |
4,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Point source plotting basics
In 3ML, we distinguish between data and model plotting. Data plots contain real data points and the over-plotted model is (sometimes) folded through an instrument response. Therefore, the x-axis is not always in the same units across instruments if there is energy dispersion.
However, all instruments see the same model and a multi-wavelength fit can be viewed in model space without complication. 3ML uses one interface to plot both MLE and Bayesian fitted models. To demonstrate, we will use toy data simulated from a power law and two Gaussians for MLE fits and an exponentially cutoff power law with one Gaussian for Bayesian fits.
First we load the analysis results
Step1: Plotting a single analysis result
The easiest way to plot is to call plot_point_source_spectra. By default, it plots in photon space with a range of 10-40000 keV evaluated at 100 logarithmic points
Step2: Flux and energy units
We use astropy units to specify both the flux and energy units.
* The plotting routine understands photon, energy ($F_{\nu}$) and $\nu F_{\nu}$ flux units;
energy units can be energy, frequency, or wavelength
a custom range can be applied.
changing flux units
Step3: changing energy units
Step4: Plotting components
Sometimes it is interesting to see the components in a composite model. We can specify the use_components switch. Here we will use Bayesian results. Note that all features work with both MLE and Bayesian results.
Step5: Notice that the duplicated components have the subscripts n1 and n2. If we want to specify which components to plot, we must use these subscripts.
Step6: If we want to see the total model with the components, just add total to the components list.
Additionally, we can change the confidence interval for the contours from the default of 1$\sigma$ (0.68) to 2$\sigma$ (0.95).
Step7: Additional features
Explore the docstring to see all the available options. Default configurations can be altered in the 3ML config file.
Use asymmetric errors and alter the default color map
Step8: turn off contours and the legend and increase the number of points plotted
Step9: colors or color maps can be specified
Step10: Further modifications to plotting style, legend style, etc. can be modified either in the 3ML configuration
Step11: or by directly passing dictionary arguments to the plot command. Examine the docstring for more details!
Plotting multiple results
Any number of results can be plotted together. Simply provide them as arguments. You can mix and match MLE and Bayesian results as well as plotting their components.
Step12: Specify particular colors for each analysis and broaden the contours
Step13: As with single results, we can choose to plot the components for all the sources. | Python Code:
%matplotlib inline
from jupyterthemes import jtplot
jtplot.style(context="talk", fscale=1, ticks=True, grid=False)
import matplotlib.pyplot as plt
plt.style.use("mike")
import numpy as np
from threeML import *
from threeML.io.package_data import get_path_of_data_file
mle1 = load_analysis_results(get_path_of_data_file("datasets/toy_xy_mle1.fits"))
bayes1 = load_analysis_results(get_path_of_data_file("datasets/toy_xy_bayes2.fits"))
Explanation: Point source plotting basics
In 3ML, we distinguish between data and model plotting. Data plots contain real data points and the over-plotted model is (sometimes) folded through an instrument response. Therefore, the x-axis is not always in the same units across instruments if there is energy dispersion.
However, all instruments see the same model and a multi-wavelength fit can be viewed in model space without complication. 3ML uses one interface to plot both MLE and Bayesian fitted models. To demonstrate, we will use toy data simulated from a power law and two Gaussians for MLE fits and an exponentially cutoff power law with one Gaussian for Bayesian fits.
First we load the analysis results:
End of explanation
_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3)
Explanation: Plotting a single analysis result
The easiest way to plot is to call plot_point_source_spectra. By default, it plots in photon space with a range of 10-40000 keV evaluated at 100 logarithmic points:
End of explanation
_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3,flux_unit='1/(m2 s MeV)')
_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3,flux_unit='erg/(cm2 day keV)')
_ = plot_point_source_spectra(mle1,ene_min=1,ene_max=1E3,flux_unit='keV2/(cm2 s keV)')
Explanation: Flux and energy units
We use astropy units to specify both the flux and energy units.
* The plotting routine understands photon, energy ($F_{\nu}$) and $\nu F_{\nu}$ flux units;
energy units can be energy, frequency, or wavelength
a custom range can be applied.
changing flux units
End of explanation
_ = plot_point_source_spectra(mle1,
ene_min=.001,
ene_max=1E3,
energy_unit='MeV')
# energy ranges can also be specified in units
_ = plot_point_source_spectra(mle1,
ene_min=1*astropy_units.keV,
ene_max=1*astropy_units.MeV)
_ = plot_point_source_spectra(mle1,
ene_min=1E3*astropy_units.Hz,
ene_max=1E7*astropy_units.Hz)
_ = plot_point_source_spectra(mle1,
ene_min=1E1*astropy_units.nm,
ene_max=1E3*astropy_units.nm,
xscale='linear') # plotting with a linear scale
Explanation: changing energy units
End of explanation
_ = plot_point_source_spectra(bayes1,
ene_min=1,
ene_max=1E3,
use_components=True
)
_=plt.ylim(bottom=1)
Explanation: Plotting components
Sometimes it is interesting to see the components in a composite model. We can specify the use_components switch. Here we will use Bayesian results. Note that all features work with both MLE and Bayesian results.
End of explanation
_ = plot_point_source_spectra(mle1,
flux_unit='erg/(cm2 s keV)',
ene_min=1,
ene_max=1E3,
use_components=True,
components_to_use=['Gaussian_n1','Gaussian_n2'])
_=plt.ylim(bottom=1E-20)
Explanation: Notice that the duplicated components have the subscripts n1 and n2. If we want to specify which components to plot, we must use these subscripts.
End of explanation
_ = plot_point_source_spectra(bayes1,
flux_unit='erg/(cm2 s keV)',
ene_min=1,
ene_max=1E3,
use_components=True,
components_to_use=['total','Gaussian'],
confidence_level=0.95)
_=plt.ylim(bottom=1E-9)
_ = plot_point_source_spectra(mle1,
flux_unit='erg/(cm2 s keV)',
ene_min=1,
ene_max=1E3,
use_components=True,
fit_cmap='jet', # specify a color map
contour_colors='k', # specify a color for all contours
components_to_use=['total','Gaussian_n2','Gaussian_n1'])
_=plt.ylim(bottom=1E-16)
Explanation: If we want to see the total model with the components, just add total to the components list.
Additionally, we can change the confidence interval for the contours from the default of 1$\sigma$ (0.68) to 2$\sigma$ (0.95).
End of explanation
threeML_config['model plot']['point source plot']['fit cmap'] = 'plasma'
_ = plot_point_source_spectra(mle1, equal_tailed=False)
Explanation: Additional features
Explore the docstring to see all the available options. Default configurations can be altered in the 3ML config file.
Use asymmetric errors and alter the default color map
End of explanation
_ = plot_point_source_spectra(mle1, show_legend=False, show_contours=False, num_ene=500)
Explanation: turn off contours and the legend and increase the number of points plotted
End of explanation
_ = plot_point_source_spectra(mle1, fit_colors='orange', contour_colors='blue')
Explanation: colors or color maps can be specified
End of explanation
threeML_config['model plot']['point source plot']
Explanation: Further modifications to plotting style, legend style, etc. can be modified either in the 3ML configuration:
End of explanation
_ = plot_point_source_spectra(mle1, bayes1,ene_min=1)
_=plt.ylim(bottom=1E-1)
Explanation: or by directly passing dictionary arguments to the plot command. Examine the docstring for more details!
Plotting multiple results
Any number of results can be plotted together. Simply provide them as arguments. You can mix and match MLE and Bayesian results as well as plotting their components.
End of explanation
_ = plot_point_source_spectra(mle1,
bayes1,
ene_min=1.,
confidence_level=.95,
equal_tailed=False,
fit_colors=['orange','green'],
contour_colors='blue')
_ =plt.ylim(bottom=1E-1)
Explanation: Specify particular colors for each analysis and broaden the contours
End of explanation
_ = plot_point_source_spectra(mle1,
bayes1,
ene_min=1.,
use_components=True)
_=plt.ylim(bottom=1E-4)
Explanation: As with single results, we can choose to plot the components for all the sources.
End of explanation |
4,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running Spatial Correlations + options
The goal of this log is to show the API of the spatial correlations and the options available.
With this code, it is possible to run the spatial correlations on masked and unmasked data.
Also, it is possible to apply a correction,
called symmetric averaging, which is a
derivative of a method by Schätzel (1988)
Step1: 1. Try on 1D data
Step2: Plot the data
Step3: Correlations for different cases
Step4: 2. Try for 2D data
(In this case, even no mask has a strong effect on the data. No mask still contains a ''mask'' since at higher correlation lengths we are correlating fewer points. Symmetric averaging excels at overcoming these effects here.)
Step5: plot 2D data
Step6: Correlations (2D)
Step7: Correlation Cross sections
Step8: 3. Try with different id's in different regions of image
Step9: Plot mask
Step10: plot correlations
Here, we see that without symmetric averaging, the correlations quickly come back at values higher than the point of initial correlation, whereas with symmetric averaging, the result looks more like what is expected: a nice Gaussian-like curve centered in the image. (The center of the image is zero correlation.)
%matplotlib inline
import numpy as np
#from pyCXD.tools.CrossCorrelator import CrossCorrelator
from skbeam.core.correlation import CrossCorrelator
import matplotlib.pyplot as plt
from skbeam.core.roi import ring_edges, segmented_rings
# for some convolutions, used to smooth images (make spatially correlated images)
# avoid more dependencies for this example
def convol2d(a,b=None,axes=(-2,-1)):
''' convolve a and b along axes axes
if axes 1 element, then convolves along that dimension
only works with dimensions 1 or 2 (1 or 2 axes)
'''
from numpy.fft import fft2, ifft2
if(b is None):
b = a
return ifft2(fft2(a,axes=axes)*np.conj(fft2(b,axes=axes)),axes=axes).real
def pos2extent(pos):
# convenience routine to turn positions to extent
# left right bottom top. For 2D data
extent = [pos[1][0], pos[1][-1], pos[0][-1], pos[0][0]]
return extent
Explanation: Running Spatial Correlations + options
The goal of this log is to show the API of the spatial correlations and the options available.
With this code, it is possible to run the spatial correlations on masked and unmasked data.
Also, it is possible to apply a correction,
called symmetric averaging, which is a
derivative of a method by Schätzel (1988):
Schätzel, Klaus, Martin Drewel, and Sven Stimac. "Photon correlation measurements at large lag times: improving statistical accuracy." Journal of Modern Optics 35.4 (1988): 711-718.
Technique adapted to arbitrary masks by Julien Lhermitte, Jan 2017
The correlation function in 1 dimension is:
$$CC = \frac{1}{N(k)} \sum \limits_{j=1}^{N_t} I_j I_{j+k} M_j M_{j+k}$$
We may normalize it by its average intensity in two different ways:
1. Naive averaging:
the normalized correlation function is just divided by the average squared:
$$cc_{reg} = \frac{CC}{\bar{I}^2}$$
where:
$$\bar{I} = \frac{1}{N(k)} \sum \limits_{j=1}^{N_t} I_j$$
is average intensity
and where $$N(k) = \sum \limits_{j= 1}^{N_t}M_j M_{j+k}$$
(Note that in the limit of no mask, $N(k) = N_t$ as it should; the mask has the effect of inducing a $k$ dependence on the effective ''$N_t$''.)
2. Symmetric Averaging:
For symmetric averaging, we define two new averages, $I_p$ and $I_f$ (I 'past' and I 'future'):
$$I_p = \frac{1}{N(k)} \sum \limits_j I_j M_j M_{j+k}$$
$$I_f = \frac{1}{N(k)} \sum \limits_l I_{l+k} M_l M_{l+k}$$
we define symmetric averaging as:
$$cc_{sym} = \frac{CC}{\bar{I}_p \bar{I}_f}$$
Schätzel shows this averaging is superior for the case of a simple ''mask'': a 1D time series (data outside the time range is ''masked'')
Import some essential libraries/code
End of explanation
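# A minimal illustrative sketch (not part of the scikit-beam API) that evaluates
# the two normalizations above at a single lag k for a 1D masked signal,
# directly from the formulas. The helper name `cc_at_lag` is hypothetical, and
# taking Ibar as the mean over unmasked points is an assumption of this example.
import numpy as np
def cc_at_lag(I, M, k):
    I, M = np.asarray(I, dtype=float), np.asarray(M, dtype=float)
    Nt = len(I) - k
    overlap = M[:Nt] * M[k:]               # M_j * M_{j+k}
    Nk = overlap.sum()                     # N(k)
    CC = (I[:Nt] * I[k:] * overlap).sum() / Nk
    Ibar = I[M > 0].mean()                 # plain average intensity
    Ip = (I[:Nt] * overlap).sum() / Nk     # 'past' average
    If = (I[k:] * overlap).sum() / Nk      # 'future' average
    return CC / Ibar**2, CC / (Ip * If)    # cc_reg, cc_sym
_I = np.random.random(200)
_M = np.ones(200)
_M[50:80] = 0
print(cc_at_lag(_I, _M, k=5))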
# test 1D data
sigma = .1
Npoints = 1000
x = np.linspace(-10, 10, Npoints)
y = convol2d(np.random.random(Npoints)*10, np.exp(-x**2/(2*sigma**2)),axes=(-1,))
mask_1D = np.ones_like(y)
mask_1D[10:20] = 0
mask_1D[60:90] = 0
mask_1D[111:137] = 0
mask_1D[211:237] = 0
mask_1D[411:537] = 0
mask_1D *= mask_1D[::-1]
y_masked = y*mask_1D
cc1D = CrossCorrelator(mask_1D.shape)
cc1D_symavg = CrossCorrelator(mask_1D.shape,normalization='symavg')
cc1D_masked = CrossCorrelator(mask_1D.shape,mask=mask_1D)
cc1D_masked_symavg = CrossCorrelator(mask_1D.shape, mask=mask_1D,normalization='symavg')
ycorr_1D = cc1D(y)
ycorr_1D_masked = cc1D_masked(y*mask_1D)
ycorr_1D_symavg = cc1D_symavg(y)
ycorr_1D_masked_symavg = cc1D_masked_symavg(y*mask_1D)
# the x axis
ycorr_1D_x = cc1D.positions
ycorr_1D_masked_x = cc1D_masked.positions
ycorr_1D_symavg_x = cc1D_symavg.positions
ycorr_1D_masked_symavg_x = cc1D_masked_symavg.positions
ycorr_1D[0].shape
Explanation: 1. Try on 1D data
End of explanation
# plot 1D Data
plt.figure(0);plt.clf();
plt.plot(x,y)
plt.plot(x,y*mask_1D)
plt.xlabel("position")
plt.ylabel("intensity (arb. units)")
Explanation: Plot the data
End of explanation
plt.figure(1);plt.clf();
plt.plot(ycorr_1D_x, ycorr_1D,color='k',label='regular')
plt.plot(ycorr_1D_masked_x, ycorr_1D_masked,color='r',label='masked')
plt.plot(ycorr_1D_symavg_x, ycorr_1D_symavg,color='g',label='symavg')
plt.plot(ycorr_1D_masked_symavg_x, ycorr_1D_masked_symavg,color='b',label='masked + symavg')
plt.ylim(.9,1.2)
plt.xlabel("shift ($\Delta x$)")
plt.ylabel("Correlation")
plt.legend()
Explanation: Correlations for different cases
End of explanation
# test 2D data
Npoints2 = 100
x2 = np.linspace(-10, 10, Npoints2)
X, Y = np.meshgrid(x2,x2)
Z = np.random.random((Npoints2,Npoints2))
Z = convol2d(Z, np.exp(-(X**2 + Y**2)/2./sigma**2))
mask_2D = np.ones_like(Z)
mask_2D[10:20, 10:20] = 0
mask_2D[73:91, 45:67] = 0
mask_2D[1:20, 90:] = 0
cc2D = CrossCorrelator(mask_2D.shape)
cc2D_symavg = CrossCorrelator(mask_2D.shape,normalization='symavg')
cc2D_masked = CrossCorrelator(mask_2D.shape,mask=mask_2D)
cc2D_masked_symavg = CrossCorrelator(mask_2D.shape, mask=mask_2D,normalization='symavg')
ycorr_2D = cc2D(Z)
ycorr_2D_masked = cc2D_masked(Z*mask_2D)
ycorr_2D_symavg = cc2D_symavg(Z)
ycorr_2D_masked_symavg = cc2D_masked_symavg(Z*mask_2D)
ycorr_2D_pos = cc2D.positions
ycorr_2D_masked_pos = cc2D_masked.positions
ycorr_2D_symavg_pos = cc2D_symavg.positions
ycorr_2D_masked_symavg_pos = cc2D_masked_symavg.positions
Explanation: 2. Try for 2D data
(In this case, even no mask has a strong effect on the data. No mask still contains a ''mask'' since at higher correlation lengths we are correlating fewer points. Symmetric averaging excels at overcoming these effects here.)
End of explanation
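# A quick illustration of the finite-size ''mask'' mentioned above: even with no
# explicit mask, the number of overlapping pairs N(k) shrinks as the correlation
# length grows, so fewer points contribute at large shifts. (Plain numpy sketch,
# independent of CrossCorrelator; variable names are illustrative.)
_N = 100
_M1 = np.ones(_N)
_Nk = np.array([(_M1[:_N - k] * _M1[k:]).sum() for k in range(_N)])
print(_Nk[:5], _Nk[-5:])   # 100, 99, 98, ... down to 1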
plt.figure(2);plt.clf();
plt.subplot(2,2,1)
plt.title("not masked")
plt.imshow(Z)
plt.subplot(2,2,2)
plt.title("masked")
plt.imshow(Z*mask_2D)
Explanation: plot 2D data
End of explanation
vmin=.95; vmax=1.03
plt.figure(3);plt.clf();
plt.subplot(2,2,1)
plt.title("regular")
plt.imshow(ycorr_2D,extent = pos2extent(ycorr_2D_pos))
#plt.axhline(ycorr_2D_masked.shape[0]//2)
plt.clim(vmin,vmax)
plt.xlim(-30,30)
plt.ylim(-30,30)
plt.subplot(2,2,2)
plt.title("masked")
plt.imshow(ycorr_2D_masked, extent = pos2extent(ycorr_2D_masked_pos))
#plt.axhline(ycorr_2D_masked.shape[0]//2)
plt.clim(vmin,vmax)
plt.xlim(-30,30)
plt.ylim(-30,30)
plt.subplot(2,2,3)
plt.title("symavg")
plt.imshow(ycorr_2D_symavg, extent = pos2extent(ycorr_2D_symavg_pos))
#plt.axhline(ycorr_2D_masked.shape[0]//2)
plt.clim(vmin,vmax)
plt.xlim(-30,30)
plt.ylim(-30,30)
plt.subplot(2,2,4)
plt.title("mask + symavg")
plt.imshow(ycorr_2D_masked_symavg, extent = pos2extent(ycorr_2D_masked_symavg_pos))
#plt.axhline(ycorr_2D_masked.shape[0]//2)
plt.clim(vmin,vmax)
plt.xlim(-30,30)
plt.ylim(-30,30)
Explanation: Correlations (2D)
End of explanation
plt.figure(4);plt.clf();
plt.plot(cc2D.positions[1], ycorr_2D[cc2D.centers[0]],label="reg")
plt.plot(cc2D_masked.positions[1], ycorr_2D_masked[cc2D_masked.centers[0]],label="masked")
plt.plot(cc2D_symavg.positions[1], ycorr_2D_symavg[cc2D_symavg.centers[0]],label="symavg")
plt.plot(cc2D_masked_symavg.positions[1], ycorr_2D_masked_symavg[cc2D_masked_symavg.centers[0]],label="masked+symavg")
plt.ylim(0.8, 1.2)
plt.xlabel("shift ($\Delta x$)")
plt.ylabel("Correlation")
plt.legend()
Explanation: Correlation Cross sections
End of explanation
# make id numbers
edges = ring_edges(1, 20, num_rings=2)
segments = 5
x0, y0 = np.array(mask_2D.shape)//2
maskids = segmented_rings(edges,segments,(y0,x0),mask_2D.shape)
cc2D_ids = CrossCorrelator(mask_2D.shape, mask=maskids)
cc2D_ids_symavg = CrossCorrelator(mask_2D.shape,mask=maskids,normalization='symavg')
ycorr_ids_2D = cc2D_ids(Z)
ycorr_ids_2D_symavg = cc2D_ids_symavg(Z)
Explanation: 3. Try with different id's in different regions of image
End of explanation
plt.figure(2);plt.clf();
plt.imshow(maskids)
Explanation: Plot mask
End of explanation
vmin=.95; vmax=1.1
fig, axes = plt.subplots(2,4)
ax1 = axes[:len(axes)//2].ravel()
ax2 = axes[len(axes)//2:].ravel()
for i in range(len(ax1)):
plt.sca(ax1[i])
plt.title("regular")
plt.imshow(ycorr_ids_2D[i],extent=pos2extent(cc2D_ids.positions[i]))
plt.clim(vmin,vmax)
plt.sca(ax2[i])
plt.title("sym avg")
plt.imshow(ycorr_ids_2D_symavg[i],extent=pos2extent(cc2D_ids_symavg.positions[i]))
plt.clim(vmin,vmax)
## Cross correlate image with itself shifted
Z2 = np.roll(np.roll(Z, 4,axis=0),-5,axis=1)
ycorr_ids_2D_shift = cc2D_ids(Z, Z2)
centers_ids_2D_shift = cc2D_ids.centers
ycorr_ids_2D_symavg_shift = cc2D_ids_symavg(Z,Z2)
centers_ids_2D_symavg_shift = cc2D_ids_symavg.centers
vmin=.95; vmax=1.05
fig, axes = plt.subplots(2,4)
ax1 = axes[:len(axes)//2].ravel()
ax2 = axes[len(axes)//2:].ravel()
for i in range(len(ax1)):
plt.sca(ax1[i])
plt.title("regular")
plt.imshow(ycorr_ids_2D_shift[i], extent=pos2extent(cc2D_ids.positions[i]))
yc, xc = centers_ids_2D_shift[i]
plt.axvline(xc)
plt.axhline(yc)
plt.clim(vmin,vmax)
plt.sca(ax2[i])
plt.title("sym avg")
plt.imshow(ycorr_ids_2D_symavg_shift[i], extent=pos2extent(cc2D_ids_symavg.positions[i]))
yc, xc = centers_ids_2D_symavg_shift[i]
plt.axvline(xc)
plt.axhline(yc)
plt.clim(vmin,vmax)
mask_test = (maskids == 1).astype(float)
from scipy.signal import fftconvolve
from numpy.fft import fft2, ifft2, fftshift
cc = fftconvolve(mask_test, mask_test, mode='same')
cc2 = fftshift(ifft2((np.conj(fft2(mask_test)))*fft2(mask_test)).real)
Explanation: plot correlations
Here, we see that without symmetric averaging, the correlations quickly come back at values higher than the point of initial correlation, whereas with symmetric averaging, the result looks more like what is expected: a nice Gaussian-like curve centered in the image. (The center of the image is zero correlation.)
End of explanation |
4,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-parametric between conditions cluster statistic on single trial power
This script shows how to compare clusters in time-frequency
power estimates between conditions. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of
Step1: Set parameters
Step2: Factor to downsample the temporal dimension of the TFR computed by
tfr_morlet. Decimation occurs after frequency decomposition and can
be used to reduce memory usage (and possibly comptuational time of downstream
operations such as nonparametric statistics) if you don't need high
spectrotemporal resolution.
Step3: Compute statistic
Step4: View time-frequency plots | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
Explanation: Non-parametric between conditions cluster statistic on single trial power
This script shows how to compare clusters in time-frequency
power estimates between conditions. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of:
extracting epochs for 2 conditions
compute single trial power estimates
baseline correct the power estimates (power ratios)
compute stats to see if the power estimates are significantly different
between conditions.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332' # restrict example to one channel
# Load condition 1
reject = dict(grad=4000e-13, eog=150e-6)
event_id = 1
epochs_condition_1 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject, preload=True)
epochs_condition_1.pick_channels([ch_name])
# Load condition 2
event_id = 2
epochs_condition_2 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject, preload=True)
epochs_condition_2.pick_channels([ch_name])
Explanation: Set parameters
End of explanation
decim = 2
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = 1.5
tfr_epochs_1 = tfr_morlet(epochs_condition_1, freqs,
n_cycles=n_cycles, decim=decim,
return_itc=False, average=False)
tfr_epochs_2 = tfr_morlet(epochs_condition_2, freqs,
n_cycles=n_cycles, decim=decim,
return_itc=False, average=False)
tfr_epochs_1.apply_baseline(mode='ratio', baseline=(None, 0))
tfr_epochs_2.apply_baseline(mode='ratio', baseline=(None, 0))
epochs_power_1 = tfr_epochs_1.data[:, 0, :, :] # only 1 channel as 3D matrix
epochs_power_2 = tfr_epochs_2.data[:, 0, :, :] # only 1 channel as 3D matrix
Explanation: Factor to downsample the temporal dimension of the TFR computed by
tfr_morlet. Decimation occurs after frequency decomposition and can
be used to reduce memory usage (and possibly computational time of downstream
operations such as nonparametric statistics) if you don't need high
spectrotemporal resolution.
End of explanation
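# A small, optional sanity check reusing the objects above: after decimation the
# TFR should carry roughly len(epochs.times) / decim time samples.
print(len(epochs_condition_1.times), len(tfr_epochs_1.times))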
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([epochs_power_1, epochs_power_2], out_type='mask',
n_permutations=100, threshold=threshold, tail=0)
Explanation: Compute statistic
End of explanation
times = 1e3 * epochs_condition_1.times # change unit to ms
evoked_condition_1 = epochs_condition_1.average()
evoked_condition_2 = epochs_condition_2.average()
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
plt.subplot(2, 1, 1)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
plt.imshow(T_obs,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', cmap='gray')
plt.imshow(T_obs_plot,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', cmap='RdBu_r')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Induced power (%s)' % ch_name)
ax2 = plt.subplot(2, 1, 2)
evoked_contrast = mne.combine_evoked([evoked_condition_1, evoked_condition_2],
weights=[1, -1])
evoked_contrast.plot(axes=ax2, time_unit='s')
plt.show()
Explanation: View time-frequency plots
End of explanation |
4,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Selecting model hyperparameters by cross-validation
The overview is that we will split the dataset into args.num_folds distinct partitions ("folds"). (This example description will use args.num_folds=10.) Eight of the folds will be used for training, one will be used for validation, and one for testing. This is performed for each set of hyperparameters. Finally, that process is repeated using each fold as the validation and test set. In this work, when fold i is the validation set, then fold (i+1) % 10 is the test set; the other folds are taken as the training set. Other strategies can be used, but this results in exactly a single prediction for each data record in each of the validation and test sets for each set of hyperparameters.
The rough purpose of each of the dataset types is as follows.
training. given the hyperparameters, set the model parameters.
validation. select the hyperparameters which perform the best on an unseen dataset.
testing. evaluate the selected hyperparameters on a different unseen dataset.
Goal
This notebook demonstrates the necessary steps to distribute training across a cluster using dask, select hyperparameters using a validation set, collect the test set predictions, and evaluate the model performance using the pyllars library.
Step1: Provide "command line" arguments
Step2: TODO
Step3: Load a small regression dataset
We will use the boston dataset made available in sklearn. All of its features are numeric, and the target is also numeric.
Step4: Create our estimator and hyperparameter grid
We will use a simple scaling followed by extreme gradient boosting for regression.
Step5: Create an iterator over folds and hyperparameter configurations
As described in the introduction, we will test each set of hyperparameters in our grid on each fold (while training on eight of the other folds). For selecting our final set of hyperparameters, we will use performance on the validation fold. (More details are given on this procedure below.)
Concretely, we will accomplish this by iterating over the cross-product of the sets of hyperparameters and validation set folds. (As described, given the validation fold index, we can determine the training and testing folds.) | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
# initialize a logger for ipython
import pyllars.logging_utils as logging_utils
logger = logging_utils.get_ipython_logger()
# create an argparse namespace to hold parameters
from argparse import Namespace
args = Namespace()
# create (or connect to) a dask cluster
import pyllars.dask_utils as dask_utils
cluster_location = "LOCAL"
cluster_restart=False
dask_utils.add_dask_values_to_args(
args,
cluster_location=cluster_location,
num_procs=3,
num_threads_per_proc=1
)
dask_client, cluster = dask_utils.connect(args)
# machine learning imports
import sklearn.datasets
import sklearn.model_selection
import sklearn.pipeline
import sklearn.preprocessing
import xgboost
# other tools and helpers
import itertools
import json
import pyllars.ml_utils as ml_utils
import numpy as np
import pandas as pd
Explanation: Selecting model hyperparameters by cross-validation
The overview is that we will split the dataset into args.num_folds distinct partitions ("folds"). (This example description will use args.num_folds=10.) Eight of the folds will be used for training, one will be used for validation, and one for testing. This is performed for each set of hyperparameters. Finally, that process is repeated using each fold as the validation and test set. In this work, when fold i is the validation set, then fold (i+1) % 10 is the test set; the other folds are taken as the training set. Other strategies can be used, but this results in exactly a single prediction for each data record in each of the validation and test sets for each set of hyperparameters.
The rough purpose of each of the dataset types is as follows.
training. given the hyperparameters, set the model parameters.
validation. select the hyperparameters which perform the best on an unseen dataset.
testing. evaluate the selected hyperparameters on a different unseen dataset.
Goal
This notebook demonstrates the necessary steps to distribute training across a cluster using dask, select hyperparameters using a validation set, collect the test set predictions, and evaluate the model performance using the pyllars library.
End of explanation
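# A minimal sketch of the fold scheme described above (illustrative variable
# names; num_folds_example mirrors the args.num_folds value set below): for a
# chosen validation fold i, the test fold is (i + 1) % num_folds and the
# remaining folds form the training set.
num_folds_example = 10
for i in range(3):   # just show the first few assignments
    val_fold = i
    test_fold = (i + 1) % num_folds_example
    train_folds = [f for f in range(num_folds_example) if f not in (val_fold, test_fold)]
    print(val_fold, test_fold, train_folds)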
args.random_state = 8675309 # several steps require a seed. we will use the same one to avoid unexpected results.
args.num_folds = 10 # use 10-fold cross-validation
args.evaluation_metric = 'mean_squared_error'
args.selection_strategy = np.argmin
# standard pydata imports
import joblib
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import pandas as pd
import pathlib
import seaborn as sns; sns.set(style='white', color_codes=True)
import tqdm
Explanation: Provide "command line" arguments
End of explanation
from typing import Any, Callable, Container, Dict, Iterable, List, NamedTuple, Optional, Set, Tuple
# this function *is not* in ml_utils
# we can do whatever we want here
def evaluate_hyperparameters_helper(
hv:List,
args:Namespace,
estimator_template:sklearn.base.BaseEstimator,
data:pd.DataFrame,
collect_metrics:Callable,
train_folds:Optional[Any]=None,
split_field:str='fold',
target_field:str='target',
target_transform:Optional[Callable]=None,
target_inverse_transform:Optional[Callable]=None,
collect_metrics_kwargs:Optional[Dict]=None,
fields_to_ignore:Optional[Container[str]]=None) -> NamedTuple:
# these come from our iterator
hyperparameters = hv[0]
validation_folds = hv[1]
# we know we are doing 10-fold cv
test_folds = (validation_folds + 1) % args.num_folds
res = ml_utils.evaluate_hyperparameters(
estimator_template=estimator_template,
hyperparameters=hyperparameters,
validation_folds=validation_folds,
test_folds=test_folds,
data=data,
collect_metrics=collect_metrics,
train_folds=train_folds,
split_field=split_field,
target_field=target_field,
target_transform=target_transform,
target_inverse_transform=target_inverse_transform,
collect_metrics_kwargs=collect_metrics_kwargs,
fields_to_ignore=fields_to_ignore
)
return res
Explanation: TODO: The evaluation always uses predict. Currently, there is not a way to tell it to use predict_proba (or anything else).
End of explanation
def load_dataset():
data = sklearn.datasets.load_boston()
df = pd.DataFrame(data['data'], columns=data['feature_names'])
df['target'] = data['target']
return df
df = load_dataset()
###
# Determine the fold of row.
#
# N.B. This could be performed once in an "offline" step
# and stored as another column in the dataframe
###
folds = ml_utils.get_cv_folds(
df['target'],
num_splits=args.num_folds,
use_stratified=False, # stratified does not work for regression
shuffle=True,
random_state=args.random_state # ensure we always shuffle the same way
)
# add a column to indicate the fold of each row
df['fold'] = folds
df.head()
Explanation: Load a small regression dataset
We will use the boston dataset made available in sklearn. All of its features are numeric, and the target is also numeric.
End of explanation
# We could also include things like PCA here. However, one
# advantage of trees (and forests) is that we can assign
# importances to features; if we use dimensionality reduction
# or other techniques which significantly change the
# interpretation of the features, though, we largely lose
# that advantage.
estimator_template = sklearn.pipeline.Pipeline([
('scaler',sklearn.preprocessing.StandardScaler()),
('xgb', xgboost.XGBRegressor())
])
# In practice, this is a small hyperparameter grid; xgboost has
# many more hyperparameters that can be worth investigating.
#
# A quick overview of some of the most important hyperparameters and
# their interpretation is available here:
# https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/
hyperparam_grid = sklearn.model_selection.ParameterGrid({
'xgb__n_estimators': [50, 100, 500],
'xgb__learning_rate': [.01,0.1],
})
Explanation: Create our estimator and hyperparameter grid
We will use a simple scaling followed by extreme gradient boosting for regression.
End of explanation
# for simplicity, just create lists of everything
# In principle, a lazy generator or something more fancy could
# be used; in practice, this is usually not necessary unless
# the hyperparameter grid contains very large objects.
hyperparam_grid = list(hyperparam_grid)
folds = list(range(args.num_folds))
# an iterator over (hyperparameter, validation_fold) tuples
hp_fold_it = itertools.product(hyperparam_grid, folds)
hp_fold_it = list(hp_fold_it)
df.head()
hyperparameters, validation_fold = hp_fold_it[51]
test_fold = (validation_fold + 1) % 10
res = ml_utils.evaluate_hyperparameters(
estimator_template=estimator_template,
hyperparameters=hyperparameters,
validation_folds=validation_fold,
test_folds=test_fold,
data=df,
split_field='fold',
target_field='target',
collect_metrics=ml_utils.collect_regression_metrics,
target_transform=np.log1p,
target_inverse_transform=np.expm1
)
res.metrics_val
res = evaluate_hyperparameters_helper(
hp_fold_it[51],
args=args,
estimator_template=estimator_template,
data=df,
split_field='fold',
target_field='target',
collect_metrics=ml_utils.collect_regression_metrics,
target_transform=np.log1p,
target_inverse_transform=np.expm1
)
res.metrics_val
res.hyperparameters_str
f_res = dask_utils.apply_iter(
hp_fold_it,
dask_client,
evaluate_hyperparameters_helper,
args=args,
estimator_template=estimator_template,
data=df,
split_field='fold',
target_field='target',
collect_metrics=ml_utils.collect_regression_metrics,
target_transform=np.log1p,
target_inverse_transform=np.expm1,
return_futures=True
)
dask_utils.check_status(f_res)
all_res = dask_utils.collect_results(f_res)
all_res[51].metrics_val
def _get_res(res):
ret_val = {
'validation_{}'.format(k): v
for k,v in res.metrics_val.items()
}
ret_test = {
'test_{}'.format(k): v
for k,v in res.metrics_test.items()
}
ret = ret_val
ret.update(ret_test)
hp_string = json.dumps(res.hyperparameters)
ret['hyperparameters_str'] = hp_string
ret['hyperparameters'] = res.hyperparameters
ret['validation_fold'] = res.fold_val
ret['test_fold'] = res.fold_test
return ret
###
# Create the results data frame
###
results = [
_get_res(res) for res in all_res
]
df_results = pd.DataFrame(results)
df_results.head()
###
# Based on the performance on the validation set, select
# the best hyperparameters.
###
hp_groups = df_results.groupby('hyperparameters_str')
validation_evaluation_metric = "validation_{}".format(args.evaluation_metric)
test_evaluation_metric = "test_{}".format(args.evaluation_metric)
val_performance = hp_groups[validation_evaluation_metric].mean()
# now, select the best
val_best = args.selection_strategy(val_performance)
val_best
m_val_best = (df_results['hyperparameters_str'] == val_best)
df_results[m_val_best]
# go back and select the predictions for the best hyperparameters
best_res = [
res for res in all_res
if res.hyperparameters_str == val_best
]
len(best_res)
best_res[0].predictions_test
best_res[0].true_test
hp_groups['test_mean_absolute_error'].mean()
df_results.columns
###
# Extract masks for the training, validation, and testing
# sets.
###
splits = ml_utils.get_train_val_test_splits(
df,
validation_splits=validation_set,
test_splits=test_set,
split_field='fold'
)
###
# Create the data matrices necessary for the various
# sklearn operations we will perform later.
###
# we do not want to use metadata fields
fields_to_ignore = [
'fold'
]
fold_data = ml_utils.get_fold_data(
df,
target_field='target',
m_train=splits.training,
m_test=splits.test,
m_validation=splits.validation,
fields_to_ignore=fields_to_ignore
)
###
# Based on the template of our estimator pipeline template
# and hyperparameters, create a concrete estimator with the
# specified hyperparameters.
###
estimator = sklearn.clone(estimator_template)
estimator = estimator.set_params(**hyperparameters)
###
# Transform the target variable by taking the log(1+y).
#
# This operation may not help in all domains.
###
y_train = np.log1p(fold_data.y_train)
###
# Fit the estimator on the training set
###
estimator_fit = estimator.fit(fold_data.X_train, y_train)
###
# Use the fit estimator to make predictions on *both* the
# validation and testing set. We will use both of these later.
###
y_pred = estimator_fit.predict(fold_data.X_test)
y_val = estimator_fit.predict(fold_data.X_validation)
# make sure to transform the predictions back to the original
# scale using exp(y-1)
y_pred = np.expm1(y_pred)
y_val = np.expm1(y_val)
###
# Collect various evaluation metrics of the trained model
# on the validation and testing data.
###
metrics_val = ml_utils.collect_regression_metrics(fold_data.y_validation, y_val, prefix="val_")
metrics_test = ml_utils.collect_regression_metrics(fold_data.y_test, y_pred, prefix="test_")
metrics_test
metrics_val
###
# Construct a summary of the hyperparameters, validation
# testing set, as well as the performance of the trained
# model on those sets. We will use the performance on the
# validation set across all folds in order to select the
# optimal hyperparameters for evaluation on the test set.
###
ret = {
'val_set': validation_set,
'test_set': test_set,
'hyperparameter': str(hyperparameters),
}
ret.update(metrics_test)
ret.update(metrics_val)
Explanation: Create an iterator over folds and hyperparameter configurations
As described in the introduction, we will test each set of hyperparameters in our grid on each fold (while training on eight of the other folds). For selecting our final set of hyperparameters, we will use performance on the validation fold. (More details are given on this procedure below.)
Concretely, we will accomplish this by iterating over the cross-product of the sets of hyperparameters and validation set folds. (As described, given the validation fold index, we can determine the training and testing folds.)
End of explanation |
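# A quick, illustrative check of the size of the job list: the cross-product has
# len(hyperparam_grid) * num_folds entries, i.e. (3 * 2) * 10 = 60
# (hyperparameters, validation fold) tasks, one per dask call.
print(len(hyperparam_grid), args.num_folds, len(hp_fold_it))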
4,157 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Survived SibSp Parch | Problem:
import pandas as pd
df = pd.DataFrame({'Survived': [0,1,1,1,0],
'SibSp': [1,1,0,1,0],
'Parch': [0,0,0,0,1]})
import numpy as np
def g(df):
family = np.where((df['SibSp'] + df['Parch']) >= 1 , 'Has Family', 'No Family')
return df.groupby(family)['Survived'].mean()
result = g(df.copy()) |
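# For the small example frame above, rows 0, 1, 3 and 4 fall into 'Has Family'
# (mean Survived 0.5) and row 2 into 'No Family' (mean Survived 1.0).
# Illustrative check only:
print(result)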
4,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced settings for WHFast
Step1: By default WHFast synchronizes and recalculates the Jacobi coordinates from the inertial ones every timestep. This guarantees that the user always gets physical particle states for output, and ensures reliable output if the user decides to, e.g., grow the particles' masses between timesteps.
Now that you understand the pitfalls, if you want to boost WHFast's performance, you simply set
Step2: Now it becomes the user's responsibility to appropriately synchronize and recalculate jacobi coordinates when needed. You can tell WHFast to recalculate Jacobi coordinates for a given timestep (say after you change a particle's mass) with the sim.ri_whfast.recalculate_jacobi_this_timestep flag. After it recalculates Jacobi coordinates, WHFast will reset this flag to zero, so you just set it each time you mess with the particles.
Step3: In our test case with a single planet, there is effectively no interaction step, and by combining Kepler steps we get almost the full factor of 2 speedup we expect. Because Kepler steps are expensive (by virtue of having to solve the transcendental Kepler equation), this will always be an important performance boost for few-planet cases.
Note that one case where REBOUND needs to synchronize every timestep is if you're using the MEGNO chaos indicator. So if you call
Step4: REBOUND will synchronize every timestep even if you set sim.ri_whfast.safe_mode = 0 and never explicitly call sim.integrator_synchronize().
Modifying particles/forces
Again, if performance is a factor in your simulations, you would not want to write a custom stepper in python that modifies the particles, since this will be very slow. You could either write a modified C version of reb_integrate in src/librebound.c (the flags are defined in librebound.h, and have the same name as the python ones, just without sim. in front), or you can use the REBOUNDXF library, which takes care of this for you and supports many typically used modifications. We again illustrate a simple scheme with python code
Step5: Here, because we grow the mass of the planet every timestep, we have to recalculate Jacobi coordinates every timestep (since they depend on the masses of the particles). We therefore manually set the flag to recalculate them the next timestep every time we make a change. Here we would actually get the same result if we just left sim.ri_whfast.safe_mode = 1, since when recalculating Jacobi coordinates, WHFast automatically has to synchronize in order to get real positions and velocities for the planets. In this case WHFast is therefore synchronizing and recalculating Jacobi coordinates every timestep.
But imagine now that instead of growing the mass, we continually add an impulse to vx
Step6: This would not give accurate results, because the sim.particles[1].vx we access after sim.step() isn't a physical velocity (it's missing a half-Kepler step). It's basically at an intermediate point in the calculation. In order to make this work, one would call sim.integrator_synchronize() between sim.step() and accessing sim.particles[1].vx, to ensure the velocity is physical.
Symplectic correctors
Symplectic correctors make the Wisdom-Holman scheme higher order (without symplectic correctors it's second order). The great thing about them is that they only need to get applied when you synchronize. So if you just need to synchronize to output, and there are many timesteps between outputs, they represent a very small performance loss for a huge boost in accuracy (compare for example the green line (11th order corrector) to the red line (no corrector) in Fig. 4 of Rein & Tamayo 2015--beyond the right of the plot, where the round-off errors dominate, the two lines would rise in unison). We have implemented symplectic correctors up to order 11. You can set the order with (must be an odd number), e.g., | Python Code:
import rebound
import numpy as np
def test_case():
sim = rebound.Simulation()
sim.integrator = 'whfast'
sim.add(m=1.) # add the Sun
sim.add(m=3.e-6, a=1.) # add Earth
sim.move_to_com()
sim.dt = 0.2
return sim
Explanation: Advanced settings for WHFast: Extra speed, accuracy, and additional forces
There are several performance enhancements one can make to WHFast. However, each one has pitfalls that an inexperienced user can unwittingly fall into. We therefore chose safe default settings that make the integrator difficult to misuse. This makes the default WHFast substantially slower and less accurate than it can be. Here we describe how to alter the integrator settings to improve WHFast's performance.
TL;DR
As long as
you don't add, remove or otherwise modify particles between timesteps
you get your outputs by passing a list of output times ahead of time and access the particles pointer between calls to sim.integrate() (see, e.g., the Visualization section of WHFast.ipynb)
you can set sim.ri_whfast.safe_mode = 0 to get a substantial performance boost. Under the same stipulations, you can set sim.ri_whfast.corrector = 11 to get much higher accuracy, at a nearly negligible loss of performance (as long as there are many timesteps between outputs).
If you want to modify particles, or if the code breaks with these advanced settings, read below for details, and check out the Common mistake with WHFast section at the bottom of WHFast.ipynb.
The Wisdom-Holman algorithm
In order to understand and apply the various integrator flags, we need to first understand the Wisdom-Holman scheme (see, e.g., Wisdom & Holman 1991, or Rein & Tamayo 2015 for more details).
The Wisdom-Holman algorithm consists of alternating Keplerian steps that evolve particles on their two-body Keplerian orbits around the star with interaction steps that apply impulses to the particles' velocities from the interactions between bodies. The basic algorithm for a single timestep $dt$ is a Leapfrog Drift-Kick-Drift scheme with an interaction kick over the full $dt$ sandwiched between half timesteps of Keplerian drift:
$H_{Kepler}(dt/2)\:H_{Interaction}(dt)\:H_{Kepler}(dt/2)$
Timesteps like the one above are then concatenated over the full integration:
$H_{Kepler}(dt/2)\:H_{Interaction}(dt)\:H_{Kepler}(dt/2)$ $H_{Kepler}(dt/2)\:H_{Interaction}(dt)\:H_{Kepler}(dt/2)$ ... $H_{Kepler}(dt/2)\:H_{Interaction}(dt)\:H_{Kepler}(dt/2)$
Combining Kepler steps and synchronizing
It turns out that Kepler steps take longer than interaction steps as long as you don't have many planets, so an obvious and important performance boost would be to combine adjacent Kepler half-steps into full ones, i.e.:
$H_{Kepler}(dt/2)\:H_{Interaction}(dt)\:H_{Kepler}(dt)\:H_{Interaction}(dt)\:H_{Kepler}(dt) ... \:H_{Interaction}(dt)\:H_{Kepler}(dt/2)$
The issue is that if you were to, say, output the state of the particles as the simulation progressed, the positions would not correspond to anything real, since the beginning (or end) of one of the full $H_{Kepler}(dt)$ steps corresponds to some intermediate step in an abstract sequence of calculations for a given timestep. In order to get the particles' actual positions, we would have to calculate to the end the timestep we want the output for by splitting a full Kepler step back into two half-steps, e.g.,
$H_{Kepler}(dt/2)\:H_{Interaction}(dt)\:H_{Kepler}(dt)\:H_{Interaction}(dt)\:H_{Kepler}(dt/2) \text{PRINT OUTPUT} H_{Kepler}(dt/2) H_{Interaction}(dt)\:H_{Kepler}(dt)$...
We call this step of reinserting half-Kepler steps to obtain the physical state of the particles synchronizing. This must be done whenever the actual states of the particles are required, e.g., before every output, or if one wanted to use the particles' states to compute additional changes to the particle orbits between timesteps. It is also necessary to synchronize each timestep whenever the MEGNO chaos indicator is being computed.
Conversions between Jacobi and Inertial Coordinates
It turns out that the most convenient coordinate system to work in for performing the Kepler steps is Jacobi coordinates (see, e.g., 9.5.4 of Murray & Dermott). WHFast therefore works in Jacobi coordinates, converting to inertial coordinates when it needs to (e.g. for output, and for doing the direct gravity calculation in the interaction step, which is most easily done in inertial coordinates).
One feature of WHFast is that it works in whatever inertial coordinate system you choose for your initial conditions. This means that whatever happens behind the scenes, the user always gets the particles' inertial coordinates at the front end. At the beginning of every timestep, WHFast therefore has to somehow obtain the Jacobi coordinates. The straightforward thing would be to convert from the inertial coordinates to Jacobi coordinates every timestep, but these conversions slow things down, and they represent extra operations that grow the round-off error.
WHFast therefore stores the Jacobi coordinates internally throughout the time it is running, and only recalculates Jacobi coordinates from the inertial ones if told to do so. Since Jacobi coordinates reference particles to the center of mass of all the particles with indices lower than their own (typically all the particles interior to them), the main reason you would have to recalculate Jacobi coordinates is if between timesteps you choose to somehow change the particles' positions or velocities (give them kicks in addition to their mutual gravity), or change the particles' masses.
Overriding the defaults
Let's begin by importing rebound, and defining a simple function to reset rebound and initialize a new simulation with a test case,
End of explanation
sim = test_case()
sim.ri_whfast.safe_mode = 0
Explanation: By default WHFast synchronizes and recalculates the Jacobi coordinates from the inertial ones every timestep. This guarantees that the user always gets physical particle states for output, and ensures reliable output if the user decides to, e.g., grow the particles' masses between timesteps.
Now that you understand the pitfalls, if you want to boost WHFast's performance, you simply set
End of explanation
import time
Porb = 2*np.pi # orbital period for Earth, using units of G = 1, solar masses, AU and yr/2pi
sim = test_case()
print("safe_mode = {0}".format(sim.ri_whfast.safe_mode))
start_time = time.time()
sim.integrate(1.e5*Porb)
sim.status()
print("Safe integration took {0} seconds".format(time.time() - start_time))
sim = test_case()
sim.ri_whfast.safe_mode = 0
start_time = time.time()
sim.integrate(1.e5*Porb)
sim.status()
print("Manual integration took {0} seconds".format(time.time() - start_time))
Explanation: Now it becomes the user's responsibility to appropriately synchronize and recalculate jacobi coordinates when needed. You can tell WHFast to recalculate Jacobi coordinates for a given timestep (say after you change a particle's mass) with the sim.ri_whfast.recalculate_jacobi_this_timestep flag. After it recalculates Jacobi coordinates, WHFast will reset this flag to zero, so you just set it each time you mess with the particles.
End of explanation
sim.init_megno()
Explanation: In our test case with a single planet, there is effectively no interaction step, and by combining Kepler steps we get almost the full factor of 2 speedup we expect. Because Kepler steps are expensive (by virtue of having to solve the transcendental Kepler equation), this will always be an important performance boost for few-planet cases.
Note that one case where REBOUND needs to synchronize every timestep is if you're using the MEGNO chaos indicator. So if you call
End of explanation
sim = test_case()
sim.ri_whfast.safe_mode = 0
def integrate_mod(sim, t_final):
while sim.t < t_final:
sim.step()
sim.particles[1].m += 1.e-10
sim.ri_whfast.recalculate_jacobi_this_timestep = 1
sim.integrator_synchronize()
Explanation: REBOUND will synchronize every timestep even if you set sim.ri_whfast.safe_mode = 0 and never explicitly call sim.integrator_synchronize().
Modifying particles/forces
Again, if performance is a factor in your simulations, you would not want to write a custom stepper in python that modifies the particles, since this will be very slow. You could either write a modified C version of reb_integrate in src/librebound.c (the flags are defined in librebound.h, and have the same name as the python ones, just without sim. in front), or you can use the REBOUNDXF library, which takes care of this for you and supports many typically used modifications. We again illustrate a simple scheme with python code:
End of explanation
sim = test_case()
sim.ri_whfast.safe_mode = 1
def integrate_mod(sim, t_final):
while sim.t < t_final:
sim.step()
sim.particles[1].vx += 1.e-10*sim.dt
sim.ri_whfast.recalculate_jacobi_this_timestep = 1
sim.integrator_synchronize()
Explanation: Here, because we grow the mass of the planet every timestep, we have to recalculate Jacobi coordinates every timestep (since they depend on the masses of the particles). We therefore manually set the flag to recalculate them the next timestep every time we make a change. Here we would actually get the same result if we just left sim.ri_whfast.safe_mode = 1, since when recalculating Jacobi coordinates, WHFast automatically has to synchronize in order to get real positions and velocities for the planets. In this case WHFast is therefore synchronizing and recalculating Jacobi coordinates every timestep.
But imagine now that instead of growing the mass, we continually add an impulse to vx:
End of explanation
sim.ri_whfast.corrector = 11
Explanation: This would not give accurate results, because the sim.particles[1].vx we access after sim.step() isn't a physical velocity (it's missing a half-Kepler step). It's basically at an intermediate point in the calculation. In order to make this work, one would call sim.integrator_synchronize() between sim.step() and accessing sim.particles[1].vx, to ensure the velocity is physical.
Symplectic correctors
Symplectic correctors make the Wisdom-Holman scheme higher order (without symplectic correctors it's second order). The great thing about them is that they only need to get applied when you synchronize. So if you just need to synchronize to output, and there are many timesteps between outputs, they represent a very small performance loss for a huge boost in accuracy (compare for example the green line (11th order corrector) to the red line (no corrector) in Fig. 4 of Rein & Tamayo 2015--beyond the right of the plot, where the round-off errors dominate, the two lines would rise in unison). We have implemented symplectic correctors up to order 11. You can set the order with (must be an odd number), e.g.,
End of explanation |
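# A minimal sketch (reusing test_case() from above) of the safe pattern the text
# describes: with safe_mode = 0, synchronize before reading particle states, and
# optionally enable a symplectic corrector. Parameter choices here are
# illustrative only.
sim = test_case()
sim.ri_whfast.safe_mode = 0
sim.ri_whfast.corrector = 11
for _ in range(100):
    sim.step()
sim.integrator_synchronize()   # make positions/velocities physical before output
print(sim.particles[1].x, sim.particles[1].vx)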
4,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing an unbinned analysis
In this tutorial you will learn to fit a parametric model to the event data (unbinned fit) and how to inspect the fit residuals
Now you are ready to fit the models for the source and the background to the data.
We start by importing gammalib, ctools, and cscripts.
Step1: We will also use matplotlib to display the results.
Step2: Finally we add to our path the directory containing the example plotting scripts provided with the ctools installation.
Step3: Preparing the model
First, we will merge the two models we derived in the previous tutorials for source and background.
Step4: Note how the source found by cssrcdetect was named Src001. We will call it Crab instead. Also, spectral parameters are set to default values, but you should make sure they are appropriate so that the model fit runs seamlessly. In this case it is best to set the value of the pivot energy for the Crab within our energy range (> 0.66 TeV). We will set it to 1 TeV.
Step5: We will save this model to disk for later use.
Step6: Fitting the model to the data
The model fit is performed by ctlike. We will use the previously selected events.
Step7: We can now look at the results from the optimisation and the fitted model.
Step8: The statistical 1-sigma positional uncertainty corresponds to 0.12 arcmin. Systematic uncertainties are not computed. The fitted position can be compared to the values of 83.629±0.005 degrees in Right Ascension and 22.012±0.001 degrees in Declination reported in Holler et al. (2017).
According to SIMBAD, the Crab nebula is situated at a Right Ascension of 83.633 degrees and a Declination of 22.015 degrees, which is 0.013 degrees (0.82 arcmin) away from the fitted position.
The intensity at 1 TeV of the Crab was fitted to (4.89±0.27)×10⁻¹¹ photons cm⁻² s⁻¹ TeV⁻¹, and the spectral index of the power law is −2.70±0.07. This can be compared to the values of (3.45±0.05)×10⁻¹¹ photons cm⁻² s⁻¹ TeV⁻¹ and −2.63±0.01 reported in Aharonian et al. (2006), A&A, 457, 899 (note that the datasets, calibrations etc. used in our analysis are not the same as in that paper).
In the ctlike run above the energy dispersion, which relates the true photon energies to the energies of the reconstructed events, was not taken into account. By default energy dispersion usage is disabled since it involves an extra dimension in the data analysis which slows down the computations. We can run accounting for energy dispersion to compare the results.
Step9: You can verify that the results are broadly consistent with those obtained ignoring the energy dispersion.
Inspecting the fit residuals
Following a model fit, you should always inspect the fit residuals. First let’s inspect the spectral residuals. You can do this using the csresspec script.
Step10: The spectral fit looks satisfactory. Finally you should also inspect the spatial residuals. You do this using the csresmap script.
Step11: We will inspect the residual map with a slight smoothing to suppress statistical fluctuations. | Python Code:
import gammalib
import ctools
import cscripts
Explanation: Performing an unbinned analysis
In this tutorial you will learn to fit a parametric model to the event data (unbinned fit) and how to inspect the fit residuals
Now you are ready to fit the models for the source and the background to the data.
We start by importing gammalib, ctools, and cscripts.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: We will also use matplotlib to display the results.
End of explanation
import sys
import os
sys.path.append(os.environ['CTOOLS']+'/share/examples/python/')
Explanation: Finally we add to our path the directory containing the example plotting scripts provided with the ctools installation.
End of explanation
srcmodel = 'crab.xml'
bkgmodel = 'bkgmodel.xml'
models = gammalib.GModels()
for inmodels in [srcmodel,bkgmodel]:
for model in gammalib.GModels(inmodels):
models.append(model)
print(models)
Explanation: Preparing the model
First, we will merge the two models we derived in the previous tutorials for source and background.
End of explanation
models['Src001'].name('Crab')
models['Crab']['PivotEnergy'].value(1.e6)
Explanation: Note how the source found by cssrcdetect was named Src001. We will call it Crab instead. Also, spectral parameters are set to default values, but you should make sure they are appropriate so that the model fit runs seamlessly. In this case it is best to set the value of the pivot energy for the Crab within our energy range (> 0.66 TeV). We will set it to 1 TeV.
End of explanation
modelfile = 'crab_models.xml'
models.save(modelfile)
Explanation: We will save this model to disk for later use.
End of explanation
obsfile = 'obs_crab_selected.xml'
like = ctools.ctlike()
like['inobs'] = obsfile
like['inmodel'] = modelfile
like.run()
Explanation: Fitting the model to the data
The model fit is performed by ctlike. We will use the previously selected events.
End of explanation
print(like.opt())
print(like.obs().models()['Crab'])
Explanation: We can now look at the results from the optimisation and the fitted model.
End of explanation
like['edisp'] = True
like.run()
print(like.opt())
print(like.obs().models()['Crab'])
Explanation: The statistical 1-sigma positional uncertainty corresponds to 0.12 arcmin. Systematic uncertainties are not computed. The fitted position can be compared to the values of 83.629±0.005 degrees in Right Ascension and 22.012±0.001 degrees in Declination reported in Holler et al. (2017).
According to SIMBAD, the Crab nebula is situated at a Right Ascension of 83.633 degrees and a Declination of 22.015 degrees, which is 0.013 degrees (0.82 arcmin) away from the fitted position.
The intensity at 1 TeV of the Crab was fitted to (4.89±0.27)×10⁻¹¹ photons cm⁻² s⁻¹ TeV⁻¹, and the spectral index of the power law is −2.70±0.07. This can be compared to the values of (3.45±0.05)×10⁻¹¹ photons cm⁻² s⁻¹ TeV⁻¹ and −2.63±0.01 reported in Aharonian et al. (2006), A&A, 457, 899 (note that the datasets, calibrations etc. used in our analysis are not the same as in that paper).
In the ctlike run above the energy dispersion, which relates the true photon energies to the energies of the reconstructed events, was not taken into account. By default energy dispersion usage is disabled since it involves an extra dimension in the data analysis which slows down the computations. We can run accounting for energy dispersion to compare the results.
End of explanation
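# An illustrative back-of-the-envelope separation check using the plain
# small-angle formula and the reference coordinates quoted in the text (not the
# fitted values printed above, which is why the number differs from 0.013 deg).
import numpy as np
ra1, dec1 = 83.633, 22.015   # SIMBAD position of the Crab nebula (deg)
ra2, dec2 = 83.629, 22.012   # position reported in Holler et al. (2017) (deg)
sep_deg = np.sqrt(((ra1 - ra2) * np.cos(np.radians(dec1)))**2 + (dec1 - dec2)**2)
print(sep_deg, sep_deg * 60.0, 'arcmin')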
residuals = 'residuals.fits'
resspec = cscripts.csresspec(like.obs())
resspec['stack'] = True
resspec['components'] = True
resspec['ebinalg'] = 'LOG'
resspec['emin'] = 0.66
resspec['emax'] = 100.
resspec['enumbins'] = 20
resspec['proj'] = 'CAR'
resspec['coordsys'] = 'CEL'
resspec['xref'] = 83.63
resspec['yref'] = 22.01
resspec['binsz'] = 0.02
resspec['nxpix'] = 200
resspec['nypix'] = 200
resspec['algorithm'] = 'SIGNIFICANCE'
resspec['outfile'] = residuals
resspec.execute()
from show_residuals import plot_residuals
plot_residuals(residuals,'',0)
Explanation: You can verify that the results are broadly consistent with those obtained ignoring the energy dispersion.
Inspecting the fit residuals
Following a model fit, you should always inspect the fit residuals. First let’s inspect the spectral residuals. You can do this using the csresspec script.
End of explanation
resmap = cscripts.csresmap(like.obs())
resmap['emin'] = 0.66
resmap['emax'] = 100.0
resmap['proj'] = 'CAR'
resmap['coordsys'] = 'CEL'
resmap['xref'] = 83.63
resmap['yref'] = 22.01
resmap['binsz'] = 0.02
resmap['nxpix'] = 200
resmap['nypix'] = 200
resmap['algorithm'] = 'SUBDIV'
resmap.run()
Explanation: The spectral fit looks satisfactory. Finally you should also inspect the spatial residuals. You do this using the csresmap script.
End of explanation
resmap._resmap.smooth('GAUSSIAN',0.1)
ax = plt.subplot()
plt.imshow(resmap._resmap.array(),origin='lower',
cmap='bwr',vmin=-1,vmax=1,
extent=[83.63+0.02*200,83.63-0.02*200,22.01-0.02*200,22.01+0.02*200])
# Boundaries of the coord grid
ax.set_xlabel('R.A. (deg)')
ax.set_ylabel('Dec (deg)')
cbar = plt.colorbar()
cbar.set_label('Residuals/Total Counts')
Explanation: We will inspect the residual map with a slight smoothing to suppress statistical fluctuations.
End of explanation |
4,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kmeans from scratch
1.data production
Step1: We generated randomly distributed points centered on the four points (1, 1), (1, 2), (2, 2), (2, 1). If our clustering algorithm is correct, the centers it finds should be very close to these four points. Let's first describe the kmeans algorithm in plain language:
First step - Randomly select K points as the cluster centers; this means we want to divide the data into K classes.
Second step - Iterate over all points P, compute the distance from P to each cluster center, and assign P to the point set of its nearest cluster center. After the iteration we obtain K point sets.
Third step - Iterate over each point set, compute its center position, and take it as the new cluster center.
Fourth step - Repeat steps 2 and 3 until the cluster centers no longer move.
Step2: Finding the value of K
The above covers the full KMeans procedure, but one question remains: how to determine K. In the demonstration above, since we generated the data ourselves, K was easy to determine, but in real settings we often cannot immediately tell what K should be.
A fairly general approach is to compute the average distance of each point to its own cluster center. The larger K is, the smaller this average distance becomes in theory. But when we plot the average distance against K, we find an elbow point: before the elbow the average distance drops rapidly as K grows, while after the elbow the decrease becomes slow. Now we use the KMeans method from the sklearn library to run the clustering, and then plot how the average distance to the cluster centers changes.
#produce data set near the center
import numpy as np
import matplotlib.pyplot as plt
real_center = [(1,1),(1,2),(2,2),(2,1)]
point_number = 50
points_x = []
points_y = []
for center in real_center:
offset_x, offset_y = np.random.randn(point_number) * 0.3, np.random.randn(point_number) * 0.25
x_val, y_val = center[0] + offset_x, center[1] + offset_y
points_x.append(x_val)
points_y.append(y_val)
points_x = np.concatenate(points_x)
points_y = np.concatenate(points_y)
# plot the data points
plt.scatter(points_x, points_y, color='green', marker='+')
# plot the true center points
center_x, center_y = zip(*real_center)
plt.scatter(center_x, center_y, color='red', marker='^')
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.show()
Explanation: Kmeans from scratch
1.data production
End of explanation
# Step 1: randomly select K points as the initial cluster centers
K = 4
p_list = np.stack([points_x, points_y], axis=1)
index = np.random.choice(len(p_list), size=K)
centeroid = p_list[index]
# plotting below
for p in centeroid:
plt.scatter(p[0], p[1], marker='^')
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.show()
# Step 2: iterate over all points P and assign each to the set of its nearest cluster center
points_set = {key: [] for key in range(K)}
for p in p_list:
nearest_index = np.argmin(np.sum((centeroid - p) ** 2, axis=1) ** 0.5)
points_set[nearest_index].append(p)
# plotting below
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
plt.scatter(p_xs, p_ys, color='C{}'.format(k_index))
for ix, p in enumerate(centeroid):
plt.scatter(p[0], p[1], color='C{}'.format(ix), marker='^', edgecolor='black', s=128)
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.show()
# Step 3: iterate over each point set and compute the new cluster center
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
centeroid[k_index, 0] = sum(p_xs) / len(p_set)
centeroid[k_index, 1] = sum(p_ys) / len(p_set)
# Step 4: repeat the steps above
for i in range(10):
points_set = {key: [] for key in range(K)}
for p in p_list:
nearest_index = np.argmin(np.sum((centeroid - p) ** 2, axis=1) ** 0.5)
points_set[nearest_index].append(p)
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
centeroid[k_index, 0] = sum(p_xs) / len(p_set)
centeroid[k_index, 1] = sum(p_ys) / len(p_set)
for k_index, p_set in points_set.items():
p_xs = [p[0] for p in p_set]
p_ys = [p[1] for p in p_set]
plt.scatter(p_xs, p_ys, color='C{}'.format(k_index))
for ix, p in enumerate(centeroid):
plt.scatter(p[0], p[1], color='C{}'.format(ix), marker='^', edgecolor='black', s=128)
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.annotate('{} episode'.format(i + 1), xy=(2, 2.5), fontsize=14)
plt.show()
print(centeroid)
Explanation: We generate randomly scattered points around the four centers (1, 1), (1, 2), (2, 2) and (2, 1). If our clustering algorithm works correctly, the centers it finds should be close to these four points. In plain language, the k-means algorithm runs as follows:
Step 1 - Randomly pick K points as the initial cluster centers; this means we want to split the data into K clusters.
Step 2 - For every point P, compute its distance to each cluster center and assign P to the set of the nearest center. After this pass we have K point sets.
Step 3 - For each point set, compute its mean position and take it as the new cluster center.
Step 4 - Repeat steps 2 and 3 until the cluster centers stop moving; the sketch below wraps these steps into one reusable function.
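The following is only an illustration under the assumptions of this notebook (NumPy arrays, Euclidean distance); the function name, the tolerance and the seed argument are ours, not part of the original code. It packages the four steps with a convergence check instead of a fixed number of passes.
import numpy as np

def kmeans_sketch(points, k, max_iter=100, tol=1e-6, seed=None):
    # points: array of shape (n, d); k: number of clusters
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(max_iter):
        # step 2: assign each point to its nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 3: recompute each center as the mean of its assigned points
        new_centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(k)])
        # step 4: stop once the centers no longer move
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, labels
For example, kmeans_sketch(p_list, 4) should return centers close to the four true centers used above.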
End of explanation
from sklearn.cluster import KMeans
loss = []
for i in range(1, 10):
kmeans = KMeans(n_clusters=i, max_iter=100).fit(p_list)
loss.append(kmeans.inertia_ / point_number / K)
plt.plot(range(1, 10), loss)
plt.show()
Explanation: 寻找 K 值
以上已经介绍了 KMeans 方法的具体流程,但是我们还面临一个问题,如何确定 K 值——在以上的演示中,由于数据是我们自己生成的,所以我们很容易就确定了 K 值,但是真实的环境下,我们往往不能立马清楚 K 值的大小。
一种比较通用的解决方法是计算每个点到自己的聚类中心的平均距离,虽然说,K 值越大,理论上这个平均距离会越小。但是当我们画出平均距离随K值的变化曲线后,会发现其中存在一个肘点——在这个肘点前,平均距离随K值变大迅速下降,而在这个肘点后,平均距离的下降将变得缓慢。现在我们使用 sklearn 库中的 KMeans 方法来跑一下聚类过程,然后将到聚类中心的平均值变化作图。
End of explanation |
4,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialization
Welcome to the first assignment of "Improving Deep Neural Networks".
Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning.
If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results.
A well chosen initialization can
Step2: You would like a classifier to separate the blue dots from the red dots.
1 - Neural Network model
You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with
Step4: 2 - Zero initialization
There are two types of parameters to initialize in a neural network
Step5: Expected Output
Step6: The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary
Step8: The model is predicting 0 for every example.
In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]}=1$ for every layer; the network is then no more powerful than a linear classifier such as logistic regression.
<font color='blue'>
What you should remember
Step9: Expected Output
Step10: If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes.
Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
Step12: Observations
Step13: Expected Output | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
Explanation: Initialization
Welcome to the first assignment of "Improving Deep Neural Networks".
Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning.
If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results.
A well chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error
To get started, run the following cell to load the packages and the planar dataset you will try to classify.
End of explanation
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: You would like a classifier to separate the blue dots from the red dots.
1 - Neural Network model
You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
- Zeros initialization -- setting initialization = "zeros" in the input argument.
- Random initialization -- setting initialization = "random" in the input argument. This initializes the weights to large random values.
- He initialization -- setting initialization = "he" in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.
Instructions: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this model() calls.
End of explanation
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Zero initialization
There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$
Exercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
End of explanation
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 0. 0. 0.]
[ 0. 0. 0.]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[ 0. 0.]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using zeros initialization.
End of explanation
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
End of explanation
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: The model is predicting 0 for every example.
In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]}=1$ for every layer; the network is then no more powerful than a linear classifier such as logistic regression.
<font color='blue'>
What you should remember:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.
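To see the symmetry argument concretely, here is a tiny check with toy data (a sketch only, not part of the assignment; the _demo names are ours): with zero weights and biases, every hidden unit computes exactly the same activation, so any gradient flowing back is identical row by row and the units can never diverge from each other.
import numpy as np

def sigmoid_demo(z):
    return 1.0 / (1.0 + np.exp(-z))

np.random.seed(1)
X_demo = np.random.randn(2, 5)                  # 2 features, 5 toy examples
W1_demo = np.zeros((3, 2))                      # 3 hidden units, all weights zero
b1_demo = np.zeros((3, 1))
A1_demo = sigmoid_demo(np.dot(W1_demo, X_demo) + b1_demo)
print(A1_demo)                                  # all three rows are identical (0.5 everywhere)
print(np.allclose(A1_demo[0], A1_demo[1]), np.allclose(A1_demo[1], A1_demo[2]))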
3 - Random initialization
To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.
Exercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for weights and np.zeros((.., ..)) for biases. We are using a fixed np.random.seed(..) to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
End of explanation
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[-0.82741481 -6.27000677]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using random initialization.
End of explanation
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this (one possible fix is sketched below). But this isn't worth worrying about for our purposes.
Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
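One common way to make the loss numerically safer (a sketch only; the compute_loss provided in init_utils may handle this differently) is to clip the sigmoid output away from exactly 0 and 1 before taking logs:
import numpy as np

def safe_cross_entropy(a3, Y, eps=1e-12):
    # a3: sigmoid outputs of shape (1, m); Y: labels of shape (1, m)
    m = Y.shape[1]
    a3 = np.clip(a3, eps, 1 - eps)              # avoid log(0) -> inf
    return -np.sum(Y * np.log(a3) + (1 - Y) * np.log(1 - a3)) / m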
End of explanation
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Observations:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.
<font color='blue'>
In summary:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!
4 - He initialization
Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).)
Exercise: Implement the following function to initialize your parameters with He initialization.
Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
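For comparison only, a Xavier-style initializer would look almost identical to initialize_parameters_he, just with the sqrt(1./layers_dims[l-1]) factor mentioned above. This sketch is not required by the assignment and the function name is ours:
import numpy as np

def initialize_parameters_xavier(layers_dims, seed=3):
    np.random.seed(seed)
    parameters = {}
    L = len(layers_dims) - 1
    for l in range(1, L + 1):
        # same layout as the He version, but scaled by sqrt(1 / fan_in)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(1. / layers_dims[l - 1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters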
End of explanation
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]
[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using He initialization.
End of explanation |
4,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
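Since this property has cardinality 1.N, the pattern suggested by the comments above is presumably one DOC.set_value call per selected choice, for example (hypothetical selections, shown commented out):
# DOC.set_value("Primitive equations")
# DOC.set_value("Boussinesq")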
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
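As an illustration of filling a FLOAT property, a single call such as the one below would do; the number is a placeholder, not a statement about any particular model:
# DOC.set_value(3992.0)   # placeholder cpocean in J/(kg K); replace with your model's value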
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momemtum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momemtum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
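A minimal sketch for a single-valued enumeration; the chosen direction is illustrative and must be one of the valid choices listed above:
# Illustrative only
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
DOC.set_value("Isoneutral")  # hypothetical choice from the list above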
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
4,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:56
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
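A hypothetical example of recording an author (the name and email below are placeholders, not real contact details):
# Illustrative only - replace with the real author name and email
DOC.set_author("Jane Doe", "jane.doe@example.org")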
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
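A sketch of setting this INTEGER property (the tracer count is a placeholder):
# Illustrative only
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
DOC.set_value(12)  # hypothetical number of aerosol tracers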
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
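For example (illustrative value only), a 30-minute aerosol timestep would be recorded in seconds:
# Illustrative only - value in seconds
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
DOC.set_value(1800)  # hypothetical 30-minute timestep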
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
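A sketch for this 1.N enumeration, using values from the list above; the assumption that repeated calls add further entries follows the fill-in pattern used for other list-valued properties in this notebook:
# Illustrative only - values must come from the valid choices listed above
DOC.set_id('cmip6.aerosol.emissions.method')
DOC.set_value("Prescribed CMIP6")
DOC.set_value("Interactive")  # assumption: a further call records an additional method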
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
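An illustrative fill-in for this FLOAT property (the coefficient below is a placeholder, not a recommended value):
# Illustrative only
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
DOC.set_value(7.5)  # hypothetical absorption mass coefficient at 550nm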
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
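As a purely illustrative aside (these are not answers for any model): a 1.N ENUM property such as the one above is filled in with DOC.set_value, and it is an assumption here that one call is made per selected choice.
```Python
# Illustration only -- replace with the choices that actually apply to the model being documented.
# Assumption: one DOC.set_value call per selected choice for a 1.N ENUM property.
# DOC.set_value("Sulphate")
# DOC.set_value("Dust")
```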
4,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<figure>
<IMG SRC="gfx/Logo_norsk_pos.png" WIDTH=100 ALIGN="right">
</figure>
Operators and commutators
Roberto Di Remigio, Luca Frediani
We will be exercising our knowledge of operators and commutator algebra. These are extremely useful exercises, as you
will see these type of manipulations recurring throughout the rest of the course.
A note on notation
Step1: There is an extensive tutorial that you can refer to. Another useful example is the calculation
of definite and indefinite integrals using SymPy. Consider the following code snippet
Step2: This code snippet will instead calculate the definite integral of the same function
in a given interval | Python Code:
from sympy import *
# Define symbols
x, y, z = symbols('x y z')
# We want results to be printed to screen
init_printing(use_unicode=True)
# Calculate the derivative with respect to x
diff(exp(x**2), x)
Explanation: <figure>
<IMG SRC="gfx/Logo_norsk_pos.png" WIDTH=100 ALIGN="right">
</figure>
Operators and commutators
Roberto Di Remigio, Luca Frediani
We will be exercising our knowledge of operators and commutator algebra. These are extremely useful exercises, as you
will see these type of manipulations recurring throughout the rest of the course.
A note on notation:
an operator will be designed by putting an hat on top of any letter:
\begin{equation}
\hat{A},\,\hat{O},\,\hat{b},\,\hat{\gamma}
\end{equation}
the commutator of two operators is defined as:
\begin{equation}
[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}
\end{equation}
the position and momentum operator are defined as:
\begin{equation}
\hat{x}_i = x_i\cdot \quad\quad \hat{p}_i = -\mathrm{i}\hbar\frac{\partial}{\partial x_i}
\end{equation}
where $i$ refers to any of the three Cartesian components, i.e. $i = x, y, z$
the Canonical Commutation Relations (CCR) are:
\begin{alignat}{3}
[x_i, x_j] = 0; \quad& [p_i, p_j] = 0; \quad& [x_i, p_j] = \mathrm{i}\hbar \delta_{ij}
\end{alignat}
where the Kronecker $\delta$ symbol is defined as:
\begin{equation}
\delta_{ij} =
\begin{cases}
1 & \text{if } i = j \
0 & \text{if } i \neq j
\end{cases}
\end{equation}
Dirac braket notation. We will interpret the following symbols as:
\begin{equation}
\langle \psi | \phi \rangle = \int \mathrm{d} \mathbf{r} \psi^(\mathbf{r})\phi(\mathbf{r})
\end{equation}
\begin{equation}
\langle \psi | \hat{A} | \phi \rangle = \int\mathrm{d} \mathbf{r} \psi^(\mathbf{r})\hat{A}\phi(\mathbf{r})
\end{equation}
Using SymPy
SymPy is a Python library for symbolic mathematics. It can be used to evaluate derivatives, definite and indefinite integrals, differential equations and much more.
As an example, the following code will evaluate the derivative of $\exp(x^2)$ and print it to screen:
Python
from sympy import *
x, y, z = symbols('x y z')
init_printing(use_unicode=True)
diff(exp(x**2), x)
End of explanation
integrate(cos(x), x)
Explanation: There is an extensive tutorial that you can refer to. Another useful example is the calculation
of definite and indefinite integrals using SymPy. Consider the following code snippet:
```Python
# An indefinite integral
integrate(cos(x), x)
```
This will calculate the primitive function of $\cos(x)$:
\begin{equation}
\int \cos(x)\mathrm{d}x = \sin(x) + C
\end{equation}
End of explanation
integrate(cos(x), (x, -pi/2., pi/2.))
Explanation: This code snippet will instead calculate the definite integral of the same function
in a given interval:
\begin{equation}
\int_{-\pi/2}^{\pi/2} \cos(x)\mathrm{d}x =[\sin(x)]_{-\pi/2}^{\pi/2} = 2
\end{equation}
```Python
# A definite integral
integrate(cos(x), (x, -pi/2., pi/2.))
```
End of explanation |
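The commutator algebra introduced above can also be checked symbolically. The following is a small illustrative sketch (not part of the original exercise set) that applies $[\hat{x},\hat{p}]$ to a generic test function with SymPy:
```Python
# Sketch: verify [x, p] f(x) = i*hbar*f(x) symbolically
from sympy import symbols, Function, I, diff, simplify

x, hbar = symbols('x hbar', real=True)
f = Function('f')(x)

x_hat = lambda psi: x*psi                      # position operator: multiply by x
p_hat = lambda psi: -I*hbar*diff(psi, x)       # momentum operator: -i*hbar d/dx

commutator = x_hat(p_hat(f)) - p_hat(x_hat(f)) # [x, p] applied to f
simplify(commutator)                           # expected result: I*hbar*f(x)
```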
4,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predict with pre-trained models
This is a demo for predicting with a pre-trained model on the full imagenet dataset, which contains over 10 million images and 10 thousands classes. For a more detailed explanation, please refer to predict.ipynb.
We first load the pre-trained model.
Step1: Create a model for this model on GPU 0.
Step2: Next we define the function to obtain an image by a given URL and the function for predicting.
Step3: We are able to classify an image and output the top predicted classes. | Python Code:
import os, urllib
import mxnet as mx
def download(url,prefix=''):
filename = prefix+url.split("/")[-1]
if not os.path.exists(filename):
urllib.urlretrieve(url, filename)
path='http://data.mxnet.io/models/imagenet-11k/'
download(path+'resnet-152/resnet-152-symbol.json', 'full-')
download(path+'resnet-152/resnet-152-0000.params', 'full-')
download(path+'synset.txt', 'full-')
with open('full-synset.txt', 'r') as f:
synsets = [l.rstrip() for l in f]
sym, arg_params, aux_params = mx.model.load_checkpoint('full-resnet-152', 0)
Explanation: Predict with pre-trained models
This is a demo for predicting with a pre-trained model on the full imagenet dataset, which contains over 10 million images and 10 thousand classes. For a more detailed explanation, please refer to predict.ipynb.
We first load the pre-trained model.
End of explanation
mod = mx.mod.Module(symbol=sym, label_names=None, context=mx.gpu())
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))], label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True)
Explanation: Create a model for this model on GPU 0.
End of explanation
%matplotlib inline
import matplotlib
matplotlib.rc("savefig", dpi=100)
import matplotlib.pyplot as plt
import cv2
import numpy as np
from collections import namedtuple
Batch = namedtuple('Batch', ['data'])
def get_image(url, show=True):
filename = url.split("/")[-1]
urllib.urlretrieve(url, filename)
img = cv2.imread(filename)
if img is None:
print('failed to download ' + url)
if show:
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.axis('off')
return filename
def predict(filename, mod, synsets):
    # read the image, check the read succeeded, then convert BGR -> RGB
    img = cv2.imread(filename)
    if img is None:
        return None
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # resize to the network input and reorder HWC -> CHW with a leading batch axis
    img = cv2.resize(img, (224, 224))
    img = np.swapaxes(img, 0, 2)
    img = np.swapaxes(img, 1, 2)
    img = img[np.newaxis, :]
mod.forward(Batch(data=[mx.nd.array(img)]))
prob = mod.get_outputs()[0].asnumpy()
prob = np.squeeze(prob)
a = np.argsort(prob)[::-1]
for i in a[0:5]:
print('probability=%f, class=%s' %(prob[i], synsets[i]))
Explanation: Next we define the function to obtain an image by a given URL and the function for predicting.
End of explanation
url = 'http://writm.com/wp-content/uploads/2016/08/Cat-hd-wallpapers.jpg'
predict(get_image(url), mod, synsets)
url = 'https://images-na.ssl-images-amazon.com/images/G/01/img15/pet-products/small-tiles/23695_pets_vertical_store_dogs_small_tile_8._CB312176604_.jpg'
predict(get_image(url), mod, synsets)
Explanation: We are able to classify an image and output the top predicted classes.
End of explanation |
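As a small aside (not in the original demo), the same pre-trained network can be bound on the CPU when no GPU is available; only the context changes.
```Python
# Sketch: CPU fallback for the same pre-trained network loaded above
mod_cpu = mx.mod.Module(symbol=sym, label_names=None, context=mx.cpu())
mod_cpu.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod_cpu.set_params(arg_params, aux_params, allow_missing=True)
```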
4,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - [email protected] - http
Step2: Set up the model in Shogun
Step3: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http
Step4: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http
Step5: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice
Step6: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
Step7: So far so good, now lets plot the density of this GMM using the code from above
Step8: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three comonents give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http
Step9: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
Step10: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here
Step11: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering. | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import shogun as sg
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance
# compute eigenvalues (ordered)
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = np.degrees(np.arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * np.sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
Explanation: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - [email protected] - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the GMM framework of the Google summer of code 2011 project of Alesis Novik - https://github.com/alesis
This notebook is about learning and using Gaussian <a href="https://en.wikipedia.org/wiki/Mixture_model">Mixture Models</a> (GMM) in Shogun. Below, we demonstrate how to use them for sampling, for density estimation via <a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a>, and for <a href="https://en.wikipedia.org/wiki/Data_clustering">clustering</a>.
Note that Shogun's interfaces for mixture models are deprecated and are soon to be replaced by more intuitive and efficient ones. This notebook contains some python magic at some places to compensate for this. However, all computations are done within Shogun itself.
Finite Mixture Models (skip if you just want code examples)
We begin by giving some intuition about mixture models. Consider an unobserved (or latent) discrete random variable taking $k$ states $s$ with probabilities $\text{Pr}(s=i)=\pi_i$ for $1\leq i \leq k$, and $k$ random variables $x_i|s_i$ with arbitrary densities or distributions, which are conditionally independent of each other given the state of $s$. In the finite mixture model, we model the probability or density for a single point $x$ being generated by the weighted mixture of the $x_i|s_i$
$$
p(x)=\sum_{i=1}^k\text{Pr}(s=i)p(x|s=i)=\sum_{i=1}^k \pi_i p(x|s=i)
$$
which is simply the marginalisation over the latent variable $s$. Note that $\sum_{i=1}^k\pi_i=1$.
For example, for the Gaussian mixture model (GMM), we get (adding a collection of parameters $\theta:=\{\boldsymbol{\mu}_i, \Sigma_i\}_{i=1}^k$ that contains $k$ mean and covariance parameters of single Gaussian distributions)
$$
p(x|\theta)=\sum_{i=1}^k \pi_i \mathcal{N}(\boldsymbol{\mu}_i,\Sigma_i)
$$
Note that any set of probability distributions on the same domain can be combined to such a mixture model. Note again that $s$ is an unobserved discrete random variable, i.e. we model data being generated from some weighted combination of baseline distributions. Interesting problems now are
Learning the weights $\text{Pr}(s=i)=\pi_i$ from data
Learning the parameters $\theta$ from data for a fixed family of $x_i|s_i$, for example for the GMM
Using the learned model (which is a density estimate) for clustering or classification
All of these problems are in the context of unsupervised learning since the algorithm only sees the plain data and no information on its structure.
Expectation Maximisation
<a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a> is a powerful method to learn any form of latent models and can be applied to the Gaussian mixture model case. Standard methods such as Maximum Likelihood are not straightforward for latent models in general, while EM can almost always be applied. However, it might converge to local optima and does not guarantee globally optimal solutions (this can be dealt with with some tricks as we will see later). While the general idea in EM stays the same for all models it can be used on, the individual steps depend on the particular model that is being used.
The basic idea in EM is to maximise a lower bound, typically called the free energy, on the log-likelihood of the model. It does so by repeatedly performing two steps
The E-step optimises the free energy with respect to the latent variables $s_i$, holding the parameters $\theta$ fixed. This is done via setting the distribution over $s$ to the posterior given the used observations.
The M-step optimises the free energy with respect to the parameters $\theta$, holding the distribution over the $s_i$ fixed. This is done via maximum likelihood.
It can be shown that this procedure never decreases the likelihood and that stationary points (i.e. neither E-step nor M-step produce changes) of it corresponds to local maxima in the model's likelihood. See references for more details on the procedure, and how to obtain a lower bound on the log-likelihood. There exist many different flavours of EM, including variants where only subsets of the model are iterated over at a time. There is no learning rate such as step size or similar, which is good and bad since convergence can be slow.
Mixtures of Gaussians in Shogun
The main class for GMM in Shogun is <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html">CGMM</a>, which contains an interface for setting up a model and sampling from it, but also to learn the model (the $\pi_i$ and parameters $\theta$) via EM. It inherits from the base class for distributions in Shogun, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Distribution.html">Distribution</a>, and combines multiple single distribution instances to a mixture.
We start by creating a GMM instance, sampling from it, and computing the log-likelihood of the model for some points, and the log-likelihood of each individual component for some points. All these things are done in two dimensions to be able to plot them, but they generalise to higher (or lower) dimensions easily.
Let's sample, and illustrate the difference between knowing and not knowing the latent variable that indicates the component.
End of explanation
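The E-step/M-step recipe described above is compact enough to write out directly. The following is an illustrative NumPy/SciPy sketch of a single EM iteration for a GMM; it is not Shogun code and not part of the original notebook.
```Python
# Illustrative sketch of one EM iteration for a GMM (plain NumPy/SciPy, not Shogun).
# X: (n, d) data; pis: (k,) weights; mus: (k, d) means; covs: (k, d, d) covariances.
from scipy.stats import multivariate_normal

def em_step(X, pis, mus, covs):
    n, k = len(X), len(pis)
    # E-step: responsibilities r[i, j] = Pr(s = j | x_i) under the current parameters
    r = np.column_stack([pis[j] * multivariate_normal.pdf(X, mus[j], covs[j]) for j in range(k)])
    r /= r.sum(axis=1, keepdims=True)
    # M-step: maximum-likelihood updates with the responsibilities held fixed
    nk = r.sum(axis=0)
    pis_new = nk / n
    mus_new = np.dot(r.T, X) / nk[:, None]
    covs_new = np.array([np.dot((r[:, j, None] * (X - mus_new[j])).T, X - mus_new[j]) / nk[j]
                         for j in range(k)])
    return pis_new, mus_new, covs_new
```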
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=sg.GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=np.zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=np.zeros((num_components, dimension, dimension))
covs[0]=np.array([[2, 1.3],[.6, 3]])
covs[1]=np.array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=np.array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=np.array([0.5, 0.3, 0.2])
gmm.put('coefficients', weights)
Explanation: Set up the model in Shogun
End of explanation
# now sample from each component seperately first, the from the joint model
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(np.random.rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=np.zeros(num_components)
w[i]=1.
gmm.put('coefficients', w)
# sample and plot (TODO fix interface to have loop within)
X=np.array([gmm.sample() for _ in range(num_samples)])
plt.plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% elipsoid for current component
plt.gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
_=plt.title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# since we used a hack to sample from each component
gmm.put('coefficients', weights)
Explanation: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Distribution.html">Distribution</a> class in Shogun allows to sample from it (if implemented)
End of explanation
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=np.linspace(-10,10, resolution)
Ys=np.linspace(-8,6, resolution)
pairs=np.asarray([(x,y) for x in Xs for y in Ys])
D=np.asarray([gmm.cluster(pairs[i])[3] for i in range(len(pairs))]).reshape(resolution,resolution)
plt.figure(figsize=(18,5))
plt.subplot(1,2,1)
plt.pcolor(Xs,Ys,D)
plt.xlim([-10,10])
plt.ylim([-8,6])
plt.title("Log-Likelihood of GMM")
plt.subplot(1,2,2)
plt.pcolor(Xs,Ys,np.exp(D))
plt.xlim([-10,10])
plt.ylim([-8,6])
_=plt.title("Likelihood of GMM")
Explanation: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Distribution.html">Distribution</a> interface, including the mixture.
End of explanation
# sample and plot (TODO fix interface to have loop within)
X=np.array([gmm.sample() for _ in range(num_max_samples)])
plt.plot(X[:,0], X[:,1], "o")
_=plt.title("Samples from GMM")
Explanation: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice: Someone gives you a bunch of data with no labels attached to it all all. Our job is now to find structure in the data, which we will do with a GMM.
End of explanation
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
feat=sg.create_features(X.T)
gmm_est=sg.GMM(num_components)
gmm_est.set_features(feat)
# learn GMM
gmm_est.train_em()
return gmm_est
Explanation: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
End of explanation
component_numbers=[2,3]
# plot true likelihood
D_true=np.asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
plt.figure(figsize=(18,5))
plt.subplot(1,len(component_numbers)+1,1)
plt.pcolor(Xs,Ys,np.exp(D_true))
plt.xlim([-10,10])
plt.ylim([-8,6])
plt.title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=np.asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
plt.subplot(1,len(component_numbers)+1,n+2)
plt.pcolor(Xs,Ys,np.exp(D_est))
plt.xlim([-10,10])
plt.ylim([-8,6])
_=plt.title("Estimated likelihood for EM with %d components"%component_numbers[n])
Explanation: So far so good, now lets plot the density of this GMM using the code from above
End of explanation
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=sg.Gaussian.obtain_from_generic(gmm.get_component(i))
plt.gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
plt.figure(figsize=(18,5))
plt.subplot(1, len(component_numbers)+1, 1)
plt.plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color="blue")
plt.title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
plt.subplot(1, len(component_numbers)+1, i+2)
plt.plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=plt.title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
Explanation: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three comonents give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KMeans.html">KMeans</a> is used to initialise the cluster centres if not done by hand, using a random cluster initialisation) that the upper two Gaussians are grouped, re-run for a couple of times to see this. This illustrates how EM might get stuck in a local minimum. We will do this below, where it might well happen that all runs produce the same or different results - no guarantees.
Note that it is easily possible to initialise EM via specifying the parameters of the mixture components as did to create the original model above.
One way to decide which of multiple convergenced EM instances to use is to simply compute many of them (with different initialisations) and then choose the one with the largest likelihood. WARNING Do not select the number of components like this as the model will overfit.
End of explanation
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
clusters=np.asarray([np.argmax(gmm_est.cluster(x)[:gmm.get_num_components()]) for x in X])
# visualise points by cluster
for i in range(gmm.get_num_components()):
indices=clusters==i
plt.plot(X[indices,0],X[indices,1], 'o', color=colors[i])
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
plt.figure(figsize=(18,5))
plt.subplot(121)
cluster_and_visualise(gmm)
plt.title("Clustering under true GMM")
plt.subplot(122)
cluster_and_visualise(gmm_est)
_=plt.title("Clustering under estimated GMM")
Explanation: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
End of explanation
plt.figure(figsize=(18,5))
for comp_idx in range(num_components):
plt.subplot(1,num_components,comp_idx+1)
# evaluated likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=sg.Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=np.asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=plt.get_cmap("jet")
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plt.plot(X[j,0], X[j,1] ,"o", color=color)
plt.title("Data coloured by likelihood for component %d" % comp_idx)
Explanation: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here: even the model under which the data was generated will not cluster the data correctly if the data is overlapping. This is due to the fact that the cluster with the largest probability is chosen. This doesn't allow for any ambiguity. If you are interested in cases where data overlaps, you should always look at the log-likelihood of the point for each cluster and consider taking into acount "draws" in the decision, i.e. probabilities for two different clusters are equally large.
Below we plot all points, coloured by their likelihood under each component.
End of explanation
# compute cluster index for every point in space
D_est=np.asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
plt.pcolor(Xs,Ys,D_est)
Explanation: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering.
End of explanation |
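For the overlap issue discussed above one can go a step beyond the hard argmax and compute soft assignment weights. A minimal sketch (not in the original notebook), reusing the per-component values returned by cluster() exactly as in the code above:
```Python
# Sketch: soft (posterior-style) responsibilities from the per-component log-likelihoods
log_liks = np.asarray([gmm_est.cluster(x)[:num_components] for x in X])
log_liks -= log_liks.max(axis=1, keepdims=True)   # stabilise the exponentials
resp = np.exp(log_liks)
resp /= resp.sum(axis=1, keepdims=True)
hard = resp.argmax(axis=1)                        # identical to the argmax clustering used above
```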
4,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Adaptive Median Filter
Step1: Original Image (converted to grayscale)
Step2: Output with Python's native Median Filter function
Step3: As shown from the above print, AMF results in almost twice as higher deviation than the native median filter technique. | Python Code:
Image.fromarray(output)
Explanation: Using Adaptive Median Filter
End of explanation
Image.fromarray(grayscale_image)
Explanation: Original Image (converted to grayscale)
End of explanation
native_output = image_org.filter(ImageFilter.MedianFilter(size = 3))
native_output
deviation_native = np.sqrt(np.sum(np.square(grayscale_image-np.array(rgb2gray(np.array(native_output))))))
deviation_original = np.sum(np.square(grayscale_image-np.array(output)))
print("Deviation from the original salt and pepper images:")
print("Deviation via Median Filter (built-in): ", deviation_native)
print("Deviation via Adaptive Median Filter: ", deviation_original)
print(f"Percent difference b/w deviations: {100*(deviation_original - deviation_native)/deviation_original}%")
Explanation: Output with Python's native Median Filter function
End of explanation
### Therefore, the built-in technique is nowhere near as good as the Adaptive Median Filter technique.
Explanation: As shown from the above print, AMF results in almost twice as higher deviation than the native median filter technique.
End of explanation |
4,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Divergences as a Function of $\mu_q$
Let us start by simply varying $\mu_q$ and seeing the result. We will hold $\sigma_q$ fixed to $\sigma_p$ and $\alpha = -0.5$.
Step1: Derivative of Divergences as a function of $\mu_q$
As before we will vary $\mu_q$ but this time we are evaluating the derivatives of the divergence. We will hold $\sigma_q$ fixed to $\sigma_p$ and $\alpha = -0.5$.
Step2: Finding the Zeros of the Derivatives
This is a little complicated becauase every step of the optimization algorithm requires the calculation of a quadrature. Error propagation could be an issue. To build confidence in the results we will find the location of the extrema as we increase the resolution around the extrema. If the value becomes more and more precise than we might believe that it is converging.
Step3: Examining the Divergences as a function of $\alpha$
This time we will fix both the means $\mu_q = \mu_p$ and variances $\sigma_q = \sigma_p$.
Step4: Derivatives of Divergences vs alpha
This is interesting but what we really care about is where $\alpha$ changes the extrema in anyway.
Step5: Cost Surface for the Convolution
Let us now examine the full function $f(\mu, \Sigma)$ for our given toy problem. We can explore how it changes as we modify the convolution with and without the entropy term. | Python Code:
a = -0.7
j_vals = []
kl_vals = []
mus = np.linspace(0,1,100)
for mu in mus:
j_vals.append(J(mu,p_sig,a)[0])
kl_vals.append(KL(mu,p_sig)[0])
fig = plt.figure(figsize=(15,5))
p_vals = p(mus)
plt.plot(mus, p_vals/p_vals.max(), label="$p(x)$")
#plt.plot(mus, j_vals/np.max(np.abs(j_vals)), label='$J$')
plt.plot(mus, j_vals, label='$J$')
plt.plot(mus, kl_vals/np.max(np.abs(kl_vals)), label='$KL$')
plt.title("Divergences with alpha = {}".format(a))
plt.xlabel('$\mu$')
plt.legend()
plt.show()
Explanation: Divergences as a Function of $\mu_q$
Let us start by simply varying $\mu_q$ and seeing the result. We will hold $\sigma_q$ fixed to $\sigma_p$ and $\alpha = -0.5$.
End of explanation
dj_vals = []
dkl_vals = []
mus = np.linspace(0,1.0,100)
for mu in mus:
dj_vals.append(dJ_dmu(mu,p_sig,a)[0])
dkl_vals.append(dKL_dmu(mu,p_sig)[0])
fig = plt.figure(figsize=(15,5))
p_vals = p(mus)
plt.plot(mus, p_vals/p_vals.max(), label="$p(x)$")
#plt.plot(mus, dj_vals/np.max(np.abs(dj_vals)), label='$\partial J/\partial \mu_q$')
plt.plot(mus, dj_vals, label='$\partial J/\partial \mu_q$')
plt.plot(mus, dkl_vals/np.max(np.abs(dkl_vals)), label='$\partial KL/\partial \mu_q$')
plt.title("Derivative of Divergences with alpha = {}".format(a))
plt.xlabel('$\mu$')
plt.legend()
plt.show()
Explanation: Derivative of Divergences as a function of $\mu_q$
As before we will vary $\mu_q$ but this time we are evaluating the derivatives of the divergence. We will hold $\sigma_q$ fixed to $\sigma_p$ and $\alpha = -0.5$.
End of explanation
a = -0.7
j_optims = []
j_maxErrs = []
kl_optims = []
kl_maxErrs = []
tot_mus_list = [1000,2000,3000,4000,5000]
for tot_mus in tot_mus_list:
print("Operating on {} mus...".format(tot_mus))
dj_vals = []
dj_errs = []
dkl_vals = []
dkl_errs = []
mus = np.linspace(0.4,0.6,tot_mus)
for mu in mus:
j_quad, j_err = dJ_dmu(mu,p_sig,a)
dj_vals.append(j_quad)
dj_errs.append(j_err)
kl_quad, kl_err = dKL_dmu(mu,p_sig)
dkl_vals.append(kl_quad)
dkl_errs.append(kl_err)
j_optims.append(mus[np.argmin(np.abs(dj_vals))])
j_maxErrs.append(np.max(dj_errs))
kl_optims.append(mus[np.argmin(np.abs(dkl_vals))])
kl_maxErrs.append(np.max(dkl_errs))
fig = plt.figure(figsize=(15,5))
ax1 = fig.add_subplot(121)
ax1.set_title("Error")
ax1.set_xlabel("Number of $\mu$s in Calcuation")
ax1.plot(tot_mus_list, j_maxErrs, label="max $J$-err")
ax1.plot(tot_mus_list, kl_maxErrs, label="max $KL$-err")
ax1.legend()
ax2 = fig.add_subplot(122)
ax2.set_title("$|\max{J}-\min{KL}|$")
ax2.set_xlabel("Number of $\mu$s in Calcuation")
ax2.plot(tot_mus_list, np.abs(np.array(j_optims)-np.array(kl_optims)), label="error")
ax2.legend()
Explanation: Finding the Zeros of the Derivatives
This is a little complicated because every step of the optimization algorithm requires the calculation of a quadrature. Error propagation could be an issue. To build confidence in the results we will find the location of the extrema as we increase the resolution around the extrema. If the value becomes more and more precise then we might believe that it is converging.
End of explanation
j_vals = []
kl_vals = []
alphas = np.linspace(-3,0.999,1000)
for a in alphas:
j_vals.append(J(p_mean,p_sig,a)[0])
kl_vals.append(KL(p_mean,p_sig)[0])
fig = plt.figure(figsize=(15,5))
plt.plot(alphas, j_vals, label='$J$')
plt.plot(alphas, kl_vals, label='$KL$')
plt.title("Divergences vs alpha")
plt.xlabel('alpha')
plt.legend()
plt.show()
Explanation: Examining the Divergences as a function of $\alpha$
This time we will fix both the means $\mu_q = \mu_p$ and variances $\sigma_q = \sigma_p$.
End of explanation
dj_vals = []
dkl_vals = []
alphas = np.linspace(-3,0.999,1000)
for a in alphas:
dj_vals.append(dJ_dmu(p_mean,p_sig,a)[0])
dkl_vals.append(dKL_dmu(p_mean,p_sig)[0])
fig = plt.figure(figsize=(15,5))
plt.plot(alphas, dj_vals, label='$\partial J/\partial \mu_q$')
plt.plot(alphas, dkl_vals, label='$\partial KL/\partial \mu_q$')
plt.title("Derivative of Divergences vs alpha")
plt.xlabel('alpha')
plt.legend()
plt.show()
Explanation: Derivatives of Divergences vs alpha
This is interesting but what we really care about is where $\alpha$ changes the extrema in anyway.
End of explanation
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
mu_min = 0.4
mu_max = 0.6
num_mus = 1000
mus = np.linspace(mu_min, mu_max, num_mus)
sig_min = 0.0001
sig_max = 0.01
num_sigs = 1000
sigmas = np.linspace(sig_min, sig_max, num_sigs)
mu,sigma = np.meshgrid(mus,sigmas)
vals = np.array((mu,sigma))
z = np.ndarray(mu.shape)
for i in range(len(mu[0])):
for j in range(len(sigma[0])):
m,s = vals[:,i,j]
z[i,j] = J(m,s,-0.7)[0]
fig = plt.figure(figsize=(16,12))
ax = fig.gca(projection='3d')
ax.plot_surface(mu, sigma, z, rstride=5, cstride=5, alpha=0.3)
cset = ax.contour(mu, sigma, z, zdir='z', offset=-0.1, cmap=cm.coolwarm)
cset = ax.contour(mu, sigma, z, zdir='x', offset=mu_min, cmap=cm.coolwarm)
cset = ax.contour(mu, sigma, z, zdir='y', offset=sig_max, cmap=cm.coolwarm)
ax.set_xlabel('$\mu$')
ax.set_xlim(mu_min, mu_max)
ax.set_ylabel('$\sigma$')
ax.set_ylim(sig_min, sig_max)
ax.set_zlabel('$f$')
ax.set_zlim(-0.1,100)
plt.show()
z[0,0]
len(np.where(np.isnan(z))[0])
Explanation: Cost Surface for the Convolution
Let us now examine the full function $f(\mu, \Sigma)$ for our given toy problem. We can explore how it changes as we modify the convolution with and without the entropy term.
End of explanation |
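A possible alternative to the grid search used above (a sketch, not in the original): hand the derivative straight to a bracketing root finder, assuming dJ_dmu and dKL_dmu return (value, error) tuples as they do above.
```Python
# Sketch: locate the zero of each derivative with a bracketing root finder
from scipy.optimize import brentq

mu_J = brentq(lambda mu: dJ_dmu(mu, p_sig, a)[0], 0.4, 0.6)
mu_KL = brentq(lambda mu: dKL_dmu(mu, p_sig)[0], 0.4, 0.6)
print(mu_J, mu_KL)
```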
4,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 3
Imports
Step2: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string
Step4: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
Explanation: Algorithms Exercise 3
Imports
End of explanation
def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose values
        are the probabilities of those characters.
    """
    # YOUR CODE HERE
    n = len(s)
    counts = {}
    for char in s:
        counts[char] = counts.get(char, 0) + 1
    return {char: count / float(n) for char, count in counts.items()}
print(char_probs('addee'))
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character count by the total number of characters to compute the normalized probabilities.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    # YOUR CODE HERE
    p = np.array(list(d.values()))
    p = p[p > 0]  # ignore zero-probability entries so log2 is well defined
    return -np.sum(p * np.log2(p))
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \Sigma_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
def show_entropy(s='abcd'):
    if s:
        print(entropy(char_probs(s)))
interact(show_entropy, s='abcd')
assert True # use this for grading the pi digits histogram
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation |
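As an aside (not required by the exercise), the counting step in char_probs can also be written with collections.Counter:
```Python
# Alternative sketch using collections.Counter
from collections import Counter

def char_probs_counter(s):
    counts = Counter(s)
    total = float(len(s))
    return {char: count / total for char, count in counts.items()}
```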
4,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: The Landscape of the Major Food Staples in Ghana
The dataset used in this project is a CSV file of the Global Food Prices Database by WFP - World Food Programme.
Step2: The regions where there are market activities - alphabetical order
Step3: List of market locations per region
Step4: The main staples in Ghana | Python Code:
# Open the file and read its content.
raw_data = open('WFPVAM_FoodPrices_24-01-2017.csv', 'r').read()
# Split the raw_data on every newline.
raw_data = raw_data.split('\n')
# Take off the headers
raw_data_no_header = raw_data[1:]
# Make a list of lists of the raw_data_no_header
staples_data = []
for food_info in raw_data_no_header:
info = food_info.split(',')
staples_data.append(info)
def load_data_by_country(country):
    """Fetch data based on a specific country."""
country_data = []
for info in staples_data:
if country in info:
country_data.append(info)
return(country_data)
# Fetch all data on Ghana
ghana_food_stapes = load_data_by_country(country="Ghana")
# Regions where there are market activities
regions = []
for info in ghana_food_stapes:
current_region = info[3]
if current_region not in regions:
regions.append(current_region)
Explanation: The Landscape of the Major Food Staples in Ghana
The dataset used in this project is a CSV file of the Global Food Prices Database by WFP - World Food Programme.
End of explanation
for region in sorted(regions):
print("{} Region".format(region))
# Market locations across the country
markets = []
for info in ghana_food_stapes:
current_market = info[5]
if current_market not in markets:
markets.append(current_market)
# Markets per Regions
markets_in_regions = {}
for region in regions:
current_market = []
for market in markets:
for info in ghana_food_stapes:
if region == info[3] and market == info[5]:
if market not in current_market:
current_market.append(market)
markets_in_regions.update({region: current_market})
Explanation: The regions where there are market activities - alphabetical order:
End of explanation
# Loop through the markets_in_regions dictionary
for region, markets in sorted(markets_in_regions.items()):
print("\n{} REGION".format(region.upper()))
if len(markets) > 1:
# If market more than 1 make location plural
print(" {} Market Locations:".format(len(markets)))
elif len(markets) == 1:
# If only 1 market, make location singular
print(" {} Market Location:".format(len(markets)))
for market in sorted(markets):
print("\t{}".format(market))
Explanation: List of market locations per region
End of explanation
staples = []
for info in ghana_food_stapes:
current_staple = info[7]
if current_staple not in staples:
staples.append(current_staple)
for staple in sorted(staples):
print(staple)
Explanation: The main staples in Ghana
End of explanation |
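A side note (not in the original): the csv module handles quoted fields that contain commas, which a plain str.split(',') would break on.
```Python
# Sketch: read the same file with the csv module instead of manual splitting
import csv

with open('WFPVAM_FoodPrices_24-01-2017.csv', 'r') as f:
    reader = csv.reader(f)
    header = next(reader)
    staples_rows = [row for row in reader]
```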
4,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Morphological operations
Morphology is the study of shapes. In image processing, some simple operations can get you a long way. The first things to learn are erosion and dilation. In erosion, we look at a pixel’s local neighborhood and replace the value of that pixel with the minimum value of that neighborhood. In dilation, we instead choose the maximum.
Step1: The documentation for scikit-image's morphology module is
here.
Importantly, we must use a structuring element, which defines the local
neighborhood of each pixel. To get every neighbor (up, down, left, right, and
diagonals), use morphology.square; to avoid diagonals, use
morphology.diamond
Step2: The central value of the structuring element represents the pixel being considered, and the surrounding values are the neighbors
Step3: and
Step4: and
Step5: Erosion and dilation can be combined into two slightly more sophisticated operations, opening and closing. Here's an example
Step6: What happens when run an erosion followed by a dilation of this image?
What about the reverse?
Try to imagine the operations in your head before trying them out below.
Step7: Exercise
Step8: Remove the smaller objects to retrieve the large galaxy. | Python Code:
import numpy as np
from matplotlib import pyplot as plt, cm
import skdemo
plt.rcParams['image.cmap'] = 'cubehelix'
plt.rcParams['image.interpolation'] = 'none'
image = np.array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)
plt.imshow(image)
Explanation: Morphological operations
Morphology is the study of shapes. In image processing, some simple operations can get you a long way. The first things to learn are erosion and dilation. In erosion, we look at a pixel’s local neighborhood and replace the value of that pixel with the minimum value of that neighborhood. In dilation, we instead choose the maximum.
End of explanation
from skimage import morphology
sq = morphology.square(width=3)
dia = morphology.diamond(radius=1)
disk = morphology.disk(radius=30)
skdemo.imshow_all(sq, dia, disk)
Explanation: The documentation for scikit-image's morphology module is
here.
Importantly, we must use a structuring element, which defines the local
neighborhood of each pixel. To get every neighbor (up, down, left, right, and
diagonals), use morphology.square; to avoid diagonals, use
morphology.diamond:
End of explanation
skdemo.imshow_all(image, morphology.erosion(image, sq), shape=(1, 2))
Explanation: The central value of the structuring element represents the pixel being considered, and the surrounding values are the neighbors: a 1 value means that pixel counts as a neighbor, while a 0 value does not. So:
End of explanation
skdemo.imshow_all(image, morphology.dilation(image, sq))
Explanation: and
End of explanation
skdemo.imshow_all(image, morphology.dilation(image, dia))
Explanation: and
End of explanation
image = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], np.uint8)
plt.imshow(image)
Explanation: Erosion and dilation can be combined into two slightly more sophisticated operations, opening and closing. Here's an example:
End of explanation
skdemo.imshow_all(image, morphology.opening(image, sq)) # erosion -> dilation
skdemo.imshow_all(image, morphology.closing(image, sq)) # dilation -> erosion
Explanation: What happens when we run an erosion followed by a dilation of this image?
What about the reverse?
Try to imagine the operations in your head before trying them out below.
End of explanation
from skimage import data, color
hub = color.rgb2gray(data.hubble_deep_field()[350:450, 90:190])
plt.imshow(hub)
Explanation: Exercise: use morphological operations to remove noise from a binary image.
End of explanation
disk = morphology.disk(radius=8)
gal = morphology.opening(hub, disk)
plt.imshow(gal)
skdemo.imshow_with_histogram(gal)
gal_selector = gal > 50
# use numpy boolean indexing to estimate histogram
# of brightness intensity on isolated galaxy
skdemo.imshow_all(hub, gal_selector)
f = plt.figure()
plt.hist(hub[gal_selector], bins=50);
Explanation: Remove the smaller objects to retrieve the large galaxy.
End of explanation |
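One more possibility for the exercise (a sketch, not the lecture's solution): threshold, label the connected components, and drop the small ones. The threshold below is an arbitrary assumption.
```Python
# Sketch: remove small objects after labelling; the threshold is an arbitrary assumption
from skimage import measure

binary = hub > hub.mean()
labels = measure.label(binary)
cleaned = morphology.remove_small_objects(labels, min_size=100) > 0
skdemo.imshow_all(hub, cleaned)
```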
4,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Breakable Commitments...
Code to generate figures
Karna Basu and Jonathan Conning
Department of Economics, Hunter College and The Graduate Center, City University of New York
Step1: Abstract
Step2: The model
Consider the following simple workhorse three-period consumption smoothing model where consumers' preferences are summarized by constant relative risk aversion (CRRA) utility. In any period the consumer's instantaneous utility is given by $u(c)=c^{1-ρ}/(1-ρ)$. Over three period the agent maximizes utility
$$ U(c_0, c_1, c_2) =u(c_0) + \beta [\delta u(c_1) + \delta^2 u(c_2)]$$
This is a version of the classic $\beta-\delta$ quasi-hyperbolic discounting model. We assume the consumer has an autarky income stream ${y}={y_{0},y_{1},y_{2}}$ which defines autarky or reservation utility $ \overline{u}(y) = U(y₀,y₁,y₂)$ but in general will prefer a smoother consumption profile from contracting on financial markets.
Consumption smoothing with and without commitment services
Competitive full-commitment
Assume at first that financial intermediaries compete to offer contracts to a client.
Let's assume at first that a financial intermediary can offer a multiperiod contract and can -- at zero cost -- credibly commit to not renegotiating the terms of that contract. For the moment as well we will assume that this contract can also be made exclusive in the sense that we can stop a new bank from offering a more attractive additional or alternative contract to the period 1 self. We'll relax both assumptions shortly.
The offered contract will maximize the period-0 self's present value of utility $$ U(c_{0},c_{1},c_{2})=u(c_{0})+\beta \left[ \delta u(c_{1})+\delta ^{2}u(c_{2})\right] $$
subject to the bank's zero profit condition or, same thing, consumer budget constraint
Step3: Continuation utility
Step4: Why is the utility penalty so small for widening the variance this much?
Step5: For the CRRA case it's easy to find closed form solutions
Step6: The following function solves for period zero's optimal 'full commitment contract' using the equations above
Step7: The optimal contract for these parameters is
Step8: If the consumer had an income stream $ y =(100, 100, 100)$ then we'd interpret this as a borrowing contract, as the period 0 consumer would want to borrow 50 in period zero and then balance repayments between period 1 and 2.
Saving/repayments (positives) and borrowing/dissaving (negatives) in each period would be written
Step9: If on the other hand the consumer had an income stream $ y =(200, 50, 50)$ then we'd interpret this as a savings contract, with the consumer saving 50 in period zero to be shared equally between period 1 and 2 consumption.
refinance and self-control
We recast this slightly to focus on the role of savings. Period 0 self (henceforth 'zero-self') chooses period zero savings $s_0$ (and by implication period 0 consumption $c_0 = y_0 - s_0$). In period 1 his later 'one-self' reacts by choosing her own preferred period 1 savings $s_1$ (or, same thing $c_1$ and by implication $c_2$).
We need to find one-self's 'reaction function'. They choose $c_1$ to maximize
$$u(c_{1})+\beta \delta u(c_{2})$$
subject to
$$c_1(1+r)+c_2 =y_1 (1+r) +y_2+s_0 (1+r)^2$$
The FOC give us
$$u'(c_{1})=\beta \delta(1+r) {u'(c_2)} $$
which for this CRRA case gives us
$$c_{2} = [\beta \delta (1+r) ]^\frac{1}{\rho} c_1$$
Substituting this into the intertemporal budget constraint above we can solve for the reaction function
Step10: Best response function, and Stackelberg
Step11: Two-quadrant plot
Step12: Now the definition of the Contract class.
Step15: Plot of Zero self's utility as a function of $s_0$
Step18: Just like above but with $s_0$ as the argument
Step20: $$ \frac{s_0+y_1 +y_2}{1 + \beta ^\frac{1}{\rho} }
= \Lambda^\frac{1}{\rho}
(y_0-s_0) $$
where $\Lambda = \frac{\beta (1+\beta^\frac{1-\rho}{\rho})}{1+\beta^\frac{1}{\rho}}$
So we can solve for $s_0$ as
Step21: Let's plot an indifference curve in c1-c2 space. For example if the agent in autarky has income ${y_{0},y_{1},y_{2}}$ and no access to saving or borrowing then (from period 0 self's perspective) entering period 1 they have reservation utility $u(y_{1})+\delta u(y_{2})=\overline{u}_{0}$. But when period 1 rolls around their preferences change. From period 1 self's perspective they have reservation utility $u(y_{1})+\beta \delta u(y_{2})=\overline{u}_{1}$.
Exclusive competitive contracts
The contract class defines a generic contract which holds consumption stream objects of the form $\left( c_{0},c_{1},c_{2}\right)$ and allows a few manipulations. Now comes the job of solving for optimal contracts and we do this with a CompetitiveContract class which inherits the attributes and methods of the more generic contract class and then adds a few methods such as calculating the optimal contract full commitment and renegotiation-proof contracts in the competitive lender case. Note that the methods have the same names as the mon_contract class but some behave differently, reflecting the reversed objective and constraint.
Full-commitment contracts
When the competitive lender can commit to not-renegotiating the contract (i.e. to not pandering to the period-1 self's desire to renegotiate period-0's contract) and the contracts are exclusive (so no third party lender will enter to offer such renegotiation either) the contract solves
$$\max \ u\left( c_{0}\right) +\beta \left[ \delta u\left( c_{1}\right) +\delta ^{2}u\left( c_{2}\right) \right] $$
subject to the zero profit constraint
$$s.t. (y_{0}-c_{0})+\frac{(y_{1}-c_{1})}{(1+r)}+\frac{(y_{2}-c_{2})}{(1+r)^{2}} \geq 0$$
When $\delta =\frac{1}{(1+r)}$ for the CRRA case an optimum will set $c_{1}=c_{2}=\overline{c}$ and $\overline{c}=\beta ^{\frac{1}{\rho }}c_{0}$ from which a closed form solution can be easily found (see fcommit() function below for formulas).
Note that we are here assuming that the consumer has no choice but to consume their income stream $y$ under autarky. This would be true if the agent does not have access to any 'own savings' technologies. Later below we see how things change only slightly when we allow them to use own savings to create a slightly smoother autarky consumption stream (not perfectly smooth because they cannot overcome their self-control problems on their own).
Renegotiation-proof contracts
[THIS EXPLANATION HAS NOT BEEN UPDATED YET]
The agent's period-1-self's preferences differ from those of his period 0 self so they will often want to renegotiate any contract their period 0 self contracted, and the bank can profit from this renegotiation so long as its renegotiation cost $\kappa$ is low. In particular if the period-0-self agreed to contract $\left( \bar{c}_{0},\bar{c}_{1},\bar{c}_{2}\right)$ a competitive firm would offer to renegotiate the remaining $(\bar{c}_{1},\bar{c}_{2})$ to contract $\left( c_{1}^{r},c_{2}^{r}\right)$ chosen to maximize
$$\max \ \ u(c_{1})+\beta \delta u(c_{2}) $$
subject to $$(y_{1}-c_{1})+\frac{(y_{2}-c_{2})}{(1+r)} \geq 0$$
We can show from the agent's first order conditions for the CRRA case that a renegotiated contract will always satisfy $c_{2}=\beta ^{\frac{1}{\rho }}c_{1}$ and indeed for CRRA we get the closed form
Step22: Full commitment contract
Step23: The bank does not profit from this type of opportunistic renegotiation, if we assume 'competition' at time of renegotiation although one might argue that the relation is ex-ante competitive but ex-post favors the bank.
A sophisticated consumer will however anticipate this type of opportunistic renegotiation and only agree to a renegotiation-proof contract.
As expected the bank's profits are lowered due to its inability to commit to not renegotiate.
Here's a plot.
Step24: Optimal contract when renegotiation cost $\kappa $ >0
Plot to explore how the renegotiation cost $\kappa $ affects the terms of the contract and firm profits
Step25: At lower renegotiation costs the bank is forced to offer less consumption smoothing in periods 1 and 2 as a way to credibly commit to limit their gains to renegotiation with a period 1 self. Hence bank profits rise with their ability to commit to incur a renegotiation cost $\kappa$
We haven't plotted $c_{0}$ for each $\kappa$ but that's because it varies less relative to $c_{1}, c_{2}$ and stays well above the full commitment consumption smoothing level. The following shows a non-monotonic relation though we should remember this is varying very little.
Step26: The choice to become a commercial non-profit
Modeling the non-profit
The no-renegotiation constraint has two parts. A pure for-profit captures fraction $\alpha = 1$ of profits and faces renegotiation cost $h(1)$, while a
not-for-profit of type $\alpha < 1$ captures only fraction $\alpha$ of profits but faces a higher renegotiation cost $h(\alpha) > h(1)$. More generally a non-profit of type $\alpha$ has a no-renegotiation constraint of the form
$$\alpha \left[ \Pi ^{R}-\Pi \right] \geq h(\alpha )$$
To be specific here let's model this as
$$h(\alpha )=\kappa \left( 1-\alpha \right) $$
So that at $\alpha =1$ there is no cost to renegotiation and at $0< \alpha <1$ there is a non-negative non-pecuniary cost of up to $\kappa$. The constraint can then be written as
$$\left[ \Pi ^{R}-\Pi \right] \geq C(\alpha )=\frac{h(\alpha )}{\alpha }$$
Step27: 'Commercial' non-profits
A 'pure' for-profit (with $\alpha$=1.0) earns a reduced (possibly negative) profit due to its inability to commit. Its profit is seen in the plot as the height of the horizontal line.
Any non-profit with $\alpha$ above about 0.4 and below 1.0 can better commit to not renegotiate a larger set of contracts and therefore can offer a more profitable renegotiation-proof contract. Even though they capture only fraction $\alpha$ of those profits, the take home profits exceed the profits of the pure for-profit.
Step28: The figure above compares what the customer can get (present discounted utility of period 0 self) from autarky compared to what he could get contracting in a situation with competition and exclusive contracts.
In the particular example ($\beta = 0.5, \rho=0.75, y=[130,85, 85]$) the autarky consumption bundle is rather close to what could be offered via consumption smoothing, so the total surplus to be divided is not that large. The pure for-profit firm offers a renegotiation-proof contract that does such a poor smoothing job that the consumer prefers to stay in autarky. However a commercial non-profit with alpha below ~ 0.8 offers a smoother contract and hence gains to trade.
Now as presently modeled that non-profit will of course get zero profits (80% of zero!). We can model instead situations where at any period 1 renegotiation it's the consumer who gives up all surplus, since the assumption of exclusive contracts means the period 1 self will be willing to give up quite a bit. Or maybe they Nash bargain. These cases might be more realistic.
We'll get to these in a moment but first let's look at how the above situation depends on the initial y vector.
Step29: Loan, repayment and PVU breakdown by periods as function of alpha
(to be completed...results below are from monopoly case)
Step30: The inability to commit means the renegotiation proof contract doesn't smooth consumption very well for the consumer. This ends up hurting the bank, since they must now 'compensate' the consumer for the higher variance of consumption if the participation constraint is still to be met.
The code that follows produces a grid of subplots to illustrate how the results (the relation between $\alpha$ and retained profits) depends on the initial y vector, which in turn also determines whether this will be borrowing or saving.
The role of y
Gains to consumer with different firm types $\alpha$
Even though it earns zero profits a pure for-profit firm's renegotiation-proof contract will offer less consumption smoothing than a firm that, due to its non-profit status, has higher renegotiation costs.
NOTE
Step31: INTERPRETATION
Step32: Modifications when consumer has a home savings option
The above ana
Step33: Other Results
$\beta$ and loan size
Let's plot the relationship between period 0 loan size in a full-commitment contract and the extent of present-bias captured by $\beta$
Step34: Example full commitment contract (and renegotiation with a naive consumer)
Here is an example of the full-commitment contracts a monopolist offers and the contract a monopolist and a naive consumer would renegotiate to from that same full commitment contract (but note that an intermediary who knows they are dealing with a naive consumer would bait them with a different initial contract).
Step35: Scratch play area
3D plots
Step36: Is $c_0$ (and hence net borrowing) higher or lower in renegotiation-proof contracts?
It's going to depend on $\rho$ | Python Code:
%reload_ext watermark
%watermark -u -n -t
Explanation: Breakable Commitments...
Code to generate figures
Karna Basu and Jonathan Conning
Department of Economics, Hunter College and The Graduate Center, City University of New York
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
from ipywidgets import interact,fixed
plt.rcParams['figure.figsize'] = 10, 8
np.set_printoptions(precision=2)
Explanation: Abstract: Important empirical and theoretical literatures have developed around models of procrastination and the struggle for self-control or resistance to present-bias or temptation. A popular modeling strategy is to follow Laibson (1997) in assuming that consumers are present-biased and have time-inconsistent $\beta-\delta$ quasi-hyperbolic preferences. While several papers have analyzed different properties of this model, and variations and extensions have even been employed in calibrated numerical macro models, we are not aware of any papers that explain the model in simple graphical terms.
This note describes the relatively simple mathematical and graphical analysis of the challenges facing a time-inconsistent consumer attempting to smooth consumption over time. Because the sophisticated present-biased quasi-hyperbolic discounter anticipates the ways in which her future self will attempt to renegotiate or refinance the terms of a contract, she acts to choose the terms of the contract anticipating her latter-period self's best reaction. The equilibrium contract is found as the sub-game perfect Nash equilibrium of a Stackelberg game. The equilibrium that the time-inconsistent consumer can achieve on her own will in general deliver less utility than if the period zero consumer could commit their latter selves to sticking to the terms of the contract that the period zero self prefers. This gives rise to the demand for commitment services.
Python Preliminaries
The simulations and diagrams below were written in python. The following code block just imports various libraries and sets a few global parameters.
End of explanation
bb = np.linspace(0.1,0.9, 20)
def BP(p):
bp = bb**(1/p)
bbp =( (bb+bp)/(1+bp)) #**(1/p)
plt.plot(bb, bb,"--")
plt.plot(bb, bbp)
plt.ylim(0,1)
def foo(p):
bp = bb**(1/p)
bbp =( (1+bb**((1-p)/p))/(1+bp))
plt.plot(bb, bbp)
plt.ylim(1,4)
def BBP(p):
bp = bb**(1/p)
bbp =(bb+bp)/(1+bp)
plt.ylim(0,0.3)
plt.axhline(0)
plt.plot(bb,bbp-bb);
interact(BP, p=(0.1,3,0.1))
Explanation: The model
Consider the following simple workhorse three-period consumption smoothing model where consumers' preferences are summarized by constant relative risk aversion (CRRA) utility. In any period the consumer's instantaneous utility is given by $u(c)=c^{1-\rho}/(1-\rho)$. Over three periods the agent maximizes utility
$$ U(c_0, c_1, c_2) =u(c_0) + \beta [\delta u(c_1) + \delta^2 u(c_2)]$$
This is a version of the classic $\beta-\delta$ quasi-hyperbolic discounting model. We assume the consumer has an autarky income stream ${y}={y_{0},y_{1},y_{2}}$ which defines autarky or reservation utility $\overline{u}(y) = U(y_{0},y_{1},y_{2})$ but in general will prefer a smoother consumption profile from contracting on financial markets.
Consumption smoothing with and without commitment services
Competitive full-commitment
Assume at first that financial intermediaries compete to offer contracts to a client.
Let's assume at first that a financial intermediary can offer a multiperiod contract and can -- at zero cost -- credibly commit to not renegotiating the terms of that contract. For the moment as well we will assume that this contract can also be made exclusive in the sense that we can stop a new bank from offering a more attractive additional or alternative contract to the period 1 self. We'll relax both assumptions shortly.
The offered contract will maximize the period-0 self's present value of utility $$ U(c_{0},c_{1},c_{2})=u(c_{0})+\beta \left[ \delta u(c_{1})+\delta ^{2}u(c_{2})\right] $$
subject to the bank's zero profit condition or, same thing, consumer budget constraint:
$$\sum\limits_{t=0}^{2}\frac{\left( y_{t}-c_{t}\right) }{\left( 1+r\right) ^{t}} = 0$$
At the optimal contract $C^{fc}$ the consumer may save or borrow, depending on their initial income stream and the preferred/feasible smoothed consumption stream available from contracting.
The first order conditions for an optimum are:
$$u'(c_0) = \beta \delta (1+r) u'(c_1)$$
$$u'(c_1) = \delta (1+r) u'(c_2)$$
The optimal contract will be the three-period consumption profile that brings the consumer to the highest feasible iso-utility surface (analogous to an indifference curve except in 3 dimensions), and that will be at a point where the iso-utility surface is tangent to the zero-profit hyperplane that cuts through the endowment point $y$.
Rather than try to depict the optimal contract in three-dimensional space, we will employ a simple trick to depict the optimal contract in two-dimensional figures. Since the optimal contract must satisfy the consumer budget or zero-profit constraint, if we know $c_0$ and $c_1$ the value of $c_2$ is determined from the budget constraint.
For the CRRA case these can be rewritten as:
$$c_1 = c_0 [ \beta \delta (1+r) ]^\frac{1}{\rho}$$
$$c_1 = c_2$$
In what follows we'll assume for simplicity and without loss of generality that $\delta = \frac{1}{1+r}$ and furthermore that $r=0$ and hence $\delta = 1$. This simplifies the expressions without changing the essential tradeoffs.
If we substitute the FOC $c_1=c_2$ into the consumer's binding budget constraint (the bank's zero profit condition) the problem can be reduced from three equations (two FOC and the zero profit condition) to two:
$$c_1 = \beta^\frac{1}{\rho} c_0$$
$$ c_1 = \frac{\sum y - c_0}{2}$$
The first equation highlights the period-zero self's present bias -- they want to consume more in period zero than in period one -- while the second summarizes that they want to smooth whatever resources are left for future consumption equally between periods 1 and 2.
Figure 1 below illustrates how the equilibrium contract is determined, drawn for the CRRA case where $\beta=0.5$ and $\rho = 1$ and $\sum y =300$. The first of these two lines (that the MRS between period 0 and period 1 equals the price ratio or interest rate) can be seen as the upward sloping income-expansion line in the rightmost quadrant diagram in $c_0$ and $c_1$ space. The second line, which combines the second FOC and the zero profit condition, is seen as the downward sloping dashed line.
The two dashed lines meet at point $(c_0^{fc}, c_1^{fc})$ in the rightmost quadrant.
The leftmost quadrant is in $c_1$ and $c_2$ space, turned on its side, 90 degrees counterclockwise. The FOC condition ($c_1 = c_2$) is represented by a 45 degree line. We can simply read off $c_2 = c_1$ from this line and the value of $c_1$ determined in the other quadrant, but we should also note that the point of intersection must also satisfy the budget constraint, namely that consumption in periods 1 and 2 cannot exceed the value of the endowment less period zero consumption.
Note on FOC
$$u'(c_0) = \beta u'(c_1)$$
or
$$u'(c_0) = \frac{\beta + \beta^\frac{1}{\rho}}{1+\beta^\frac{1}{\rho}} u'(c_1)$$
The second is always larger for any $\beta$ or $\rho$ which implies that $c_1$
End of explanation
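The following short check is an added illustration (not part of the original code): it intersects the two reduced conditions above, $c_1=\beta^{1/\rho}c_0$ and $c_1=(\sum y - c_0)/2$, for the example parameters used in Figure 1, assuming $\delta=1$ and $r=0$.
import numpy as np
beta, rho, sum_y = 0.5, 1.0, 300.0
c0_fc = sum_y / (1 + 2 * beta**(1 / rho))   # solves the two lines simultaneously
c1_fc = beta**(1 / rho) * c0_fc
print(c0_fc, c1_fc, c1_fc)                  # (c0, c1, c2) = (150.0, 75.0, 75.0)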
β = 0.9
ρ = 0.98
y = 300
def u(c, b = β, p = ρ):
return (c**(1-p))/(1-p)
y/(1+2*β**(1/ρ))
def UR(c0, b = β, p = ρ):
    '''Utility if renegotiated'''
    bp = b**(1/p)
    c11 = (y-c0)/(1+bp)
    c12 = bp * c11
    return u(c0, p=p) + b*u(c11, p=p) + b*u(c12, p=p)
def UN(c0, b = β, p = ρ):
    '''Utility of committed'''
    c01 = (y-c0)/2
    c02 = c01   # the committed contract splits the remaining resources equally
    return u(c0, p=p) + b*u(c01, p=p) + b*u(c02, p=p)
def CC(c0, b = β, p = ρ):
    '''Consumption paths: committed (c01, c02) and renegotiated (c11, c12)'''
    bp = b**(1/p)
    c01 = c02 = (y-c0)/2
    c11 = (y-c0)/(1+bp)
    c12 = bp * c11
    return c01, c02, c11, c12
cc = np.linspace(100,180,100)
def compare(b=β, p=ρ):
plt.figure(1, figsize=(10,10))
bp = b**(1/p)
c0e = y/(1+2*bp)
c0p = y/(1+ (1+bp) * ((b+bp)/(1+bp))**(1/p) )
print(c0e,c0p)
plt.subplot(311)
plt.plot(cc,UR(cc, b,p),"--")
plt.plot(cc,UN(cc, b, p))
plt.axvline(c0e)
plt.axvline(c0p)
plt.grid()
plt.subplot(312)
plt.plot(cc,UN(cc,b, p) - UR(cc, b, p))
plt.grid()
c01 = c02 = (y-cc)/2
c11 = (y-cc)/(1+bp)
c12 = bp * c11
plt.subplot(313)
plt.grid()
plt.plot(cc,c01,'b', cc, c02, 'b--')
plt.plot(cc,c11,'r', cc, c12, 'r--')
interact(compare, b=(0.5, 1.5, 0.1), p=(0.5,1.5, 0.1))
Explanation: Continuation utility
End of explanation
from IPython.display import Image, display
i = Image(filename='Figure1.jpg')
display(i)
Explanation: Why is the utility penalty so small for widening the variance this much?
End of explanation
beta = 0.5
rho = 1
Y = 300
Explanation: For the CRRA case it's easy to find closed form solutions:
$$c_0^{fc} = \frac{\sum y}{1+2\beta^\frac{1}{\rho}}$$
$$c_1^{fc} = c_2^{fc} = \beta^\frac{1}{\rho} c_0^{fc} $$
A simple numerical example
Suppose the model parameters were as follows (and as all along $r=0$ and $\delta=1$)
End of explanation
def c0fc(beta=beta, rho=rho):
'''Full Commitment contract'''
btr = beta**(1/rho)
Y = 300
c0 = Y/(1+2*btr)
return c0, btr*c0, btr*c0
Explanation: The following function solves for period zero's optimal 'full commitment contract' using the equations above:
End of explanation
c0fc()
Explanation: The optimal contract for these parameters is
End of explanation
[100, 100, 100] - np.array(c0fc())
Explanation: If the consumer had an income stream $ y =(100, 100, 100)$ then we'd interpret this as a borrowing contract, as the period 0 consumer would want to borrow 50 in period zero and then balance repayments between period 1 and 2.
Saving/repayments (positives) and borrowing/dissaving (negatives) in each period would be written:
End of explanation
%matplotlib inline
import numpy as np
#import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import gridspec
from ipywidgets import interact,fixed
plt.rcParams['figure.figsize'] = 10, 8
np.set_printoptions(precision=2)
def c0own(beta=beta, rho=rho):
'''Own-smoothing contract'''
btr = beta**(1/rho)
lm = (beta + btr)/(1+btr)
c0 = Y/(1+(1+btr)*lm**(1/rho))
c1 = (Y-c0)/(1+btr)
c2 = btr*c1
return c0, c1, c2
def plotC(rho=rho):
bt = np.linspace(0,1, 100)
fig, ax = plt.subplots(figsize=(7,6))
c0F,c1F,c2F = c0fc(bt, rho)
c0o,c1o,c2o = c0own(bt, rho)
ax.plot(bt, c0F)
ax.plot(bt, c1F)
ax.plot(bt, c0F+c1F,'r')
ax.plot(bt, c0o,'--')
ax.plot(bt, c1o,'--')
ax.plot(bt, c0o+c1o,'r--')
ax.plot(bt, c2o,'--')
fig.suptitle(r'$\rho$ = {}'.format(rho),fontsize=18)
ax.set_xlabel(r'$\beta$', fontsize=16)
plt.grid()
plt.show()
return
c0fc(beta, rho)
c0own()
interact(plotC, rho=(0.1,3,0.05))  # plotC only takes rho; passing y0/y1 sliders would raise a TypeError
Explanation: If on the other hand the consumer had an income stream $ y =(200, 50, 50)$ then we'd interpret this as a savings contract, with the consumer saving 50 in period zero to be shared equally between period 1 and 2 consumption.
refinance and self-control
We recast this slightly to focus on the role of savings. Period 0 self (henceforth 'zero-self') chooses period zero savings $s_0$ (and by implication period 0 consumption $c_0 = y_0 - s_0$). In period 1 his later 'one-self' reacts by choosing her own preferred period 1 savings $s_1$ (or, same thing $c_1$ and by implication $c_2$).
We need to find one-self's 'reaction function'. They choose $c_1$ to maximize
$$u(c_{1})+\beta \delta u(c_{2})$$
subject to
$$c_1(1+r)+c_2 =y_1 (1+r) +y_2+s_0 (1+r)^2$$
The FOC give us
$$u'(c_{1})=\beta \delta(1+r) {u'(c_2)} $$
which for this CRRA case give us
$$c_{2} = [\beta \delta (1+r) ]^\frac{1}{\rho} c_1$$
Substituting this into the intertemporal budget constraint above we can solve for the reaction function:
$$ c_1(s_{0} )= \frac{s_0 (1+r)^2+y_1 (1+r) +y_2}
{(1+r)+[ \beta \delta(1+r)]^\frac{1}{\rho} }
$$
Note that if $\delta=\frac{1}{1+r}$ and $r=0$ then this last expression simplifies to:
$$ c^1_1(s_{0} )= \frac{s_0+y_1 +y_2}{1 + \beta ^\frac{1}{\rho} } $$
Without loss of generality we will focus on this stripped down version of the expression.
Note that the zero-self wants each extra dollar of saving (or debt) $s_0$ that they pass on to period one to be split so that half of that dollar goes to period 1 consumption and the other half to period 2 consumption. In other words they want
$$\frac{dc^0_1}{ds_0} = \frac{dc_2}{ds_0} =\frac{1}{2}$$
But One-self instead prefers
$$\frac{dc^1_1}{ds_0} =\frac{1}{1+\beta^\frac{1}{\rho}} > \frac{1}{2}$$
and
$$\frac{dc^1_2}{ds_0} =\frac{\beta^\frac{1}{\rho}}{1+\beta^\frac{1}{\rho}}<\frac{1}{2}$$
Zero-self will therefore act to strategically control how much savings is passed on, behaving much like a Stackelberg leader.
They choose $s_0$ to:
$$\max u(y_0-s_{0})+\beta \left[ u(c^1_1(s_0))+u(c^1_2(s_0))\right] $$
Recall that One-self will always have $c_2^1 =\beta^\frac{1}{\rho} c_1^1$ and also note that for the CRRA case we can write
$u(\beta^\frac{1}{\rho}c_1^1)=\beta^\frac{1-\rho}{\rho}u(c_1^1)$
so we can rewrite the objective as:
$$\max u(y_0-s_{0})+\beta (1+\beta^\frac{1-\rho}{\rho}) u(c^1_1(s_0))$$
The FOC will therefore be:
$$u'(y_0-s_0) = \beta (1+\beta^\frac{1-\rho}{\rho}) u'(c_1^1(s_0)) \frac{dc_1^1}{ds_0}$$
$$u'(y_0-s_0) = \frac{\beta +\beta^\frac{1}{\rho}}{1+\beta^\frac{1}{\rho}} u'(c_1^1(s_0)) $$
and after some substitutions and simplifications:
$$(y_0-s_0)^{-\rho}
= \frac{ \beta+\beta^\frac{1}{\rho}}{1+\beta^\frac{1}{\rho}}
(\frac{s_0+y_1 +y_2}{1 + \beta ^\frac{1}{\rho} })^{-\rho} $$
$$ \frac{s_0+y_1 +y_2}{1 + \beta ^\frac{1}{\rho} }
= \Lambda^\frac{1}{\rho}
(y_0-s_0) $$
where $\Lambda = \frac{\beta +\beta^\frac{1}{\rho}}{1+\beta^\frac{1}{\rho}}$
Or solving for $c_0$ :
$$c_0 = \frac{\sum y}{1+\Lambda^\frac{1}{\rho}(1+\beta^\frac{1}{\rho})} $$
Note that we can compare period 0 consumption under this 'own smoothing' situation to the full commitment situation where we have shown that:
$$c_0 = \frac{\sum y}{1+2\beta^\frac{1}{\rho}} $$
From which it's clear that saving is higher or lower depending on a comparison of the two denominators... Empirically however the difference in period 0 consumption seems very small... Most of the action is in terms of periods 1 and 2 as the following shows.
Visualized
As in the other notebooks we import a module that gives us a generic 'Contract' class that defines a few attributes (e.g. default parameters of the utility function, initial endowments, etc.) and useful methods to calculate profits, utility, etc.
End of explanation
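As a quick added illustration of the wedge just described (not in the original code), the marginal split of an extra saved dollar can be compared directly; the parameter values simply match those set earlier in this notebook.
b_ex, r_ex = 0.5, 1.0                     # example beta and rho
btr_ex = b_ex**(1 / r_ex)
zero_self_share = 0.5                     # Zero-self wants half of each extra saved dollar in period 1
one_self_share = 1 / (1 + btr_ex)         # One-self actually puts this share into period 1
print(zero_self_share, one_self_share)    # 0.5 versus 0.666..., so One-self over-consumes in period 1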
def c1br(c0, beta=beta, rho=rho):
'''One Selfs best response to Zero-self contract '''
btr = beta**(1/rho)
c11 = (Y - c0)/(1+btr)
c12 = btr*c11
return c0,c11,c12
def c0rp(beta=beta, rho=rho):
'''Zero's Stackelberg contract '''
btr = beta**(1/rho)
lam = (beta + btr)/(1+btr)
lmr = lam**(1/rho)
c00rp = Y/(1+(1+btr)*lmr)
c01rp = lmr*c00rp
c02rp = btr*c01rp
return c00rp, c01rp, c02rp
c0fc()
cc = np.linspace(0,300,300)
cc = np.linspace(0,300, 100)
btr = beta**(1/rho)
lam = (beta + btr)/(1+btr)
lmr = lam**(1/rho)
Explanation: Best response function, and Stackelberg
End of explanation
def bdplot(beta=beta, rho=rho, fc = True, rp = True, figname='Figure'):
'''Plot two quadrant diagram representation. The flag fc and rp allow us to turn on
or supress full commit or '''
ymax = 200
aspect = 1
cfc = c0fc(beta=beta, rho=rho)
crp = c0rp(beta=beta, rho=rho)
fontsize = 18
btr = beta**(1/rho)
lam = (beta + btr)/(1+btr)
lmr = lam**(1/rho)
fig = plt.figure()
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 2])
ax0 = plt.subplot(gs[1])
ax0.set_title(r'$\beta=$ {:2.1f} $\rho=$ {:2.1f}'.format(beta, rho))
ax0.set_ylim(0, ymax)
ax0.yaxis.set_label_position("right")
ax0.yaxis.tick_right()
ax0.set_xlabel(r'$c_0$', fontsize=fontsize)
ax0.set_ylabel(r'$c_1$', fontsize=fontsize)
ax1 = plt.subplot(gs[0])
ax1.yaxis.set_ticks_position('left')
ax1.xaxis.set_ticks_position('bottom')
ax1.yaxis.set_label_position('left')
ax1.set_ylabel(r'$c_1$', fontsize=fontsize)
ax1.set_title(r'$cfc=$ ({:3.1f}, {:3.1f}, {:3.1f})'.format(cfc[0],cfc[1],cfc[2]))
if fc:
fcstyle = '--'
fccolor = 'r'
if fc and not rp:
linestyle = '-'
ax0.plot(cc, 0.5*(Y-cc),'r--', label='Zero FC future')
ax0.plot(cc, btr*cc, linestyle=fcstyle, color = fccolor, label='FC smooth')
ax0.plot(cfc[0],cfc[1], marker='o')
ax0.plot(cc, Y-cc, ':', label = 'Future net income')
ax1.plot(cc, (Y-cfc[0])-cc,'k-')
ax1.plot(cc, cc*btr**(-1),'b-')
ax1.plot(cc, cc,'r--')
ax1.plot(cfc[2], cfc[1],marker='o')
ax0.text(250, btr*230, r'$\beta^\frac{1}{\rho}$', fontsize=15)
xx = [cfc[0]]
yy = [cfc[1]]
zz = [cfc[2]]
[ax0.plot([dot_c0, dot_c0], [0, dot_c1],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([0, dot_c0], [dot_c1, dot_c1],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([0, dot_c0], [dot_c1, dot_c1],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([dot_c0, dot_c0], [dot_c1, Y-dot_c0],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([dot_c0, 0], [Y-dot_c0, Y-dot_c0],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax1.plot([dot_c2, dot_c2], [0, dot_c1],':',linewidth = 1,color='black' ) for dot_c1, dot_c2 in zip(yy,zz) ]
[ax1.plot([dot_c2,0], [dot_c1, dot_c1],':',linewidth = 1,color='black' ) for dot_c1, dot_c2 in zip(yy,zz) ]
if rp:
ax0.plot(cc, c1br(cc, beta, rho)[1],'b-', label = 'One BR')
ax0.plot(cc, lmr*cc,'b-', label='Stackelberg')
ax0.plot(crp[0],crp[1],marker='o')
ax0.text(250, lmr*235, r'$\Lambda^\frac{1}{\rho}$', fontsize=15)
ax1.plot(crp[2], crp[1],marker='o')
xx = [crp[0]]
yy = [crp[1]]
zz = [crp[2]]
[ax0.plot([dot_c0, dot_c0], [0, dot_c1],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([0, dot_c0], [dot_c1, dot_c1],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([0, dot_c0], [dot_c1, dot_c1],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([dot_c0, dot_c0], [dot_c1, Y-dot_c0],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax0.plot([dot_c0, 0], [Y-dot_c0, Y-dot_c0],':',linewidth = 1,color='black' ) for dot_c0, dot_c1 in zip(xx,yy) ]
[ax1.plot([dot_c2, dot_c2], [0, dot_c1],':',linewidth = 1,color='black' ) for dot_c1, dot_c2 in zip(yy,zz) ]
[ax1.plot([dot_c2,0], [dot_c1, dot_c1],':',linewidth = 1,color='black' ) for dot_c1, dot_c2 in zip(yy,zz) ]
ax1.set_ylim(0,ymax)
ax1.set_xlim(0,150)
ax1.invert_xaxis()
ax1.set_xlabel('$c_2$', fontsize=fontsize)
for side in ['right','top']:
ax0.spines[side].set_visible(False)
ax1.spines[side].set_visible(False)
#scaling and grid
ax0.set_aspect(aspect)
ax1.set_aspect(1)
#ax0.grid()
#ax1.grid()
#ax0.text(20, 0.5*(Y-50), r'$\frac{1}{2}\sum (y-c_0)$', fontsize=14)
#ax0.text(20, (1/(1+btr))*(Y-30), r'$\frac{1}{1+\beta^\frac{1}{\rho}}\sum (y-c_0)$', fontsize=14)
ax1.text(btr*150, 150, r'$\beta^\frac{1}{\rho}$', fontsize=15, rotation='vertical')
fig.subplots_adjust(wspace=0)
plt.show()
fig.savefig(figname+'.jpg', dpi=fig.dpi)
return
bdplot(fc=True, rp=False, figname='Figure1')
interact(bdplot,beta=(0.1,1,0.1),rho=(0.1,3,0.05))
import Contract
cC = Contract.Competitive(beta=0.7)
cC.print_params()
c0fc(cC.beta, cC.rho)
cC = Contract.Competitive(beta = cC.beta)
cCF = cC.fcommit()
c0own(cC.beta, cC.rho)
cC.reneg_proof().x
cCRP = cC.ownsmooth()
plt.rcParams["figure.figsize"] = (10, 8)
c1min = 0
c1max = 160
c1 = np.arange(0,c1max,c1max/20)
c1_ = np.arange(40,c1max,c1max/20)
y = cC.y
#cCRP = cCRPa
#indifference curves functions
ubar0 = cC.PVU(cCF[1:3], 1.0)
idc0 = cC.indif(ubar0, 1.0)
ubar1 = cC.PVU(cCF[1:3],cC.beta)
idc1 = cC.indif(ubar1,cC.beta)
ubar0RP = cC.PVU(cCRP[1:3], 1.0)
idc0RP = cC.indif(ubar0RP,1.0)
ubar1RP = cC.PVU(cCRP[1:3], cC.beta)
idc1RP = cC.indif(ubar1RP,cC.beta)
fig, ax = plt.subplots()
# trick to display contract points and coordinate lines http://bit.ly/1CaTMDX
xx = [cCF[1], cCRP[1]]
yy = [cCF[2], cCRP[2]]
plt.scatter(xx,yy, s=50, marker='o',color='b')
[plt.plot([dot_x, dot_x] ,[0, dot_y],':',linewidth = 1,color='black' ) for dot_x, dot_y in zip(xx,yy) ]
[plt.plot([0, dot_x] ,[dot_y, dot_y],':',linewidth = 1,color='black' ) for dot_x, dot_y in zip(xx,yy) ]
# indifference curves
plt.plot(c1_,idc0(c1_),color='blue')
#plt.plot(c1_,idc1(c1_),color='red')
plt.plot(c1_,idc0RP(c1_),color='blue')
plt.plot(c1_,idc1RP(c1_),color='red')
# rays
plt.plot(c1, c1,':',color='black')
plt.plot(c1, cC.beta**(1/cC.rho)*c1,':',color='black')
# isoprofit line(s)
isoprofline = cC.isoprofit(cC.profit(cCF,cC.y)-(y[0]-cCF[0]), y)
plt.plot(c1, isoprofline(c1),':' )
ax.spines['right'].set_color('none'), ax.spines['top'].set_color('none')
plt.ylim((c1min, c1max*0.9)), plt.xlim((c1min, c1max*0.9))
ax.xaxis.tick_bottom(),ax.yaxis.tick_left()
plt.xlabel('$c_{1}$'); plt.ylabel('$c_{2}$')
# label the points
ax.text(cCF[1]-1, cCF[2]+3, r'$F$', fontsize=15)
ax.text(cCRP[1]-3, cCRP[2]-5, r'$P$', fontsize=15)
ax.text(cCRP[1], -6, r'$c^{cp}_{1}$', fontsize=15)
ax.text(-8, cCRP[2], r'$c^{cp}_{2}$', fontsize=15)
ax.text(cCF[1], -6, r'$c^{cf}_{1}$', fontsize=15)
ax.text(-8, cCF[2], r'$c^{cf}_{2}$', fontsize=15)
#ax.text(0, -10, r'Competitive $\kappa = {}$'
# .format(cC.kappa), fontsize=12)
#ax.text(0, -15, r'$\beta = {}, \ \rho = {}$'
# .format(cC.beta, cC.rho), fontsize=12)
# isoprofit lines could be plotted like so
#isop = cC.isoprofit( cC.kappa, cCRP) # returns a function of c1
#plt.plot(c1_, isop(c1_),':')
#turn off the axis numbers
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
plt.savefig('figs\CompetitiveFig.eps', format='eps')
plt.show()
%matplotlib inline
import sys
import numpy as np
from scipy.optimize import minimize
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10, 8)
plt.rcParams['axes.formatter.useoffset'] = 'False'
np.set_printoptions(precision=2) # array printing format
Explanation: Two-quadrant plot
End of explanation
import Contract
c = Contract.Competitive(beta = 0.7)
c.rho = 0.5
c.y = [200,50,50]
c.print_params()
Explanation: Now the definition of the Contract class.
End of explanation
def C_opt(c0):
    '''Return contract from consuming y0-s0 and splitting rest equally across c1 and c2'''
    s0 = c.y[0] - c0
    ce = (np.sum(c.y[1:])+s0)/2
    C = [c.y[0] - s0, ce, ce]
    return C
def C_bias(c0):
    '''Return contract from consuming y0-s0 and then having One self allocate across c1 and c2'''
    B1p = c.beta**(1/c.rho)
    s0 = c.y[0] - c0
    c1 = (np.sum(c.y[1:])+s0)/(1+B1p)
    c2 = B1p * c1
    C = [c.y[0] - s0, c1, c2]
    return C
Explanation: Plot of Zero self's utility as a function of $s_0$
End of explanation
def C_opt(s0):
    '''Return the contract from saving s0 in period 0 and splitting the rest equally across c1 and c2'''
    ce = (np.sum(c.y[1:])+s0)/2
    C = [c.y[0] - s0, ce, ce]
    return C
def C_bias(s0):
    '''Return the contract from saving s0 in period 0 and letting One-self allocate the rest across c1 and c2'''
    B1p = c.beta**(1/c.rho)
    c1 = (np.sum(c.y[1:])+s0)/(1+B1p)
    c2 = B1p * c1
    C = [c.y[0] - s0, c1, c2]
    return C
Explanation: Just like above but with $s_0$ as the argument:
End of explanation
C_bias(10)
def C_own(y):
    '''Zero-self's own (no-commitment) contract, using the closed form for c0 derived above'''
    b, rh = c.beta, c.rho
    B1p = b**(1/rh)
    Lp = ((b + B1p)/(1 + B1p))**(1/rh)   # Lambda**(1/rho), with Lambda as defined in the text
    c0 = np.sum(y)/(1 + (1 + B1p)*Lp)
    s0 = y[0] - c0
    c1, c2 = C_bias(s0)[1], C_bias(s0)[2]
    C = [c0, c1, c2]
    return C
c.y = [180,60,60]
C_own(c.y) , c.fcommit()
c.ownsmooth()
sum(C_own(c.y))
sz=np.arange(-50,50)
C_opt(sz)[1]
C_bias(sz)[1]
c.beta**(1/c.rho)
plt.plot(sz,C_opt(sz)[1],label='copt[1]')
plt.plot(sz,C_bias(sz)[1],label='cbias[1]')
plt.plot(sz,C_opt(sz)[2],label='copt[2]')
plt.plot(sz,C_bias(sz)[2],label='cbias[2]')
plt.legend()
C_opt(10)
C_bias(10)
cF = c.fcommit()
cF0=cF[0]
sF = c.y[0]-cF[0]
sF
# Plot Zero self utility under each
cz=np.arange(cF0-50,cF0+50)
plt.plot(cz,c.PVU(C_opt(cz),c.beta))
plt.plot(cz,c.PVU(C_bias(cz),c.beta))
plt.xlim(cF0-50,cF0+50)
plt.axvline(cF[0], color='k', linestyle='dashed')
plt.axvline(0, color='k', linestyle='solid')
C_opt(50)[0]  # U_opt was never defined; C_opt appears to be what was meant
s=np.arange(1,sum(c.y))
Explanation: $$ \frac{s_0+y_1 +y_2}{1 + \beta ^\frac{1}{\rho} }
= \Lambda^\frac{1}{\rho}
(y_0-s_0) $$
where $\Lambda = \frac{\beta (1+\beta^\frac{1-\rho}{\rho})}{1+\beta^\frac{1}{\rho}}$
So we can solve for $s_0$ as:
$$s_0 = \frac{y_0 \Lambda^\frac{1}{\rho}(1+\beta^\frac{1}{\rho}) -y_1 -y_2}{1+\Lambda^\frac{1}{\rho}(1+\beta^\frac{1}{\rho})} $$
End of explanation
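A small added helper (not in the original) that evaluates the own-smoothing solution implied by the algebra above, written through $c_0=\sum y/(1+\Lambda^{1/\rho}(1+\beta^{1/\rho}))$ and assuming $\delta=1$, $r=0$; the example income stream is arbitrary.
def own_smooth_s0(y, beta, rho):
    # Lambda = (beta + beta**(1/rho)) / (1 + beta**(1/rho)), as defined in the text
    btr = beta**(1 / rho)
    lam_r = ((beta + btr) / (1 + btr))**(1 / rho)   # Lambda**(1/rho)
    c0 = sum(y) / (1 + lam_r * (1 + btr))
    return y[0] - c0                                # s0 = y0 - c0
print(own_smooth_s0([180, 60, 60], beta=0.7, rho=0.5))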
cC = Contract.Competitive(beta = 0.5)
cC.rho = 1.25
cC.y = [200,50,50]
cC.print_params()
cC.beta**(1/cC.rho)
Explanation: Let's plot an indifference curve in c1-c2 space. For example if the agent in autarky has income ${y_{0},y_{1},y_{2}}$ and no access to saving or borrowing then (from period 0 self's perspective) entering period 1 they have reservation utility $u(y_{1})+\delta u(y_{2})=\overline{u}_{0}$. But when period 1 rolls around their preferences change. From period 1 self's perspective they have reservation utility $u(y_{1})+\beta \delta u(y_{2})=\overline{u}_{1}$.
Exclusive competitive contracts
The contract class defines a generic contract which holds consumption stream objects of the form $\left( c_{0},c_{1},c_{2}\right)$ and allows a few manipulations. Now comes the job of solving for optimal contracts and we do this with a CompetitiveContract class which inherits the attributes and methods of the more generic contract class and then adds a few methods such as calculating the optimal contract full commitment and renegotiation-proof contracts in the competitive lender case. Note that the methods have the same names as the mon_contract class but some behave differently, reflecting the reversed objective and constraint.
Full-commitment contracts
When the competitive lender can commit to not-renegotiating the contract (i.e. to not pandering to the period-1 self's desire to renegotiate period-0's contract) and the contracts are exclusive (so no third party lender will enter to offer such renegotiation either) the contract solves
$$\max \ u\left( c_{0}\right) +\beta \left[ \delta u\left( c_{1}\right) +\delta ^{2}u\left( c_{2}\right) \right] $$
subject to the zero profit constraint
$$s.t. (y_{0}-c_{0})+\frac{(y_{1}-c_{1})}{(1+r)}+\frac{(y_{2}-c_{2})}{(1+r)^{2}} \geq 0$$
When $\delta =\frac{1}{(1+r)}$ for the CRRA case an optimum will set $c_{1}=c_{2}=\overline{c}$ and $\overline{c}=\beta ^{\frac{1}{\rho }}c_{0}$ from which a closed form solution can be easily found (see fcommit() function below for formulas).
Note that we are here assuming that the consumer has no choice but to consume their income stream $y$ under autarky. This would be true if the agent does not have access to any 'own savings' technologies. Later below we see how things change only slightly when we allow them to use own savings to create a slightly smoother autarky consumption stream (not perfectly smooth because they cannot overcome their self-control problems on their own).
Renegotiation-proof contracts
[THIS EXPLANATION HAS NOT BEEN UPDATED YET]
The agent's period-1-self's preferences differ from those of his period 0 self so they will often want to renegotiate any contract their period 0 self contracted, and the bank can profit from this renegotiation so long as its renegotiation cost $\kappa$ is low. In particular if the period-0-self agreed to contract $\left( \bar{c}_{0},\bar{c}_{1},\bar{c}_{2}\right)$ a competitive firm would offer to renegotiate the remaining $(\bar{c}_{1},\bar{c}_{2})$ to contract $\left( c_{1}^{r},c_{2}^{r}\right)$ chosen to maximize
$$\max \ \ u(c_{1})+\beta \delta u(c_{2}) $$
subject to $$(y_{1}-c_{1})+\frac{(y_{2}-c_{2})}{(1+r)} \geq 0$$
We can show from the agent's first order conditions for the CRRA case that a renegotiated contract will always satisfy $c_{2}=\beta ^{\frac{1}{\rho }}c_{1}$ and indeed for CRRA we get the closed form:
$$ \hat{c}_{0} =\frac{\sum y_{i}}{1+2\beta^{1/\rho}}$$
and $c_{2}^{r}(\bar{c}_{1},\bar{c}_{2})=\beta ^{\frac{1}{\rho }}c_{1}^{r}(\bar{c}_{1},\bar{c}_{2})$. See the reneg(c) function.
A sophisticated present-biased consumer anticipates that this type of renegotiation may happen and will only agree to renegotiation-proof contracts that do not renegotiate to favor their period 1 selves. The profit-maximizing renegotiation-proof contract solves
$$\max_{c_{0},c_{1},c_{2}}\Pi \left( c_{0},c_{1},c_{2}\right) $$
$$U(c_{0},c_{1},c_{2})\geq U_{0}(y_{0},y_{1},y_{2})$$
$$\Pi \left( c_{1}^{r},c_{2}^{r}\right) -\Pi \left( c_{1},c_{2}\right) \leq \overline{\kappa }$$
The first constraint is the period 0 self's participation constraint and the second is the renegotiation-proofness constraint requiring that the bank not find it profitable to offer to renegotiate to the contract that the period-1 self will demand.
Let's create an object instance which we will call cC, print out the parameters associated with this instance and then run a few checks to make sure the reneg function works right:
End of explanation
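For concreteness, here is a small added sketch (separate from the Contract class's own reneg() method) of the renegotiated contract implied by the first order condition quoted above, using the period 1 and 2 endowments from the constraint and assuming $\delta=1$, $r=0$.
def reneg_closed_form(y1, y2, beta, rho):
    # Period-1 self's preferred split of remaining resources: c2 = beta**(1/rho) * c1
    btr = beta**(1 / rho)
    c1r = (y1 + y2) / (1 + btr)
    return c1r, btr * c1r
print(reneg_closed_form(50, 50, beta=0.5, rho=1.25))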
#Analytically calculated renegotiation proof when kappa=0
def ccrpa(C):
B = C.beta**(1/C.rho)
D = 1/(1+(1+B)*((C.beta+B)/(1+B))**(1/C.rho))
c0 = sum(C.y)*D
c1 = (sum(C.y)-c0)/(1+B)
c2 = B* c1
return np.array([c0, c1, c2])
cCRPa =ccrpa(cC)
print(cCRPa, cC.PVU(cCRPa,1))
# Let's find reneg-proof contract for pure profit with zero reneg. cost
cCF=cC.fcommit()
cC.kappa = 0
cC.guess = cCRPa
cCR = cC.reneg(cCF)
cCRP = cC.reneg_proof().x
cCRP
# compare three contracts (string label into var name)
print('kappa = ',cC.kappa)
print('y =',cC.y)
print("consumption and net saving in each period")
for con in ['cCF ', 'cCR ', 'cCRP','cCRPa']:
C = eval(con)
y = cC.y
print(con + " : {} sum : {:4.0f}"
.format(C, C.sum()))
print(con + "(net s): {} profit: {:4.2f}"
.format(y - C, cC.profit(C,cC.y)))
print("PVU0: {:4.3f} {} b*[]: {:4.3f}"
.format(cC.PVU(C,cC.beta),cC.u(C),
cC.beta*cC.u(C)[1:].sum() ))
print("PVU(1): {:4.4f}"
.format(cC.PVU(C[1:],cC.beta)))
print("rate: {:4.2f}%".format(-100*(C[1:].sum()-sum(y[1:]))/C[0] ))
print()
Explanation: Full commitment contract: closed form solution
Case 1: where potential renegotiation surplus goes to consumer
End of explanation
c1min, c1max = np.min(cCR)*0.6, np.max(cC.y)
c1min = 0
c1max = 160
c1 = np.arange(0,c1max,c1max/20)
c1_ = np.arange(40,c1max,c1max/20)
y = cC.y
#cCRP = cCRPa
#indifference curves functions
ubar0 = cC.PVU(cCF[1:3], 1.0)
idc0 = cC.indif(ubar0, 1.0)
ubar1 = cC.PVU(cCF[1:3],cC.beta)
idc1 = cC.indif(ubar1,cC.beta)
ubar0RP = cC.PVU(cCRP[1:3], 1.0)
idc0RP = cC.indif(ubar0RP,1.0)
ubar1RP = cC.PVU(cCRP[1:3], cC.beta)
idc1RP = cC.indif(ubar1RP,cC.beta)
fig, ax = plt.subplots()
# trick to display contract points and coordinate lines http://bit.ly/1CaTMDX
xx = [cCF[1], cCRP[1]]
yy = [cCF[2], cCRP[2]]
plt.scatter(xx,yy, s=50, marker='o',color='b')
[plt.plot([dot_x, dot_x] ,[0, dot_y],':',linewidth = 1,color='black' ) for dot_x, dot_y in zip(xx,yy) ]
[plt.plot([0, dot_x] ,[dot_y, dot_y],':',linewidth = 1,color='black' ) for dot_x, dot_y in zip(xx,yy) ]
# indifference curves
plt.plot(c1_,idc0(c1_),color='blue')
#plt.plot(c1_,idc1(c1_),color='red')
plt.plot(c1_,idc0RP(c1_),color='blue')
plt.plot(c1_,idc1RP(c1_),color='red')
# rays
plt.plot(c1, c1,':',color='black')
plt.plot(c1, cC.beta**(1/cC.rho)*c1,':',color='black')
# isoprofit line(s)
#isoprofline = cC.isoprofit(cC.profit(cMF,cC.y)-(y[0]-cCF[0]), y)
#plt.plot(c1, isoprofline(c1),':' )
ax.spines['right'].set_color('none'), ax.spines['top'].set_color('none')
plt.ylim((c1min, c1max*0.9)), plt.xlim((c1min, c1max*0.9))
ax.xaxis.tick_bottom(),ax.yaxis.tick_left()
plt.xlabel('$c_{1}$'); plt.ylabel('$c_{2}$')
# label the points
ax.text(cCF[1]-1, cCF[2]+3, r'$F$', fontsize=15)
ax.text(cCRP[1]-3, cCRP[2]-5, r'$P$', fontsize=15)
ax.text(cCRP[1], -6, r'$c^{cp}_{1}$', fontsize=15)
ax.text(-8, cCRP[2], r'$c^{cp}_{2}$', fontsize=15)
ax.text(cCF[1], -6, r'$c^{cf}_{1}$', fontsize=15)
ax.text(-8, cCF[2], r'$c^{cf}_{2}$', fontsize=15)
#ax.text(0, -10, r'Competitive $\kappa = {}$'
# .format(cC.kappa), fontsize=12)
#ax.text(0, -15, r'$\beta = {}, \ \rho = {}$'
# .format(cC.beta, cC.rho), fontsize=12)
# isoprofit lines could be plotted like so
#isop = cC.isoprofit( cC.kappa, cCRP) # returns a function of c1
#plt.plot(c1_, isop(c1_),':')
#turn off the axis numbers
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
plt.savefig('figs\CompetitiveFig.eps', format='eps')
plt.show()
# isoprofit lines could be plotted like so
# isop = cM.isoprofit( 0.0, cM.y) # returns a function of c1
# plt.plot(c1, isop(c1))
for cont in ['cCF ', 'cCR ', 'cCRP', 'cCRPa']:
print(cont +":", eval(cont))
Explanation: The bank does not profit from this type of opportunistic renegotiation, if we assume 'competition' at time of renegotiation although one might argue that the relation is ex-ante competitive but ex-post favors the bank.
A sophisticated consumer will however anticipate this type of opportunistic renegotiation and only agree to a renegotiation-proof contract.
As expected the bank's profits are lowered due to its inability to commit to not renegotiate.
Here's a plot.
End of explanation
# Note: re-run all cells above if the plot seems wrong
cC.y = np.array([100,100,100])
cCF = cC.fcommit()
num_pts = 21
kaps = np.linspace(0, 10, num_pts) # different renegotiation cost values
cCRP, pvu0RP = np.zeros((3,num_pts)), np.zeros(num_pts) # init (c0,c1,c2) and profits at each kappa
for i in range(0,num_pts): # look through kappa recalculating optimal contract each time
cC.kappa = kaps[i]
cCRP[:,i] = cC.reneg_proof().x
pvu0RP[i] = cC.PVU(cCRP[:,i],cC.beta)
c0,c1,c2 = cCRP[0,:], cCRP[1,:],cCRP[2,:] # save results for plotting
fig, (ax0, ax1) = plt.subplots(nrows = 2)
#ax0.plot(kaps, c0, label='$c_{0}$')
ax0.plot(kaps, c1, label='$c_{1}$')
ax0.plot(kaps, c2, label='$c_{2}$')
ax0.plot(kaps, np.ones(num_pts)*cCF[1], '--', label='$c_{F}$')
ax0.grid()
ax0.set_title('Reneg-Proof Contract terms, PVU and $\kappa$'), ax0.set_ylabel('consumption')
ax0.legend(loc=9,bbox_to_anchor=(0.5, -1.25), ncol = 3)
ax1.plot(kaps, pvu0RP)
ax1.set_ylabel('PVU0')
ax1.grid()
ax1.set_xlabel('renegotiation cost $\kappa$')
pvumin,pvumax = min(pvu0RP), max(pvu0RP)
plt.ylim((pvumin, pvumax))
plt.tight_layout()
plt.show()
Explanation: Optimal contract when renegotiation cost $\kappa $ >0
Plot to explore how the renegotiation cost $\kappa $ affects the terms of the contract and firm profits
End of explanation
plt.plot(kaps, c0)
plt.ylim((min(c0), max(c0)))
plt.xlabel('renegotiation cost $\kappa$')
plt.show()
Explanation: At lower renegotiation costs the bank is forced to offer less consumption smoothing in periods 1 and 2 as a way to credibly commit to limit their gains to renegotiation with a period 1 self. Hence bank profits rise with their ability to commit to incur a renegotiation cost $\kappa$
We haven't plotted $c_{0}$ for each $\kappa$ but that's because it varies less relative to $c_{1}, c_{2}$ and stays well above the full commitment consumption smoothing level. The following shows a non-monotonic relation though we should remember this is varying very little.
End of explanation
# Similar to above but solve for contract as a function of firm type ALPHA
y = np.array([100,100,100]) # To see how endowment affects contract
cC.y = y
cCF = cC.fcommit()
num_pts = 10
alphs = np.linspace(0.0,1.0,num_pts) # iterate over different values of beta
HA = 10*(np.ones(num_pts) - alphs) # h(alpha)/alpha or cost of renegotiaton
cCRP = np.zeros((3,num_pts)) # matrix for (c0,c1,c2) at each kappa
pvu0RP = np.zeros(num_pts) #PVU0 when contracting with alpha=1 firm
for i in range(0,num_pts):
cC.kappa = HA[i] # change optimal contract
cCRP[:,i] = cC.reneg_proof().x
cC.guess = cCRP[:,i] # use this sol as guess for next optimum
pvu0RP[i] = cC.PVU(cCRP[:,i],cC.beta)
#last entry is 'pure profit' pvu0RP[-1]
pvu0RP_pure = pvu0RP[-1]
c0,c1,c2 = cCRP[0,:], cCRP[1,:],cCRP[2,:] # save results for plotting
fig3 = plt.figure()
plt.plot(alphs,c1,'--',label='$c_{1}$')
plt.plot(alphs,c2,label='$c_{2}$')
plt.plot(alphs,np.ones(num_pts)*cCF[1],label='$c_{1}$ commit')
plt.grid()
plt.title('Renegotiation Proof Contract and alpha' )
plt.xlabel('alpha ')
plt.ylabel('consumption')
plt.legend(loc='upper left')
plt.show()
Explanation: The choice to become a commercial non-profit
Modeling the non-profit
The no-renegotiation constraint has two parts. A pure for-profit captures fraction $\alpha = 1$ of profits and faces renegotiation cost $h(1)$, while a
not-for-profit of type $\alpha < 1$ captures only fraction $\alpha$ of profits but faces a higher renegotiation cost $h(\alpha) > h(1)$. More generally a non-profit of type $\alpha$ has a no-renegotiation constraint of the form
$$\alpha \left[ \Pi ^{R}-\Pi \right] \geq h(\alpha )$$
To be specific here let's model this as
$$h(\alpha )=\kappa \left( 1-\alpha \right) $$
So that at $\alpha =1$ there is no cost to renegotiation and at $0< \alpha <1$ there is a non-negative non-pecuniary cost of up to $\kappa$. The constraint can then be written as
$$\left[ \Pi ^{R}-\Pi \right] \geq C(\alpha )=\frac{h(\alpha )}{\alpha }$$
End of explanation
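As a small added illustration of this specification, the effective commitment cost $C(\alpha)=\kappa(1-\alpha)/\alpha$ can be tabulated directly; the value $\kappa=10$ below simply mirrors the scaling used in the scripts above and is otherwise arbitrary.
import numpy as np
def C_of_alpha(alpha, kappa=10.0):
    # effective no-renegotiation cost h(alpha)/alpha with h(alpha) = kappa*(1 - alpha)
    return kappa * (1.0 - alpha) / alpha
for a in np.linspace(0.2, 1.0, 5):
    print(round(a, 2), round(C_of_alpha(a), 2))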
cC.y = [130,85,85]
#last entry is 'pure profit' pvu0RP[-1]
pvu0RP_full = pvu0RP[-1]*np.ones(num_pts)
pvu0_aut = cC.PVU(cC.y,cC.beta)*np.ones(num_pts)
fig = plt.figure()
ax = fig.add_subplot(111)
plt.title('Renegotiation-Proof PVU0 vs. alpha')
plt.xlabel(r'type of firm $ \alpha$')
plt.ylabel('0-self present discounted utility')
plt.plot(alphs,pvu0RP_full,'--',label='PVU from pure-profit')
plt.plot(alphs,pvu0RP,label='PVU from non-profit')
plt.plot(alphs,pvu0_aut,label='PVU from autarky')
ax.fill_between(alphs, np.fmax(pvu0RP,pvu0_aut), pvu0_aut,hatch='/')
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.1),
fancybox=None, ncol=5)
plt.show()
Explanation: 'Commercial' non-profits
A 'pure' for-profit (with $\alpha$=1.0) earns a reduced (possibly negative) profit due to its inability to commit. Its profit is seen in the plot as the height of the horizontal line.
Any non-profit with $\alpha$ above about 0.4 and below 1.0 can better commit to not renegotiate a larger set of contracts and therefore can offer a more profitable renegotiation-proof contract. Even though they capture only fraction $\alpha$ of those profits, the take home profits exceed the profits of the pure for-profit.
End of explanation
cC.print_params()
#plot(alphs,cMRP[0,:],label='$c_{0}$')
fig = plt.figure()
plt.plot(alphs,cCRP[0,:]-cC.y[0],label='$-c_{0}$')
plt.plot(alphs,cC.y[1]-cCRP[1,:],'--',label='$c_{1}$')
plt.plot(alphs,cC.y[2]-cCRP[2,:],label='$c_{2}$')
plt.title('Consumption profile as a function of alpha')
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
fancybox=None, ncol=5)
plt.show()
Explanation: The figure above compares what the customer can get (present discounted utility of period 0 self) from autarky compared to what he could get contracting in a situation with competition and exclusive contracts.
In the particular example ($\beta = 0.5, \rho=0.75, y=[130,85, 85]$) the autarky consumption bundle is rather close to what could be offered via consumption smoothing, so the total surplus to be divided is not that large. The pure for-profit firm offers a renegotiation-proof contract that does such a poor smoothing job that the consumer prefers to stay in autarky. However a commercial non-profit with alpha below ~ 0.8 offers a smoother contract and hence gains to trade.
Now as presently modeled that non-profit will of course get zero profits (80% of zero!). We can model instead situations where at any period 1 renegotiation it's the consumer who gives up all surplus, since the assumption of exclusive contracts means the period 1 self will be willing to give up quite a bit. Or maybe they Nash bargain. These cases might be more realistic.
We'll get to these in a moment but first let's look at how the above situation depends on the initial y vector.
End of explanation
#print("alpha c0 c1 c2 profit = (y0-cMRP0) + (y1-cMRP1) + (y2-cMRP2)")
#print("-"*79)
#for i,a in enumerate(alphs):
# print("{:5.2f}: {:6.2f} {:6.2f} {:5.2f}, {:8.2f} = {:8.2f} + {:8.2f} + {:8.2f} "
# .format(a, cMRP[0,i], cMRP[1,i], cMRP[2,i],profitRP[i],y[0]-cMRP[0,i],y[1]-cMRP[1,i],y[2]-cMRP[2,i],))
#print()
Explanation: Loan, repayment and PVU breakdown by periods as function of alpha
(to be completed...results below are from monopoly case)
End of explanation
print("Left: present discounted U (shaded = NP dominates). Right: net saving in each period as function of α :")
num_pts = 21
alphs = np.linspace(0,1,num_pts) # iterate over different alphas
HA = 10*(np.ones(num_pts)-alphs) # h(alpha)/alpha or cost of renegotiaton
cCRP = np.zeros((3,num_pts)) # to store (c0,c1,c2) for each alpha
pvu0RP = np.zeros(num_pts) #PVU0 when contracting with alpha=1 firm
pvu0_aut = cC.PVU(cC.y,cC.beta)*np.ones(num_pts)
fig, ax = plt.subplots(10,sharex=True)
numy0 = 3 # rows of subplots
ax = plt.subplot(numy0,2,1)
# Vary y contracts (maintaining PV at 300 in zero interest rate setting)
for j in range(1, numy0 + 1):
y0 = 100 + j*20
y = np.array([y0,100,100])
y = np.array([y0,(300-y0)/2,(300-y0)/2])
cC.y = y
pvu0_aut = cC.PVU(cC.y,cC.beta)*np.ones(num_pts)
ax1 = plt.subplot(numy0, 2, j*2-1, sharex=ax)
for i in range(0, num_pts):
cC.kappa = HA[i] # change reneg cost
cCRP[:,i] = cC.reneg_proof().x
cC.guess = cCRP[:,i] # store sol as guess for next search
pvu0RP[i] = cC.PVU(cCRP[:,i],cC.beta)
#last entry is 'pure profit' pvu0RP[-1]
#pvu0RP_pure = pvu0RP[-1]
pvu0RP_full = pvu0RP[-1]*np.ones(num_pts)
# I HAVE NOT YET AUTOMATED THE AXIS BOUNDS
pumin = min(pvu0RP[-1],min(pvu0_aut))
pumax = max(pvu0RP)
ax1.set_ylim([50.25, 50.6])
print(y,pumin,pumax,min(pvu0_aut),pvu0RP[-1])
print("cCF : ",cCF)
pvu0RP_full = pvu0RP[-1]*np.ones(num_pts)
ax1.set_title(r'$y=( %2.0f, %2.0f, %2.0f)$' %(y0,y[1],y[2]))
ax1.plot(alphs, pvu0_aut,label='aut')
ax1.plot(alphs, pvu0RP,label='NP')
ax1.plot(alphs, pvu0RP_full,label='FP')
ax1.fill_between(alphs, np.fmax(pvu0RP,pvu0_aut), pvu0_aut,hatch='/')
plt.grid()
ax2 = plt.subplot(numy0,2,j*2, sharex=ax, sharey=ax) # Plot contract terms in right column plot
#ax1.set_ylim([0, 25])
ax2.plot(alphs, y0 - cCRP[0,:],"d--",label='$y_0-c_0$')
ax2.plot(alphs, y[1] - cCRP[1,:],label='$y_1-c_1$')
ax2.plot(alphs, y[2] - cCRP[2,:],"x-",label='$y_2-c_2$')
#ax2.axhline(y=0, color ='k')
#ax2.plot(alphs, y[0]*np.ones(num_pts))
#ax2.plot(alphs, y[1]*np.ones(num_pts))
plt.grid()
ax1.legend(loc='lower center', fancybox=None, ncol=5)
ax2.legend(loc='lower center', fancybox=None, ncol=5)
plt.tight_layout()
plt.savefig('figs\Comp_excl.pdf', format='pdf')
plt.show()
plt.close('all')
Explanation: The inability to commit means the renegotiation proof contract doesn't smooth consumption very well for the consumer. This ends up hurting the bank, since they must now 'compensate' the consumer for the higher variance of consumption if the participation constraint is still to be met.
The code that follows produces a grid of subplots to illustrate how the results (the relation between $\alpha$ and retained profits) depends on the initial y vector, which in turn also determines whether this will be borrowing or saving.
The role of y
Gains to consumer with different firm types $\alpha$
Even though it earns zero profits a pure for-profit firm's renegotiation-proof contract will offer less consumption smoothing than a firm that, due to its non-profit status, has higher renegotiation costs.
NOTE: some parts of this script need manual adjustment
End of explanation
cC = Contract.Competitive(beta = 0.9)
cC.rho = 0.5
cC.y = [110,95,95]
cC.print_params()
def saving(c,y):
return c-y
print(cC.y)
print(cC.ownsmooth())
print(cC.fcommit())
saving(cC.ownsmooth(),cC.y)
PDV =300
y0_step = 2
y0_start = 50
y0_end = PDV -50
Y0 = range(y0_start,y0_end,y0_step)
n = len(Y0)
profity0 = np.zeros(n)
profity0B = np.zeros(n)
i=0
for y0 in Y0:
ybar = (PDV-y0)/2
cM.y =([y0,ybar,ybar])
cMF = cM.fcommit()
cMRP = cM.reneg_proof().x
cM.guess = cMRP
profity0[i] = cM.profit(cMRP,cM.y)
profity0B[i] = cM.profit(cMF,cM.y)
i += 1
plt.plot(profity0,Y0,'b-',label="reneg. proof")
plt.plot(profity0B,Y0,'r',label="full commit")
plt.xlim([-2,6])
plt.ylim([80,160])
plt.title("Profits as a function of y0")
plt.xlabel("profits $\pi$")
plt.ylabel("y0")
plt.legend(loc='center right')
plt.grid()
plt.axvline()
Explanation: INTERPRETATION: The left column of plots above shows renegotiation-proof profits as a function of $\alpha$ where $\alpha$ affects both the share of profits that are captured as well as the cost of renegotiation as described above. The blue shaded area indicates where commercial non-profits (that choose an $\alpha <1$) capture more profit than a pure for-profit.
The right column of plots shows the terms of the associated contract displayed as 'net savings' (y0-c0), (y1-c1), and (y2-c2). When these are positive the client is saving or repaying, when negative they are borrowing.
When we keep the PV of y constant but change the ratio of period zero income y0 to later period income, y vectors that lead to borrowing (lower y0, higher y1,y2) deliver higher full-commitment (and renegotiation-proof) profits at any level of alpha.
Since most of the profits are in the 0 to 1 period, they weigh more heavily in total profits. Turning non-profit is only attractive at relatively high values of alpha (since at lower alpha they're forfeiting the period 0-1 profits). At higher y0 (tilted more toward savings) full commitment (and renegotiation-proof) profits are lower. The pattern seems to be that as we move toward first period savings...
NOT FINISHED
Profitability as a function of y0
Own-savings strategies
End of explanation
cM = Contract.Monopoly(0.8)
cM.kappa =0
cM.guess = cMF
cMRP = cM.reneg_proof()
cMRP.x
plt.plot(alphs, C)
# Three subplots sharing both x/y axes
f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=True)
ax1.plot(alphs, profitRP)
ax1.plot(alphs, NprofitRP)
ax1.plot(alphs, cM.profit(cMF,y)*np.ones(num_pts))
ax1.grid(True)
ax1.set_title('Sharing both axes')
ax2.plot(alphs, NprofitRP)
ax3.plot(alphs, cM.profit(cMF,y)*np.ones(num_pts))
plt.show()
Explanation: Modifications when consumer has a home savings option
The above ana
End of explanation
cM = Contract.Monopoly(0.8) # create an instance m
num_pts = 21
betas = np.linspace(0.1,1,num_pts) # iterate over different values of beta
CMF = np.zeros((3,num_pts)) # a three row matrix to store (c0,c1,c2) for each beta
for i in range(0,num_pts):
cM.beta = betas[i] # change beta before recalculating optimal contract
CMF[:,i] = cM.fcommit()
loan = CMF[0,:] - cM.y[0] # save results for plotting
repay1 = CMF[1,:] - cM.y[1]
repay2 = CMF[2,:] - cM.y[2]
plt.plot(betas,loan,'--') # plot
plt.plot(betas,repay1)
plt.plot(betas,repay2)
plt.grid()
plt.title('Monopoly Commitment Contract as function of beta')
plt.xlabel('beta')
plt.ylabel('net repayment')
plt.legend(['loan','repay1','repay2'])
Explanation: Other Results
$\beta$ and loan size
Let's plot the relationship between period 0 loan size in a full-commitment contract and the extent of present-bias captured by $\beta$
End of explanation
cM.beta = 0.8 # Reset to beta = 0.8 case and print out other parameters
cM.print_params()
cMF = cM.fcommit()
cMr = cM.reneg(cMF)
y = cM.y
print('0-Discounted utility full commit: {0:4.3f}'.format(cM.PVU(cMF,cM.beta)))
print('and Naive renegotiate: {0:4.3f}'.format(cM.PVU(cMr,cM.beta)))
#print('Profits from full commit: {0:4.3f} and Naive renegotiate:{1:4.3f}'.format(cM.profit(y,cMF), cM.profit(y,cMr)))
Explanation: Example full commitment contract (and renegotiation with a naive consumer)
Here is an example of the full-commitment contract a monopolist offers and the contract a monopolist and a naive consumer would renegotiate to from that same full-commitment contract (but note that an intermediary who knows they are dealing with a naive consumer would bait them with a different initial contract).
End of explanation
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
cset = ax.contour(X, Y, Z)
ax.clabel(cset, fontsize=9, inline=1)
plt.show()
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.gca(projection='3d')
c0, c1 = np.arange(0,150,1), np.arange(0,150,1)
# Assumed completion of this unfinished scratch line: a surface on the PDV = 300 budget, c2 = 300 - c0 - c1
X, Y = np.meshgrid(c0, c1)
Z = 300 - X - Y
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
cset = ax.contour(X, Y, Z, zdir='z', offset=-100, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=-40, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=40, cmap=cm.coolwarm)
ax.set_xlabel('X')
ax.set_xlim(-40, 40)
ax.set_ylabel('Y')
ax.set_ylim(-40, 40)
ax.set_zlabel('Z')
ax.set_zlim(-100, 100)
plt.show()
Explanation: Scratch play area
3D plots
End of explanation
bb = np.arange(0,1,0.05)
bb
for rh in np.arange(0.2,2,0.2):
c0RP = 300/(1+bb+bb**(1/rh))
c0F = 300/(1+2*bb**(1/rh) )
rat = c0RP/c0F
plt.plot(bb,rat)
plt.annotate('{:3.1f}'.format(rh),xy=(0.5,rat[len(rat)//2]))
plt.title(r'Ratio $\frac{c_0^{RP}}{c_0^F}$')
plt.xlabel(r'$\beta $')
plt.show()
for rh in np.arange(0.2,2,0.2):
c0RP = 300/(1+bb+bb**(1/rh))
c0F = 300/(1+2*bb**(1/rh) )
rat = c0RP/c0F
plt.plot(bb,c0RP)
plt.annotate('{:3.1f}'.format(rh),xy=(0.5,c0RP[len(c0RP)//2]))
plt.title(r'consumption $c_0^{RP}$')
plt.xlabel(r'$\beta$')
plt.show()
len('BDDBCBDBDBCBAA BBAAABCBCCBCABBCABBBAAAADD')
# DBDDBCBDBDBCBAA BBAAABCBCCBCABBCABBBAAAADD   (stray unquoted line; commented out so the cell runs)
len('BDDBCCDCDBCBBAABDDAACCDBBBCABCDABBCAxACDA')
Explanation: Is $c_0$ (and hence net borrowing) higher or lower in renegotiation-proof contracts?
It's going to depend on $\rho$
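A quick way to see this, using the closed forms coded in the cell above: $c_0^{RP} = \frac{300}{1+\beta+\beta^{1/\rho}}$ and $c_0^{F} = \frac{300}{1+2\beta^{1/\rho}}$, so $c_0^{RP} < c_0^{F} \iff \beta > \beta^{1/\rho}$, which for $0 < \beta < 1$ holds exactly when $\rho < 1$. Hence the renegotiation-proof contract involves less period-0 consumption (and borrowing) than full commitment when $\rho < 1$, and more when $\rho > 1$.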
End of explanation |
4,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo of resizing
Run the cells one at a time. Initially you'll see a squashed up chart. The next cell will let you programatically set it's height.
NOTE
Step1: Set height of 'figure'
Step2: Set height on window resize
Step4: After running the following cell, the chart will not instantly change it's height. But if you change the size of your browser window, the chart will resize to a plot height of 400.
Step5: Responsive plots
Step7: After running the cell below, you will be able to resize the browser window and see the plot scale to fit it.
Step8: Note, in your useage, you will probably have a different way of getting the "containingDivWidth." You just need a way to get the width of the div that the plot is living in. In an ipython notebook, it always lives in an output_subarea, so we can just use that. | Python Code:
line = Line(index=[1990, 1991, 1993, 1994], values=[1, 2, 3, 4], height=100)
show(line)
print(line.ref['id'])
plot_height = 300
HTML("<script>Bokeh.index['%s'].model.set('plot_height', %d);</script>" % (line.ref['id'], plot_height))
Explanation: Demo of resizing
Run the cells one at a time. Initially you'll see a squashed-up chart. The next cell will let you programmatically set its height.
NOTE: This does not work on vplot, hplot, or grid plot. Currently you can only resize individual plots.
Set height of 'chart'
End of explanation
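The same JavaScript call can be wrapped in a small helper so it isn't repeated for every plot. This is only a sketch that reuses the Bokeh.index pattern from the cell above:
def set_plot_height(ref_id, height):
    # ref_id is the value of <plot>.ref['id'] printed above
    return HTML("<script>Bokeh.index['%s'].model.set('plot_height', %d);</script>" % (ref_id, height))

# e.g. set_plot_height(line.ref['id'], 300)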
p = figure(height=100)
p.line(x=[1990, 1991, 1993, 1994], y=[1, 2, 3, 4])
show(p)
print(p.ref['id'])
plot_height = 200
HTML("<script>Bokeh.index['%s'].model.set('plot_height', %d);</script>" % (p.ref['id'], plot_height))
Explanation: Set height of 'figure'
End of explanation
line2 = Line(index=[1990, 1991, 1993, 1994], values=[1, 2, 3, 4], height=100)
show(line2)
print(line2.ref['id'])
Explanation: Set height on window resize
End of explanation
HTML("""
<script>
Bokeh.$(window).resize(
    function(){
        Bokeh.index["%s"].model.set('plot_height', 400);
    }
);
</script>
""" % (line2.ref['id']))
Explanation: After running the following cell, the chart will not instantly change its height. But if you change the size of your browser window, the chart will resize to a plot height of 400.
End of explanation
responsive = Line(index=[1990, 1991, 1993, 1994], values=[1, 2, 3, 4], height=100, width=200)
show(responsive)
print(responsive.ref['id'])
Explanation: Responsive plots
End of explanation
HTML("""
<script>
function resize_plot() {
    var containingDivWidth = Bokeh.$('.output_subarea').last().width(),
        plot = Bokeh.index["%s"].model,
        curWidth = plot.get('plot_width'),
        curHeight = plot.get('plot_height');
    aspectRatio = curWidth / curHeight;
    plotWidth = Math.max(containingDivWidth, 300); // This prevents the chart from getting too small.
    plotHeight = parseInt(plotWidth / aspectRatio);
    plot.set('plot_width', plotWidth);
    plot.set('plot_height', plotHeight);
}
Bokeh.$(window).resize(resize_plot);
resize_plot();
</script>
""" % (responsive.ref['id']))
Explanation: After running the cell below, you will be able to resize the browser window and see the plot scale to fit it.
End of explanation
# A little kicker to get nbviewer to run resize_plot
HTML("<script>Bokeh.$(window).ready(resize_plot);</script>")
Explanation: Note: in your usage, you will probably have a different way of getting the "containingDivWidth." You just need a way to get the width of the div that the plot is living in. In an IPython notebook, it always lives in an output_subarea, so we can just use that.
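For example, if the plot lives in a div you control, the selector used in resize_plot can be swapped for that div. In this sketch '#my-plot-div' is a made-up id:
HTML("""
<script>
// Sketch: measure a caller-chosen container instead of '.output_subarea'
var containingDivWidth = Bokeh.$('#my-plot-div').width() || Bokeh.$('.output_subarea').last().width();
</script>
""")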
End of explanation |
4,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Martín Noblía
Tp3
<img src="files/copy_left.png" style="float
Step2: Recordemos que el espacio de trabajo alcanzable es la región espacial a la que el efector final puede llegar, con al menos una orientación. Vamos a desarrollar primero la cinemática directa(como en el tp2) para luego evaluar variando los angulos de articulaciones en los rangos dados y asi obtener el espacio de trabajo alcanzable.
Step3: Ejercicio 2
En el manipulador 2R de la figura siguiente, $L_{1}=2L_{2}$ y los rangos límites para las juntas son
Step5: Sabemos que los puntos $(x,y)$ de la trama {3} los podemos obtener facilmente en función de los ángulos $\theta_{1}$ y $\theta_{2}$. Vamos a implementar la parametrización en la siguiente función
Step7: Ejercicio 3
Utilizando la substitución geométrica ‘tangente del semiángulo’, convertir la ecuación trascendental
Step8: Vamos a crear las matrices del enunciado para poder evaluarlas, además tenemos que tener en cuenta que las transformaciones que nos dan en el enunciado son las que van de la trama base a la Herramienta, por ello debemos transformarla para que nos quede $^{0}{3}T = (^{0}{H}T)(^{H}_{3}T)^{-1}$
Donde
Step9: Ahora vamos a realizar una funcion para verificar circularmente los resultados.
Primero generamos la cinemática directa como en el tp2 simbólicamente y luego la convertimos a numérica gracias a las bondades del lenguaje. | Python Code:
from IPython.core.display import Image
Image(filename='Imagenes/copy_left.png')
Image(filename='Imagenes/dibujo_robot2_tp2.png')
#imports
from sympy import *
import numpy as np
#Con esto las salidas van a ser en LaTeX
init_printing(use_latex=True)
Explanation: Martín Noblía
Tp3
<img src="files/copy_left.png" style="float: left;"/>
<div style="clear: both;">
##Control de Robots 2013
###Ingeniería en Automatización y Control
###Universidad Nacional de Quilmes
##Exercise 1
#### Determine the reachable workspace for the 3-link manipulator in the following figure, with $L_1=15.0$ (cm), $L_2=10.0$ (cm), $L_3=3.0$ (cm), $0º < \theta_{1} < 360º$, $0º < \theta_{2} < 180º$, $0º < \theta_{3} < 180º$
End of explanation
#Funcion simbólica para una rotación(transformacion homogenea) sobre el eje X
def Rot_X(angle):
rad = angle*pi/180
M = Matrix([[1,0,0,0],[ 0,cos(rad),-sin(rad),0],[0,sin(rad), cos(rad),0],[0,0,0,1]])
return M
#Funcion simbólica para una rotación(transformacion homogenea) sobre el eje Y
def Rot_Y(angle):
rad = angle*pi/180
M = Matrix([[cos(rad),0,sin(rad),0],[ 0,1,0,0],[-sin(rad), 0,cos(rad),0],[0,0,0,1]])
return M
#Funcion simbólica para una rotación(transformacion homogenea) sobre el eje Z
def Rot_Z(angle):
rad = angle*pi/180
M = Matrix([[cos(rad),- sin(rad),0,0],[ sin(rad), cos(rad), 0,0],[0,0,1,0],[0,0,0,1]])
return M
#Funcion simbolica para una traslacion en el eje X
def Traslacion_X(num):
D = Matrix([[1,0,0,num],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
return D
#Funcion simbolica para una traslacion en el eje Y
def Traslacion_Y(num):
D = Matrix([[1,0,0,0],[0,1,0,num],[0,0,1,0],[0,0,0,1]])
return D
#Funcion simbolica para una traslacion en el eje Z
def Traslacion_Z(num):
D = Matrix([[1,0,0,0],[0,1,0,0],[0,0,1,num],[0,0,0,1]])
return D
#estos son simbolos especiales que los toma como letras griegas directamente(muuy groso)
alpha, beta , gamma, phi, theta, a, d =symbols('alpha beta gamma phi theta a d')
#Generamos la transformacion
T = Rot_X(alpha) * Traslacion_X(a) * Rot_Z(theta) * Traslacion_Z(d)
T
#Creamos los nuevos simbolos
theta_1, theta_2, theta_3, L_1, L_2, L_3 =symbols('theta_1, theta_2, theta_3, L_1, L_2 L_3')
T_0_1 = T.subs([(alpha,0),(a,0),(d,0),(theta,theta_1)])
T_0_1
T_1_2 = T.subs([(alpha,90),(a,L_1),(d,0),(theta,theta_2)])
T_1_2
T_2_3 = T.subs([(alpha,0),(a,L_2),(d,0),(theta,theta_3)])
T_2_3
#Agregamos la ultima trama
T_w = Matrix([[1,0,0,L_3],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
T_w
T_B_W = T_0_1 * T_1_2 * T_2_3 * T_w
T_B_W.simplify()
T_B_W
T_real = T_B_W.subs([(L_1,15),(L_2,10),(L_3,3)])
T_real
#generamos una funcion numerica() a partir de la expresion simbolica
func = lambdify((theta_1,theta_2,theta_3),T_real,'numpy')
#verificamos si funciona
func(10,30,10)
def get_position(q_1,q_2,q_3):
    """
    Extract the Cartesian (x, y, z) position from the homogeneous transformation
    given by the forward kinematics of the spatial RRR manipulator (see exercise 2, TP2).
    Inputs:  q_1, q_2, q_3 -- the joint angles of links 1, 2 and 3
    Outputs: x, y, z -- the Cartesian position of the end effector
    """
M = func(q_1,q_2,q_3)
arr = np.asarray(M)
x = arr[0,3]
y = arr[1,3]
z = arr[2,3]
return x,y,z
#probamos si funciona
L=get_position(10,10,10)
L
%pylab inline
plt.rcParams['figure.figsize'] = 12,10
import mpl_toolkits.mplot3d.axes3d as axes3d
#TODO vectorizar
fig, ax = plt.subplots(subplot_kw=dict(projection='3d'))
#generamos los rangos de los angulos y evaluamos su posicion cartesiana
for i in xrange(0,360,8):
for j in xrange(0,180,8):
for k in xrange(0,180,8):
x,y,z = get_position(i,j,k)
ax.scatter(x,y,z,alpha=.2)
ax.view_init(elev=10., azim=10.)
plt.title('Espacio de trabajo alcanzable',fontsize=17)
plt.xlabel(r'$x$',fontsize=17)
plt.ylabel(r'$y$',fontsize=17)
#ax.set_aspect('equal')
plt.show()
#TODO vectorizar
fig, ax = plt.subplots(subplot_kw=dict(projection='3d'))
#generamos los rangos de los angulos y evaluamos su posicion cartesiana
for i in xrange(0,360,8):
for j in xrange(0,180,8):
for k in xrange(0,180,8):
x,y,z = get_position(i,j,k)
ax.scatter(x,y,z,alpha=.2)
#ax.view_init(elev=10., azim=10.)
plt.title('Espacio de trabajo alcanzable',fontsize=17)
plt.xlabel(r'$x$',fontsize=17)
plt.ylabel(r'$y$',fontsize=17)
#ax.set_aspect('equal')
plt.show()
Explanation: Recall that the reachable workspace is the region of space that the end effector can reach with at least one orientation. We first develop the forward kinematics (as in TP2) and then evaluate it while sweeping the joint angles over the given ranges to obtain the reachable workspace.
End of explanation
Image(filename='Imagenes/robot2_tp3.png')
Explanation: Exercise 2
In the 2R manipulator of the following figure, $L_{1}=2L_{2}$ and the joint limits are: $0º < \theta_{1} < 180º$, $-90º < \theta_{2} < 180º$. Determine the reachable workspace.
End of explanation
def brazo_RR(theta_1, theta_2, L_1, L_2):
    """
    Cartesian position of the end effector of a planar RR arm.
    Inputs:  theta_1, theta_2 -- joint angles; L_1, L_2 -- link lengths
    Outputs: x, y -- Cartesian position of the end effector
    """
x = L_1 * np.cos(theta_1) + L_2 * np.cos(theta_1 + theta_2)
y = L_1 * np.sin(theta_1) + L_2 * np.sin(theta_1 + theta_2)
return x, y
theta_1_vec = np.linspace(0,np.pi,100) #vector de 100 muestras en el intervalo[0,pi]
theta_2_vec = np.linspace(-np.pi/2,np.pi,100) #vector de 100 muestras en el intervalo[-pi/2,pi]
#evaluamos a la funcion con varias combinaciones de vectores
x,y = brazo_RR(theta_1_vec,theta_2_vec,2,1)
x1,y1 = brazo_RR(0,theta_2_vec,2,1)
x2,y2 = brazo_RR(theta_1_vec,0,2,1)
#evaluamos con puntos aleatorios del rango
x3,y3 = brazo_RR(np.random.choice(theta_1_vec,2000),np.random.choice(theta_2_vec,2000),2,1)
plt.plot(x,y , 'ro')
plt.plot(x1,y1 , 'go')
plt.plot(x2,y2, 'ko')
plt.plot(x3,y3,'yo')
plt.title('Espacio de trabajo alcanzable',fontsize=20)
plt.axis([-4,4,-2,5])
plt.legend([r'$0 < \theta_{1} < \pi$ ; $ -\pi/2 < \theta_{2} < \pi$ ',r'$\theta_{1}=0$ ; $ -\pi/2 < \theta_{2} < \pi$ ',r'$0 < \theta_{1} < \pi$ ; $\theta_{2}=0$','random'],fontsize=17)
plt.grid()
plt.show()
Explanation: We know that the $(x,y)$ coordinates of frame {3} can easily be obtained as a function of the angles $\theta_{1}$ and $\theta_{2}$. We implement this parametrization in the following function:
End of explanation
def inverse_kin(T, L_1, L_2):
    """
    Solve the inverse kinematics of a planar RRR manipulator.
    Inputs:  T -- homogeneous transformation matrix; L_1, L_2 -- link lengths
    Outputs: a 6-tuple with the joint angles (in degrees) of the two configurations,
             elbow-up followed by elbow-down:
             (theta_1_up, theta_2_up, theta_3_up, theta_1_down, theta_2_down, theta_3_down)
    """
x = T[0,3]
y = T[1,3]
#calculamos si el punto es alcanzable
es_alc = (x**2 + y**2 - L_1**2 - L_2**2)/(2*L_1*L_2)
if (-1 <= es_alc <= 1):
print 'es alcanzable'
c_2 = es_alc
#Hay dos soluciones para elegir
s_2_elbow_up = np.sqrt(1-c_2**2)
s_2_elbow_down = -np.sqrt(1-c_2**2)
theta_2_up = np.arctan2(s_2_elbow_up,c_2)
theta_2_down = np.arctan2(s_2_elbow_down,c_2)
#cambio de variables
k_1 = L_1 + L_2*c_2
k_2_up = L_2*s_2_elbow_up
k_2_down = L_2*s_2_elbow_down
gamma_up = np.arctan2(k_2_up,k_1)
gamma_down = np.arctan2(k_2_down,k_1)
r_up = np.sqrt(k_1**2+k_2_up**2)
r_down= np.sqrt(k_1**2+k_2_down**2)
k_1_1 = r_up*np.cos(gamma_up)
k_1_2 = r_down*np.cos(gamma_down)
k_2_1 = r_up*np.sin(gamma_up)
k_2_2 = r_down*np.sin(gamma_down)
theta_1_up = np.arctan2(y,x) - np.arctan2(k_2_1,k_1_1)
theta_1_down = np.arctan2(y,x) - np.arctan2(k_2_2,k_1_2)
c_phi = T[0,0]
s_phi = T[1,0]
phi = np.arctan2(s_phi,c_phi)
theta_3_up = phi - theta_1_up - theta_2_up
theta_3_down = phi - theta_1_down - theta_2_down
fac = 180/np.pi #para pasar a grados
return theta_1_up*fac,theta_2_up*fac,theta_3_up*fac,theta_1_down*fac,theta_2_down*fac,theta_3_down*fac
else:
print 'No es alcanzable'
Explanation: Ejercicio 3
Utilizando la substitución geométrica ‘tangente del semiángulo’, convertir la ecuación trascendental: $acos(\theta)+bsin(\theta)=c$, esto es hallar $\theta$ en función de $a$, $b$ y $c$
La sustitución por la tangente de semiangulo es la siguiente:
$u=tg(\frac{\theta}{2})$
$cos(\theta)=\frac{1-u^{2}}{1+u^{2}}$
$sin(\theta)=\frac{2u}{1+u^2}$
para nuetro caso sustituimos en la ecuación trascendental $acos(\theta)+bsin(\theta)=c$ las expresiones de $cos(\theta)$ y $sin(\theta)$
$a(\frac{1-u^{2}}{1+u^{2}})+b(\frac{2u}{1+u^2})=c$ entonces
$a(1-u^{2})+b(2u)=c(1+u^{2})$, luego expresamos la ecuación como un polinomio en $u$
$u^{2}(a+c)-2bu+c-a=0$ el siguiente paso es resolver la cuadrática:
$u= \frac{b \pm \sqrt{b^{2}+a^{2}-c^{2}}}{a+c}$
Por lo tanto:
$\theta=2tg^{-1}(\frac{b \pm \sqrt{b^{2}+a^{2}-c^{2}}}{a+c})$
Ejercicio 4
Derive la cinemática inversa del robot RRR del ejercicio 2 de la práctica 2.
Si la transformación $^{S}{W}T$ esta dada entonces hacemos: $^{B}{W}T = (^{B}{S}T)(^{S}{T}T)(^{W}_{T}T^{-1})$
y como $^{B}{W}T = (^{0}{3}T)$, podemos escribir:
$$^{0}{3}T = \begin{bmatrix}
r{11} & r_{12} & r_{13} & x \\
r_{21} & r_{22} & r_{23} & y \\
r_{31} & r_{32} & r_{33} & z \\
0 & 0 & 0 & 1
\end{bmatrix}$$
Además como sabemos del ejercicio 2 de la practica 2:
$$^{0}{3}T = \begin{bmatrix}
c{1}c_{23} & -c_{1}c_{23} & s_{1} & c_{1}(c_{2}L_{2}+L_{1}) \\
s_{1}c_{23} & -s_{1}s_{23} & -c_{1} & s_{1}(c_{2}L_{2}+L_{1}) \\
s_{23} & c_{23} & 0 & s_{2}L_{2} \\
0 & 0 & 0 & 1
\end{bmatrix}$$
luego igualamos las componentes $(1,3)$ de ambas matrices, entonces:
$s_{1}=r_{13}$
luego igualamos los elementos $(2,3)$, entonces:
$-c_{1}=r_{23}$, como vemos podemos estimar el valor de $\theta_{1}$ como:
$\theta_{1}=Atan2(r_{13},-r_{23})$
Continuamos igualando los elementos $(1,4)$ y $(2,4)$:
$x=c_{1}(c_{2}L_{2}+L_{1})$
$y=s_{1}(c_{2}L_{2}+L_{1})$
Entonces vemos que si $c_{1} \neq 0$ $\therefore$ $c_{2}=\frac{1}{L_{2}}(\frac{x}{c_{1}}-L_{1})$
o $c_{2}=\frac{1}{L_{2}}(\frac{y}{s_{1}}-L_{1})$
Luego igualando los elementos $(3,4)$ $z=s_{2}L_{2}$ $\therefore$ $\theta_{2}=Atan2(\frac{z}{L_{2}};c_{2})$
Por último igualando los elementos $(3,1)$ y $(3,2)$
$s_{23}=r_{31}$
$c_{23}=r_{32}$
Por lo tanto:
$\theta_{3}=Atan2(r_{31};r_{32})-\theta_{2}$
Ejercicio 5
Este ejercicio se enfoca en la solución de la cinemática de planteamiento inverso para el robot planar 3-DOF(tres grados de libertad)(ver ejercicio 1 ). Se proporcionan los siguientes parámetros de longitud fija: $L_1=4$, $L_2=3$, $L_3=2$
a) Derive en forma analítica y a mano, la solución de planteamiento inverso para este robot. Dado $(^{0}{H}T)$, calcule todas las múltiples soluciones posibles para $[ \theta{1},\theta_{2},\theta_{3} ]$
b) Desarrolle un programa para resolver por completo este problema de cinemática de planteamiento inverso para el robot $3R$ planar (es decir, proporcione todas las múltiples soluciones). Pruebe su programa utilizando los siguientes casos de entrada:
i) $$^{0}{H}T = \begin{bmatrix}
1 & 0 & 0 & 9 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$
ii) $$^{0}{H}T = \begin{bmatrix}
0.5 & -0.866 & 0 & 7.5373 \\
0.866 & 0.6 & 0 & 3.9266 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$
iii)$$^{0}_{H}T = \begin{bmatrix}
0 & 1 & 0 & -3 \\
-1 & 0 & 0 & 2 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$
iv)$$^{0}_{H}T = \begin{bmatrix}
0.866 & 0.5 & 0 & -3.1245 \\
-0.5 & 0.866 & 0 & 9.1674 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$
Para todos los casos emplee una comprobación circular para validar sus resultados: introduzca cada conjunto de ángulos de articulación(para cada una de las múltiples soluciones ) de vuelta en el programa de planteamiento directo para demostrar que obtiene las matrices $^{0}_{H}T$
a)
Como sabemos las ecuaciones cinemáticas de este brazo son :
$$(^{B}{W}T) = (^{0}{3}T) = \begin{bmatrix}
c_{123} & -s_{123} & 0 & L_{1}c_{1}+L_{2}c_{12} \\
s_{123} & c_{123} & 0 & L_{1}s_{1}+L_{2}s_{12} \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$
Vamos a suponer una configuración genérica del brazo relativa a la trama base, la cual es $(^{B}_{W}T)$. Como estamos trabajando con un manipulador planar, puede lograrse especificando tres números $[x,y,\phi]$, en donde $\phi$ es la orientación del vínculo 3 en el plano(relativo al eje $\hat{X}$). Por ello nuestra transformación genérica es:
$$(^{B}{W}T) = \begin{bmatrix}
c{\phi} & -s_{\phi} & 0 & x \\
s_{\phi} & c_{\phi} & 0 & y \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$
Todos los destinos alcanzables deben encontrarse en el subespacio implicado por la estructura de la ecuación anterior. Si igualamos las dos matrices llegamos a las siguientes ecuaciones:
$c_{\phi}=c_{123}$
$s_{\phi}=s_{123}$
$x = L_{1}c_{1}+L_{2}c_{12}$
$y = L_{1}s_{1}+ L_{2}s_{12}$
Asi si elevamos al cuadrado las últimas dos ecuaciones y las sumamos:
$x^{2}+y^{2}=L_{1}^{2}+L_{2}^{2}+2L_{1}L_{2}c_{2}$
despejando $c_{2}$
$c_{2}=\frac{x^{2}+y^{2}-L_{1}^{2}-L_{2}^{2}}{2L_{1}L_{2}}$
Entonces, vemos que para que pueda existir una solución el lado derecho de la ecuación anterior debe estar en el intervalo $[-1,1]$
Luego suponiendo que se cumple esa condición, podemos hallar el valor del $s_{2}$ como:
$s_{2}=\pm \sqrt{1-c_{2}^{2}}$
Por último calculamos $\theta_{2}$ con la rutina de arco tangente de dos argumentos:
$\theta_{2}=Atan2(s_{2},c_{2})$
Dependiendo que signo hallamos elegido en la ecuación del $s_{2}$ corresponderá a una de las dos suluciones múltiples "codo hacia arriba" o "codo hacia abajo"
Luego podemos resolver para $\theta_{1}$ de la siguiente manera:
sean :
$x=k_{1}c_{1}-k_{2}s_{1}$
$y=k_{1}s_{1}+k_{2}c_{1}$
en donde:
$k_{1}=L_{1}+L_{2}c_{2}$
$k_{2}=L_{2}s_{2}$
si llamamos $r=\pm \sqrt{k_{1}^{2}+k_{2}^{2}}$ y a $\gamma = Atan2(k_{2},k_{1})$ entonces podemos escribir:
$\frac{x}{r}=cos(\gamma)cos(\theta_{1})-sin(\gamma)sin(\theta_{1})$
$\frac{y}{r}=cos(\gamma)sin(\theta_{1})+sin(\gamma)cos(\theta_{1})$
por lo tanto:
$cos(\gamma+\theta_{1})=\frac{x}{r}$
$sin(\gamma+\theta_{1})=\frac{y}{r}$
Usando el arreglo de dos elementos:
$\gamma + \theta_{1}= Atan2(\frac{y}{r},\frac{x}{r})=Atan2(k_{2},k_{1})$
y por lo tanto :
$\theta_{1}= Atan2(y,x)-Atan2(k_{2},k_{1})$
Finalmente podemos resolver para la suma de $\theta_{1}$ a $\theta_{3}$
$\theta_{1}+\theta_{2}+\theta_{3}=Atan2(s_{\phi},c_{\phi})=\phi$
De este último resultado podemos despejar $\theta_{3}$ ya que conocemos el valor de los otros ángulos.
b)
A continuación desarrolamos una implementación que resuelve la cinemática inversa anterior
End of explanation
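A quick numerical check of the half-angle solution above (a standalone sketch, not part of the original assignment code):
# Verify theta = 2*atan((b + sqrt(b^2 + a^2 - c^2))/(a + c)) solves a*cos(theta) + b*sin(theta) = c.
# Names end in '_' so we do not clobber the sympy symbols a, d, theta defined earlier in this notebook.
a_, b_, c_ = 2.0, 1.0, 1.5
u_ = (b_ + np.sqrt(b_**2 + a_**2 - c_**2)) / (a_ + c_)
th_ = 2*np.arctan(u_)
print(a_*np.cos(th_) + b_*np.sin(th_))   # ~1.5, i.e. equal to c_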
#Matriz de transformacion del insiso i
T_0_H_i = np.array([[1,0,0,9],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
T_0_H_i
#Matriz de la transformacion de la trama 3 a la herramienta
T_H_3 = np.array([[1,0,0,2],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
T_H_3
#Inversa de la matriz que representa la transformacion de la trama 3 a la herramienta
T_3_H=np.linalg.inv(T_H_3)
T_3_H
#Obtenemos la transformacion que necesitamos
T_0_3_i=np.dot(T_0_H_i,T_3_H)
T_0_3_i
#calculamos los angulos(deberia dar cero(ya que en x tenemos la suma de los L_1, L_2))
angulos_i = inverse_kin(T_0_3_i,4,3)
angulos_i
#punto ii)
#Cargamos la matriz y repetimos el procedimiento anterior
T_0_H_ii = np.array([[.5,-0.866,0,7.5373],[0.866,0.6,0,3.9266],[0,0,1,0],[0,0,0,1]])
T_0_H_ii
T_0_3_ii = np.dot(T_0_H_ii,T_3_H)
T_0_3_ii
#Calculamos los angulos (son 6 tres para una configuracion y tres para la otra)
angulos_ii = inverse_kin(T_0_3_ii,4,3)
angulos_ii
#punto iii)
#Cargamos la matriz y repetimos el procedimiento anterior
T_0_H_iii = np.array([[0,1,0,-3],[-1,0,0,2],[0,0,1,0],[0,0,0,1]])
T_0_H_iii
T_0_3_iii = np.dot(T_0_H_iii,T_3_H)
T_0_3_iii
#Calculamos los angulos (son 6 tres para una configuracion y tres para la otra)
angulos_iii = inverse_kin(T_0_3_iii,4,3)
angulos_iii
#punto iv)
#Cargamos la matriz y repetimos el procedimiento anterior
T_0_H_iv = np.array([[0.866,.5,0,-3.1245],[-.5,0.866,0,9.1674],[0,0,1,0],[0,0,0,1]])
T_0_H_iv
T_0_3_iv = np.dot(T_0_H_iv,T_3_H)
T_0_3_iv
angulos_iv = inverse_kin(T_0_3_iv,4,3)
angulos_iv
Explanation: We now create the matrices from the problem statement so that we can evaluate them. Note that the transformations given in the statement go from the base frame to the tool, so we must convert them so that $^{0}_{3}T = (^{0}_{H}T)(^{H}_{3}T)^{-1}$
Where:
$$^{H}_{3}T = \begin{bmatrix}
1 & 0 & 0 & 2 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$
End of explanation
#Generamos la cinematica directa del brazo planar RRR
T_0_1 = T.subs([(alpha,0),(a,0),(d,0),(theta,theta_1)])
T_1_2 = T.subs([(alpha,0),(a,L_1),(d,0),(theta,theta_2)])
T_2_3 = T.subs([(alpha,0),(a,L_2),(d,0),(theta,theta_3)])
T_0_3 = T_0_1 * T_1_2 * T_2_3
#Reemplazamos los valores de longitudes de link
T_0_3_real = T_0_3.subs([(L_1,4),(L_2,3)])
#generamos una funcion numerica a partir de la simbolica
func_kin = lambdify((theta_1,theta_2,theta_3),T_0_3_real,'numpy')
#evaluamos para los primeros tres elementos de la tupla que contiene los angulos de una configuracion
func_kin(angulos_i[0],angulos_i[1],angulos_i[2])
#evaluamos para los ultimos tres elementos de la tupla que contiene los angulos de una configuracion
func_kin(angulos_i[3],angulos_i[4],angulos_i[5])
#evaluamos para los primeros tres elementos de la tupla que contiene los angulos de una configuracion
func_kin(angulos_ii[0],angulos_ii[1],angulos_ii[2])
#evaluamos para los ultimos tres elementos de la tupla que contiene los angulos de una configuracion
func_kin(angulos_ii[3],angulos_ii[4],angulos_ii[5])
#evaluamos para los primeros tres elementos de la tupla que contiene los angulos de una configuracion
func_kin(angulos_iii[0],angulos_iii[1],angulos_iii[2])
#evaluamos para los ultimos tres elementos de la tupla que contiene los angulos de una configuracion
func_kin(angulos_iii[3],angulos_iii[4],angulos_iii[5])
Explanation: Now we write a function to verify the results in a round-trip fashion.
First we generate the forward kinematics symbolically, as in TP2, and then convert it to a numeric function thanks to the flexibility of the language.
End of explanation |
4,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To do for 07282017
Step1: Try TSNE and time it
It turns out that TSNE is too time consuming even for small set of data. It is also because of how I transformed the data. Thus, in the PCA, I used list in the beginning and then transform all data into numpy array at once, which is much faster.
Step2: Try PCA instead
PCA looks resonable. We can process 300k data around 30 secs if it does not blow up my RAM. I will proceed with this setting for first try
Step3: Append all view_items for PCA processing
Step4: Append all buy_items for PCA processing
Step5: Save the file for further processing | Python Code:
import pandas as pd
import numpy as np
import os
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
os.chdir('/Users/Walkon302/Desktop/deep-learning-models-master/view2buy')
# Read the preprocessed file, containing the user profile and item features from view2buy folder
df = pd.read_pickle('user_fea_for_eval.pkl')
# Drop the first column, which is the original data format.
df.drop('0', axis = 1, inplace = True)
# Check the data
df.head()
# Slice the data into 100k items
df = df.iloc[0:100000, :]
# Calculate the average view sec for all view items per user
avg_view_sec = pd.DataFrame(df.groupby(['user_id', 'buy_spu'])['view_secondes'].mean())
# Reset the index and rename the column
avg_view_sec.reset_index(inplace=True)
avg_view_sec.rename(columns = {'view_secondes':'avg_view_sec'}, inplace=True)
# Check the data
avg_view_sec.head()
# Merge avg item view into data
df = pd.merge(df, avg_view_sec, on=['user_id', 'buy_spu'])
# Calculate the weights for view item vec
df['weight_of_view'] = df['view_secondes']/df['avg_view_sec']
df.head()
# Generate view_item_vec and buy_item_vec
view_item_vec = df['view_features']
buy_item_vec = df['buy_features']
print 'view_item', len(view_item_vec), 'buy_item', len(buy_item_vec)
Explanation: To do for 07282017:
User_filter:
Filter user based on certain features, e.g.,
consistent with theme, certain time of viewing,
or certain time interval before each item viewing.
Recommendation core:
It will basically be the collaborative filter (CF),
but instead of using real items, I'd like to use
features extracted from CNN and dimension-reduced
by tSNE to maybe 20 D.
Processor:
Input are
a. log of user history
b. item features
Output are
a. Top N rank of recommendation item for each user
Evaluator:
Evaluate whether the user buy the item within the top
N rank of recommended items.
After trial run:
tSNE for this amount of sample and the dimension we want may not be feasible. Need to try small portion and time it or try PCA instead
End of explanation
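As a sketch of the evaluator described above (column names like the weighted PCA vectors refer to data built later in this notebook; the top-N cutoff and the cosine-similarity choice are assumptions):
# Sketch of the planned Top-N evaluation using cosine similarity on PCA features
from sklearn.metrics.pairwise import cosine_similarity

def buy_in_top_n(user_view_vecs, user_buy_vec, all_buy_vecs, n=20):
    # user_view_vecs: (num_views, d) weighted PCA vectors of viewed items
    # user_buy_vec:   (d,) PCA vector of the item actually bought
    # all_buy_vecs:   (num_items, d) candidate item vectors
    profile = np.mean(user_view_vecs, axis=0, keepdims=True)
    sims = cosine_similarity(profile, all_buy_vecs)[0]
    top_n = np.argsort(sims)[::-1][:n]
    target = np.argmax(cosine_similarity(user_buy_vec.reshape(1, -1), all_buy_vecs)[0])
    return target in top_n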
# Generate TSNE model
model = TSNE(n_components=10, random_state=0)
# Time the tSNE with 250 samples
%%time
a = pd.DataFrame()
for i, j in enumerate(view_item_vec.iloc[0:250]):
a = pd.concat([a, pd.DataFrame(j).transpose()], axis = 0)
vt = model.fit_transform(a)
# Time the tSNE with 500 samples
%%time
a = pd.DataFrame()
for i, j in enumerate(view_item_vec.iloc[0:500]):
a = pd.concat([a, pd.DataFrame(j).transpose()], axis = 0)
vt = model.fit_transform(a)
# Time the tSNE with 1000 samples
%%time
a = pd.DataFrame()
for i, j in enumerate(view_item_vec.iloc[0:1000]):
a = pd.concat([a, pd.DataFrame(j).transpose()], axis = 0)
vt = model.fit_transform(a)
Explanation: Try TSNE and time it
It turns out that TSNE is too time consuming even for a small set of data. This is also partly due to how I transformed the data: for the PCA run below I build a Python list first and then convert all the data into a numpy array at once, which is much faster.
End of explanation
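For reference, these are the two patterns being compared (the second is what the PCA cells below use):
# Slow: growing a DataFrame row by row forces repeated copying
# a = pd.DataFrame()
# for j in view_item_vec.iloc[0:1000]:
#     a = pd.concat([a, pd.DataFrame(j).transpose()], axis=0)

# Faster: collect the rows in a Python list, convert once at the end
rows = [j for j in view_item_vec.iloc[0:1000]]
arr = np.array(rows)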
# Generate PCA model
model = PCA(n_components=200, random_state=0)
Explanation: Try PCA instead
PCA looks reasonable. We can process 300k items in around 30 seconds, provided it does not blow up my RAM. I will proceed with this setting for the first try.
End of explanation
%%time
view_item = []
for i in view_item_vec:
view_item.append(i)
view_item= np.array(view_item)
%%time
pca_view_vec = model.fit_transform(view_item)
# 200 dimensions of PCA can explain 85% of variables. Beyond that, e.g., 300 D, my computer will run out of memory (8g)
sum(model.explained_variance_ratio_)
Explanation: Append all view_items for PCA processing
End of explanation
%%time
buy_item = []
for i in buy_item_vec:
buy_item.append(i)
buy_item= np.array(buy_item)
%%time
pca_buy_vec = model.fit_transform(buy_item)
# Insert the PCA results into the dataframe
df['pca_view'] = pca_view_vec.tolist()
df['pca_buy'] = pca_buy_vec.tolist()
# Check the data
df.head()
df = pd.read_pickle('df_weighted.pkl')
# Calculate the weighted pca_view
df['weighted_view_pca'] = df.apply(lambda x: [y*x['weight_of_view'] for y in x['pca_view']], axis=1)
# Calculate the weighted pca_buy
df['weighted_buy_pca'] = df.apply(lambda x: [y*x['weight_of_view'] for y in x['pca_buy']], axis=1)
# Check the data
df.head()
Explanation: Append all buy_items for PCA processing
End of explanation
df.to_pickle('top100k_user_pca.pkl')
Explanation: Save the file for further processing
End of explanation |
4,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hidden Markov Models
author
Step1: Note
Step2: This seems far more reasonable. There is a single CG island surrounded by background sequence, and something at the end. If we knew that CG islands cannot occur at the end of sequences, we need only modify the underlying structure of the HMM in order to say that the sequence must end from the background state.
Step3: Looks like we managed to get rid of that pesky end (again, the numbers may have flipped, look at the indices). Modifying transition probabilities and using non-dense graphical structures are two major ways in which HMMs account for data in a sequence not being independent and identically distributed (i.i.d.). In fact, in most applications, the graphical structure of a HMM is very sparse.
If we want a more probabilistic view of what's going on, we can get the probability of each symbol in the sequence being in each of the states in the model easily. This is useful to get a soft estimate of classification, which allows us to include confidence as well as prediction. Values close to 50-50 get masked when you make hard classifications, but this uncertainty can be passed to future applications if you use soft assignments. Each row in the matrix is one symbol in the sequence, and the columns correspond to the two states identified above (CG island or background).
Step4: There is a corresponding hmm.predict_log_proba method present if you want to get the log values. These are the emission probability values calculated by the forward backward algorithm, and can also be retrieved by calling hmm.forward_backward( seq ), which returns both the emission and the transition probability tables.
Lets take a look at these tables!
Step5: This is the transition table, which has the soft count of the number of transitions across an edge in the model given a single sequence. It is a square matrix of size equal to the number of states (including start and end state), with number of transitions from (row_id) to (column_id). This is exemplified by the 1.0 in the first row, indicating that there is one transition from background state to the end state, as that's the only way to reach the end state. However, the third (or fourth, depending on ordering) row is the transitions from the start state, and it only slightly favors the background state. These counts are not normalized to the length of the input sequence, but can easily be done so by dividing by row sums, column sums, or entire table sums, depending on your application.
A possible reason not to normalize is to run several sequences through and add up their tables, because normalizing in the end and extracting some domain knowledge. It is extremely useful in practice. For example, we can see that there is an expectation of 2.8956 transitions from CG island to background, and 2.4 from background to CG island. This could be used to infer that there are ~2-3 edges, which makes sense if you consider that the start and end of the sequence seem like they might be part of the CG island states except for the strict transition probabilities used (look at the first few rows of the emission table above.)
We've been using the forward backward algorithm and maximum a posteriori for decoding thus far, however maximum a posteriori decoding has the side effect that it is possible that it predicts impossible sequences in some edge cases. An alternative is Viterbi decoding, which at each step takes the most likely path, instead of sum of all paths, to produce hard assignments.
Step6: We see here a case in which it does not do too well. The Viterbi path can be more conservative in its transitions due to the hard assignments it makes. In essence, if multiple possibile paths are possible at a given point, it takes the most likely path, even if the sum of all other paths is greater than the sum of that path. In problems with a lower signal to noise ratio, this can mask the signal. As a side note, we can use the following to get the maximum a posteriori and Viterbi paths
Step7: The sequence predicted is a tuple of (state id, state object) for every state in the predicted path. The predict method simply takes the state ids from this path and returns those as an array.
Sequence Alignment
Lets move on to a more complicated structure, that of a profile HMM. A profile HMM is used to align a sequence to a reference 'profile', where the reference profile can either be a single sequence, or an alignment of many sequences (such as a reference genome). In essence, this profile has a 'match' state for every position in the reference profile, and 'insert' state, and a 'delete' state. The insert state allows the external sequence to have an insertion into the sequence without throwing off the entire alignment, such as the following
Step8: Now lets try to align some sequences to it and see what happens!
Step9: The first and last sequence are entirely matches, meaning that it thinks the most likely alignment between the profile ACT and ACT is A-A, C-C, and T-T, which makes sense, and the most likely alignment between ACT and ACC is A-A, C-C, and T-C, which includes a mismatch. Essentially, it's more likely that there's a T-C mismatch at the end then that there was a deletion of a T at the end of the sequence, and a separate insertion of a C.
The two middle sequences don't match very well, as expected! G's are not very likely in this profile at all. It predicts that the two G's are inserts, and that the C matches the C in the profile, before hitting the delete state because it can't emit a T. The third sequence thinks that the G is an insert, as expected, and then aligns the A and T in the sequence to the A and T in the master sequence, missing the middle C in the profile.
By using deletes, we can handle other sequences which are shorter than three characters. Lets look at some more sequences of different lengths.
Step11: Again, more of the same expected. You'll notice most of the use of insertion states are at I0, because most of the insertions are at the beginning of the sequence. It's more probable to simply stay in I0 at the beginning instead of go from I0 to D1 to I1, or going to another insert state along there. You'll see other insert states used when insertions occur in other places in the sequence, like 'ATTT' and 'ACGTG'.
Now that we have the path, we need to convert it into an alignment, which is significantly more informative to look at.
Step12: In addition to getting this alignment, we can do some interesting things with this model! Lets score every sequence of length 5 of less and see what the distribution looks like.
Step13: Training Hidden Markov Models
There are two main algorithms for training hidden Markov models-- Baum Welch (structured version of Expectation Maximization), and Viterbi training. Since we don't start off with labels on the data, these are both unsupervised training algorithms. In order to assign labels, Baum Welch uses EM to assign soft labels (weights in this case) to each point belonging to each state, and then using weighted MLE estimates to update the distributions. Viterbi assigns hard labels to each observation using the Viterbi algorithm, and then updates the distributions based on these hard labels.
pomegranate is extremely well featured when it comes to regularization methods for training, supporting tied emissions and edges, edge and emission inertia, freezing nodes or edges, edge pseudocounts, and multithreaded training. Lets look at some examples of the following
Step14: You have now indicated that these two states are tied, and when training, the weights of all points going to s2 will be added to the weights of all points going to s1 when updating d. As a side note, this is implemented in a computationally efficient manner such that d will only be updated once, not twice (but giving the same result). s3 and s4 are not tied together, because while they have the same distribution, it is not the same python object.
Tied Edges
Edges can be tied together for the same reason. If you have a modular structure to your HMM, perhaps you believe this repeating structure doesn't (or shouldn't) have a position specific edge structure. You can do this simply by adding a group when you add transitions.
Step15: The above model doesn't necessarily make sense, but it shows how simple it is to tie edges as well. You can go ahead and train normally from this point, without needing to change any code.
Inertia
The next options are inertia on edges or on distributions. This simply means that you update your parameters as (previous_parameter * inertia) + (new_parameter * (1-inertia) ). It is a way to prevent your updates from overfitting immediately. You can specify this in the train function using either edge_inertia or distribution_inertia. These default to 0, with 1 being the maximum, meaning that you don't update based on new evidence, the same as freezing a distribution or the edges.
Step16: Pseudocounts
Another way of regularizing your model is to add pseudocounts to your edges (which have non-zero probabilities). When updating your edges in the future, you add this pseudocount to the count of transitions across that edge in the future. This gives a more Bayesian estimate of the edge probability, and is useful if you have a large model and don't expect to cross most of the edges with your training data. An example might be a complicated profile HMM, where you don't expect to see deletes or inserts at all in your training data, but don't want to change from the default values.
In pomegranate, pseudocounts default to the initial probabilities, so that if you don't see data, the edge values simply aren't updated. You can define both edge specific pseudocounts when you define the transition. When you train, you must define use_pseudocount=True.
Step17: The other way is to put a blanket pseudocount on all edges.
Step18: We can see that there isn't as much of an improvement. This is part of regularization, though. We sacrifice fitting the data exactly in order for our model to generalize better to future data. The majority of the training improvement is likely coming from the emissions better fitting the data, though.
Multithreaded Training
Since pomegranate is implemented in cython, the majority of functions are written with the GIL released. A benefit of doing this is that we can use multithreading in order to make some computationally intensive tasks take less time. However, a downside is that python doesn't play nicely with multithreading, and so there are some cases where training using multithreading can make your model training take significantly longer. I investigate this in an early multithreading pull request <a href="https
Step19: Serialization
General Mixture Models support serialization to JSONs using to_json() and from_json( json ). This is useful is you want to train a GMM on large amounts of data, taking a significant amount of time, and then use this model in the future without having to repeat this computationally intensive step (sounds familiar by now). Lets look at the original CG island model, since it's significantly smaller. | Python Code:
from pomegranate import *
import numpy as np
%pylab inline
seq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')
d1 = DiscreteDistribution({'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
d2 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
s1 = State( d1, name='background' )
s2 = State( d2, name='CG island' )
gmm = GeneralMixtureModel( [d1, d2] )
hmm = HiddenMarkovModel()
hmm.add_states(s1, s2)
hmm.add_transition( hmm.start, s1, 0.5 )
hmm.add_transition( hmm.start, s2, 0.5 )
hmm.add_transition( s1, s1, 0.5 )
hmm.add_transition( s1, s2, 0.5 )
hmm.add_transition( s2, s1, 0.5 )
hmm.add_transition( s2, s2, 0.5 )
hmm.bake()
gmm_predictions = gmm.predict( np.array(seq) )
hmm_predictions = hmm.predict( seq )
print "sequence: {}".format( ''.join( seq ) )
print "gmm pred: {}".format( ''.join( map( str, gmm_predictions ) ) )
print "hmm pred: {}".format( ''.join( map( str, hmm_predictions ) ) )
Explanation: Hidden Markov Models
author: Jacob Schreiber <br>
contact: [email protected]
Hidden Markov models (HMMs) are the flagship of the pomegranate package, in that most time is spent improving their implementation, and these improvements sometimes trickle down into the other algorithms. Lets delve into the features which pomegranate offers.
Hidden Markov models are a form of structured prediction method which extend general mixture models to sequences of data, where position in the sequence is relevant. If each point in this sequence is completely independent of the other points, then HMMs are not the right tools and GMMs (or more complicated Bayesian networks) may be a better tool.
The most common examples of HMMs come from bioinformatics and natural language processing. Since I am a bioinformatician, I will predominately use examples from bioinformatics.
GMMs vs HMMs in the CG rich region example
Lets take the simplified example of CG island detection on a sequence of DNA. DNA is made up of the four canonical nucleotides, abbreviated 'A', 'C', 'G', and 'T'. Specific organizations of these nucleotides encode enough information to build you, a human being. One simple region in the genome is called the 'CG' island, where the nucleotides 'C' and 'G' are enriched. Lets compare the predictions of a GMM with the predictions of a HMM, to both understand conceptually the differences between the two, and to see how easy it is to use pomegranate.
End of explanation
hmm = HiddenMarkovModel()
hmm.add_states(s1, s2)
hmm.add_transition( hmm.start, s1, 0.5 )
hmm.add_transition( hmm.start, s2, 0.5 )
hmm.add_transition( s1, s1, 0.9 )
hmm.add_transition( s1, s2, 0.1 )
hmm.add_transition( s2, s1, 0.1 )
hmm.add_transition( s2, s2, 0.9 )
hmm.bake()
hmm_predictions = hmm.predict( seq )
print "sequence: {}".format( ''.join( seq ) )
print "gmm pred: {}".format( ''.join( map( str, gmm_predictions ) ) )
print "hmm pred: {}".format( ''.join( map( str, hmm_predictions ) ) )
print
print "hmm state 0: {}".format( hmm.states[0].name )
print "hmm state 1: {}".format( hmm.states[1].name )
Explanation: Note: The HMM and GMM predictions may be the inverse of each other, because HMM states undergo a topological sort in order to properly handle silent states (more later), which can change the order of the states from the order in which they were inserted into the model.
Your first reaction may be to say "But Jacob, you just said that HMMs and GMMs are different. Why should I make a HMM when making a GMM is so easy?".
My point in showing you this is that a dense HMM with equal probabilities between each state is ~equivalent~ to a GMM. However, this framework gives us great flexibility to add prior knowledge, whereas a GMM doesn't. If we look at the predictions, we see that it's bifurcating between "background" and "CG island" very quickly--in essence, calling every C or G a 'CG island'. This is not likely to be true. We know that CG islands have some As and Ts in them, and background sequence has Cs and Gs. We can change the transition probabilities to account for this, and prevent switching from occurring too rapidly.
End of explanation
hmm = HiddenMarkovModel()
hmm.add_states(s1, s2)
hmm.add_transition( hmm.start, s1, 0.5 )
hmm.add_transition( hmm.start, s2, 0.5 )
hmm.add_transition( s1, s1, 0.89 )
hmm.add_transition( s1, s2, 0.10 )
hmm.add_transition( s1, hmm.end, 0.01 )
hmm.add_transition( s2, s1, 0.1 )
hmm.add_transition( s2, s2, 0.9 )
hmm.bake()
hmm_predictions = hmm.predict( seq )
print "sequence: {}".format( ''.join( seq ) )
print "gmm pred: {}".format( ''.join( map( str, gmm_predictions ) ) )
print "hmm pred: {}".format( ''.join( map( str, hmm_predictions ) ) )
print
print "hmm state 0: {}".format( hmm.states[0].name )
print "hmm state 1: {}".format( hmm.states[1].name )
Explanation: This seems far more reasonable. There is a single CG island surrounded by background sequence, and something at the end. If we knew that CG islands cannot occur at the end of sequences, we need only modify the underlying structure of the HMM in order to say that the sequence must end from the background state.
End of explanation
print hmm.predict_proba( seq )
Explanation: Looks like we managed to get rid of that pesky end (again, the numbers may have flipped, look at the indices). Modifying transition probabilities and using non-dense graphical structures are two major ways in which HMMs account for data in a sequence not being independent and identically distributed (i.i.d.). In fact, in most applications, the graphical structure of a HMM is very sparse.
If we want a more probabilistic view of what's going on, we can get the probability of each symbol in the sequence being in each of the states in the model easily. This is useful to get a soft estimate of classification, which allows us to include confidence as well as prediction. Values close to 50-50 get masked when you make hard classifications, but this uncertainty can be passed to future applications if you use soft assignments. Each row in the matrix is one symbol in the sequence, and the columns correspond to the two states identified above (CG island or background).
End of explanation
trans, ems = hmm.forward_backward( seq )
print trans
Explanation: There is a corresponding hmm.predict_log_proba method present if you want to get the log values. These are the emission probability values calculated by the forward backward algorithm, and can also be retrieved by calling hmm.forward_backward( seq ), which returns both the emission and the transition probability tables.
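For example, the log-space version of the table printed above:
# Same per-symbol state probabilities, but in log space
log_probs = hmm.predict_log_proba( seq )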
Lets take a look at these tables!
End of explanation
hmm_predictions = hmm.predict( seq, algorithm='viterbi' )[1:-1]
print "sequence: {}".format( ''.join( seq ) )
print "gmm pred: {}".format( ''.join( map( str, gmm_predictions ) ) )
print "hmm pred: {}".format( ''.join( map( str, hmm_predictions ) ) )
print
print "hmm state 0: {}".format( hmm.states[0].name )
print "hmm state 1: {}".format( hmm.states[1].name )
Explanation: This is the transition table, which has the soft count of the number of transitions across an edge in the model given a single sequence. It is a square matrix of size equal to the number of states (including start and end state), with number of transitions from (row_id) to (column_id). This is exemplified by the 1.0 in the first row, indicating that there is one transition from background state to the end state, as that's the only way to reach the end state. However, the third (or fourth, depending on ordering) row is the transitions from the start state, and it only slightly favors the background state. These counts are not normalized to the length of the input sequence, but can easily be done so by dividing by row sums, column sums, or entire table sums, depending on your application.
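For instance, row-normalizing turns the soft counts into empirical transition probabilities (assuming, as the printout above suggests, that trans is a numpy array):
row_sums = trans.sum( axis=1, keepdims=True )
print trans / row_sums   # each row now sums to 1; rows with no outgoing transitions (the end state) show nan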
A possible reason not to normalize is to run several sequences through and add up their tables, because normalizing in the end and extracting some domain knowledge. It is extremely useful in practice. For example, we can see that there is an expectation of 2.8956 transitions from CG island to background, and 2.4 from background to CG island. This could be used to infer that there are ~2-3 edges, which makes sense if you consider that the start and end of the sequence seem like they might be part of the CG island states except for the strict transition probabilities used (look at the first few rows of the emission table above.)
We've been using the forward backward algorithm and maximum a posteriori for decoding thus far, however maximum a posteriori decoding has the side effect that it is possible that it predicts impossible sequences in some edge cases. An alternative is Viterbi decoding, which at each step takes the most likely path, instead of sum of all paths, to produce hard assignments.
End of explanation
v_logp, v_seq = hmm.viterbi( seq )
m_logp, m_seq = hmm.maximum_a_posteriori( seq )
Explanation: We see here a case in which it does not do too well. The Viterbi path can be more conservative in its transitions due to the hard assignments it makes. In essence, if multiple paths are possible at a given point, it takes the most likely path, even if the sum of all other paths is greater than the sum of that path. In problems with a lower signal-to-noise ratio, this can mask the signal. As a side note, we can use the following to get the maximum a posteriori and Viterbi paths:
End of explanation
model = HiddenMarkovModel( "Global Alignment")
# Define the distribution for insertions
i_d = DiscreteDistribution( { 'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25 } )
# Create the insert states
i0 = State( i_d, name="I0" )
i1 = State( i_d, name="I1" )
i2 = State( i_d, name="I2" )
i3 = State( i_d, name="I3" )
# Create the match states
m1 = State( DiscreteDistribution({ "A": 0.95, 'C': 0.01, 'G': 0.01, 'T': 0.02 }) , name="M1" )
m2 = State( DiscreteDistribution({ "A": 0.003, 'C': 0.99, 'G': 0.003, 'T': 0.004 }) , name="M2" )
m3 = State( DiscreteDistribution({ "A": 0.01, 'C': 0.01, 'G': 0.01, 'T': 0.97 }) , name="M3" )
# Create the delete states
d1 = State( None, name="D1" )
d2 = State( None, name="D2" )
d3 = State( None, name="D3" )
# Add all the states to the model
model.add_states( [i0, i1, i2, i3, m1, m2, m3, d1, d2, d3 ] )
# Create transitions from match states
model.add_transition( model.start, m1, 0.9 )
model.add_transition( model.start, i0, 0.1 )
model.add_transition( m1, m2, 0.9 )
model.add_transition( m1, i1, 0.05 )
model.add_transition( m1, d2, 0.05 )
model.add_transition( m2, m3, 0.9 )
model.add_transition( m2, i2, 0.05 )
model.add_transition( m2, d3, 0.05 )
model.add_transition( m3, model.end, 0.9 )
model.add_transition( m3, i3, 0.1 )
# Create transitions from insert states
model.add_transition( i0, i0, 0.70 )
model.add_transition( i0, d1, 0.15 )
model.add_transition( i0, m1, 0.15 )
model.add_transition( i1, i1, 0.70 )
model.add_transition( i1, d2, 0.15 )
model.add_transition( i1, m2, 0.15 )
model.add_transition( i2, i2, 0.70 )
model.add_transition( i2, d3, 0.15 )
model.add_transition( i2, m3, 0.15 )
model.add_transition( i3, i3, 0.85 )
model.add_transition( i3, model.end, 0.15 )
# Create transitions from delete states
model.add_transition( d1, d2, 0.15 )
model.add_transition( d1, i1, 0.15 )
model.add_transition( d1, m2, 0.70 )
model.add_transition( d2, d3, 0.15 )
model.add_transition( d2, i2, 0.15 )
model.add_transition( d2, m3, 0.70 )
model.add_transition( d3, i3, 0.30 )
model.add_transition( d3, model.end, 0.70 )
# Call bake to finalize the structure of the model.
model.bake()
Explanation: The sequence predicted is a tuple of (state id, state object) for every state in the predicted path. The predict method simply takes the state ids from this path and returns those as an array.
Sequence Alignment
Lets move on to a more complicated structure, that of a profile HMM. A profile HMM is used to align a sequence to a reference 'profile', where the reference profile can either be a single sequence, or an alignment of many sequences (such as a reference genome). In essence, this profile has a 'match' state for every position in the reference profile, and 'insert' state, and a 'delete' state. The insert state allows the external sequence to have an insertion into the sequence without throwing off the entire alignment, such as the following:
ACCG : Sequence <br>
|| | <br>
AC-G : Reference
or a deletion, which is the opposite:
A-G : Sequence <br>
| | <br>
ACG : Reference
The bars in the middle refer to a perfect match, whereas the lack of a bar means either a deletion/insertion, or a mismatch. A mismatch is where two positions are aligned together, but do not match. This models the biological phenomena of mutation, where one nucleotide can convert to another over time. It is usually more likely in biological sequences that this type of mutation occurs than that the nucleotide was deleted from the sequence (shifting all nucleotides over by one) and then another was inserted at the exact location (moving all nucleotides over again). Since we are using a probabilistic model, we get to define these probabilities through the use of distributions! If we want to model mismatches, we can just set our 'match' state to have an appropriate distribution with non-zero probabilities over mismatches.
Lets now create a three nucleotide profile HMM, which models the sequence 'ACT'. We will fuzz this a little bit in the match states, pretending to have some prior information about what mutations occur at each position. If you don't have any information, setting a uniform, small, value over the other values is usually okay.
End of explanation
for sequence in map( list, ('ACT', 'GGC', 'GAT', 'ACC') ):
logp, path = model.viterbi( sequence )
print "Sequence: '{}' -- Log Probability: {} -- Path: {}".format(
''.join( sequence ), logp, " ".join( state.name for idx, state in path[1:-1] ) )
Explanation: Now lets try to align some sequences to it and see what happens!
End of explanation
for sequence in map( list, ('A', 'GA', 'AC', 'AT', 'ATCC', 'ACGTG', 'ATTT', 'TACCCTC', 'TGTCAACACT') ):
logp, path = model.viterbi( sequence )
print "Sequence: '{}' -- Log Probability: {} -- Path: {}".format(
''.join( sequence ), logp, " ".join( state.name for idx, state in path[1:-1] ) )
Explanation: The first and last sequence are entirely matches, meaning that it thinks the most likely alignment between the profile ACT and ACT is A-A, C-C, and T-T, which makes sense, and the most likely alignment between ACT and ACC is A-A, C-C, and T-C, which includes a mismatch. Essentially, it's more likely that there's a T-C mismatch at the end then that there was a deletion of a T at the end of the sequence, and a separate insertion of a C.
The two middle sequences don't match very well, as expected! G's are not very likely in this profile at all. It predicts that the two G's are inserts, and that the C matches the C in the profile, before hitting the delete state because it can't emit a T. The third sequence thinks that the G is an insert, as expected, and then aligns the A and T in the sequence to the A and T in the master sequence, missing the middle C in the profile.
By using deletes, we can handle other sequences which are shorter than three characters. Lets look at some more sequences of different lengths.
End of explanation
def path_to_alignment( x, y, path ):
    """
    This function will take in two sequences, and the ML path which is their alignment,
    and insert dashes appropriately to make them appear aligned. This consists only of
    adding a dash to the model sequence for every insert in the path appropriately, and
    a dash in the observed sequence for every delete in the path appropriately.
    """
    for i, (index, state) in enumerate( path[1:-1] ):
        name = state.name
        if name.startswith( 'D' ):
            y = y[:i] + '-' + y[i:]
        elif name.startswith( 'I' ):
            x = x[:i] + '-' + x[i:]
    return x, y
for sequence in map( list, ('A', 'GA', 'AC', 'AT', 'ATCC', 'ACGTG', 'ATTT', 'TACCCTC', 'TGTCAACACT') ):
logp, path = model.viterbi( sequence )
x, y = path_to_alignment( 'ACT', ''.join(sequence), path )
print "Sequence: {}, Log Probability: {}".format( ''.join(sequence), logp )
print "{}\n{}".format( x, y )
print
Explanation: Again, more of the same expected. You'll notice most of the use of insertion states are at I0, because most of the insertions are at the beginning of the sequence. It's more probable to simply stay in I0 at the beginning instead of go from I0 to D1 to I1, or going to another insert state along there. You'll see other insert states used when insertions occur in other places in the sequence, like 'ATTT' and 'ACGTG'.
Now that we have the path, we need to convert it into an alignment, which is significantly more informative to look at.
End of explanation
import itertools as it
import seaborn as sns
sequences = reduce( lambda x, y: x+y, [[ seq for seq in it.product( 'ACGT', repeat=i ) ] for i in xrange( 1,6 )] )
scores = map( model.log_probability, sequences )
plt.figure( figsize=(10,5) )
sns.kdeplot( numpy.array( scores ), shade=True )
plt.ylabel('Density')
plt.xlabel('Log Probability')
plt.show()
Explanation: In addition to getting this alignment, we can do some interesting things with this model! Lets score every sequence of length 5 or less and see what the distribution looks like.
End of explanation
d = NormalDistribution( 5, 2 )
s1 = State( d, name="Tied1" )
s2 = State( d, name="Tied2" )
s3 = State( NormalDistribution( 5, 2 ), name="NotTied1" )
s4 = State( NormalDistribution( 5, 2 ), name="NotTied2" )
Explanation: Training Hidden Markov Models
There are two main algorithms for training hidden Markov models-- Baum Welch (structured version of Expectation Maximization), and Viterbi training. Since we don't start off with labels on the data, these are both unsupervised training algorithms. In order to assign labels, Baum Welch uses EM to assign soft labels (weights in this case) to each point belonging to each state, and then using weighted MLE estimates to update the distributions. Viterbi assigns hard labels to each observation using the Viterbi algorithm, and then updates the distributions based on these hard labels.
pomegranate is extremely well featured when it comes to regularization methods for training, supporting tied emissions and edges, edge and emission inertia, freezing nodes or edges, edge pseudocounts, and multithreaded training. Lets look at some examples of the following:
Tied Emissions
Sometimes we want to say that multiple states model the same phenomenon, but are simply at different points in the graph because we are utilizing complicated edge structure. An example is the global alignment HMM we saw. All insert states represent the same phenomenon, which is nature randomly inserting a nucleotide, and this probability should be the same regardless of position. However, we can't simply have a single insert state, or we'd be allowed to transition from any match state to any other match state.
You can tie emissions together simply by passing the same distribution object to multiple states. That's it.
End of explanation
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5, group='a' )
model.add_transition( model.start, s2, 0.5, group='b' )
model.add_transition( s1, s2, 0.5, group='a' )
model.add_transition( s2, s1, 0.5, group='b' )
model.bake()
Explanation: You have now indicated that these two states are tied, and when training, the weights of all points going to s2 will be added to the weights of all points going to s1 when updating d. As a side note, this is implemented in a computationally efficient manner such that d will only be updated once, not twice (but giving the same result). s3 and s4 are not tied together, because while they have the same distribution, it is not the same python object.
Tied Edges
Edges can be tied together for the same reason. If you have a modular structure to your HMM, perhaps you believe this repeating structure doesn't (or shouldn't) have a position specific edge structure. You can do this simply by adding a group when you add transitions.
End of explanation
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], distribution_inertia=0.3, edge_inertia=0.25 )
Explanation: The above model doesn't necessarily make sense, but it shows how simple it is to tie edges as well. You can go ahead and train normally from this point, without needing to change any code.
Inertia
The next options are inertia on edges or on distributions. This simply means that you update your parameters as (previous_parameter * inertia) + (new_parameter * (1-inertia) ). It is a way to prevent your updates from overfitting immediately. You can specify this in the train function using either edge_inertia or distribution_inertia. These default to 0, with 1 being the maximum, meaning that you don't update based on new evidence, the same as freezing a distribution or the edges.
End of explanation
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5, pseudocount=4.2 )
model.add_transition( model.start, s2, 0.5, pseudocount=1.3 )
model.add_transition( s1, s2, 0.5, pseudocount=5.2 )
model.add_transition( s2, s1, 0.5, pseudocount=0.9 )
model.bake()
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], max_iterations=5, use_pseudocount=True )
Explanation: Pseudocounts
Another way of regularizing your model is to add pseudocounts to your edges (which have non-zero probabilities). When updating your edges in the future, you add this pseudocount to the count of transitions across that edge in the future. This gives a more Bayesian estimate of the edge probability, and is useful if you have a large model and don't expect to cross most of the edges with your training data. An example might be a complicated profile HMM, where you don't expect to see deletes or inserts at all in your training data, but don't want to change from the default values.
In pomegranate, pseudocounts default to the initial probabilities, so that if you don't see data, the edge values simply aren't updated. You can define both edge specific pseudocounts when you define the transition. When you train, you must define use_pseudocount=True.
End of explanation
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], max_iterations=5, transition_pseudocount=20, use_pseudocount=True )
Explanation: The other way is to put a blanket pseudocount on all edges.
End of explanation
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4, 7, 3, 6, 3, 5, 2, 4], [5, 7, 2, 3, 5, 1, 3, 5, 6, 2]], max_iterations=5 )
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4, 7, 3, 6, 3, 5, 2, 4], [5, 7, 2, 3, 5, 1, 3, 5, 6, 2]], max_iterations=5, n_jobs=4 )
Explanation: We can see that there isn't as much of an improvement. This is part of regularization, though. We sacrifice fitting the data exactly in order for our model to generalize better to future data. The majority of the training improvement is likely coming from the emissions better fitting the data, though.
Multithreaded Training
Since pomegranate is implemented in cython, the majority of functions are written with the GIL released. A benefit of doing this is that we can use multithreading in order to make some computationally intensive tasks take less time. However, a downside is that python doesn't play nicely with multithreading, and so there are some cases where training using multithreading can make your model training take significantly longer. I investigate this in an early multithreading pull request <a href="https://github.com/jmschrei/pomegranate/pull/30">here</a>. Things have improved since then, but the gist is that if you have a small model (less than 15 states), it may be detrimental, but the larger your model is, the more it scales towards getting a speed improvement exactly the number of threads you use. You can specify multithreading using the n_jobs keyword. All structures in pomegranate are thread safe, so you don't need to worry about race conditions.
End of explanation
seq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')
d1 = DiscreteDistribution({'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
d2 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
s1 = State( d1, name='background' )
s2 = State( d2, name='CG island' )
hmm = HiddenMarkovModel()
hmm.add_states(s1, s2)
hmm.add_transition( hmm.start, s1, 0.5 )
hmm.add_transition( hmm.start, s2, 0.5 )
hmm.add_transition( s1, s1, 0.5 )
hmm.add_transition( s1, s2, 0.5 )
hmm.add_transition( s2, s1, 0.5 )
hmm.add_transition( s2, s2, 0.5 )
hmm.bake()
print hmm.to_json()
seq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')
print hmm.log_probability( seq )
hmm_2 = HiddenMarkovModel.from_json( hmm.to_json() )
print hmm_2.log_probability( seq )
Explanation: Serialization
Hidden Markov Models support serialization to JSONs using to_json() and from_json( json ). This is useful if you want to train a model on large amounts of data, taking a significant amount of time, and then use this model in the future without having to repeat this computationally intensive step (sounds familiar by now). Lets look at the original CG island model, since it's significantly smaller.
End of explanation |
4,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notes on Columns
From data reference PDF
tree_dbh
Diameter of the tree, measured at approximately 54" / 137cm above the ground. Data was collected for both living and dead trees; for stumps, use stump_diam
Because standard measuring tapes are more accessible than forestry-specific measuring tapes designed to measure diameter, users originally measured tree circumference in the field. To better match other forestry datasets, this circumference value was subsequently divided by 3.14159 to transform it to diameter. Both the field measurement and processed value were rounded to the nearest whole inch.
health
Indicates the user's perception of tree health.
Step1: The most common tree trunk size is 4.
Assumptions
Increase in number of trees is a benefit. Every new tree within the radius of the bike station should increase score.
Any tree is valuable, even in poor condition. No negative scores.
There is a diminishing return in increasing tree size.
What is the relative benefit of increasing size of trees? Does a tree twice as large bring twice as much value to public space? Or do trees have a diminishing return on size?
Step2: Much better distrbution. We'll use this for the score.
Step3: Thoughts
Bad health should cause a penalty, good health should simply be 100%.
Good
Step4: The equation for the tree score is as follow
Step5: Merge Into CSV | Python Code:
# Make tree diameter an integer
df.tree_dbh = df.tree_dbh.astype("int64")
df.describe()
len(df[df["tree_dbh"] < 50])
df[df["tree_dbh"] > 100]
df[df["tree_dbh"] < 40].tree_dbh.value_counts(sort=False).plot(kind="bar")
Explanation: Notes on Columns
From data reference PDF
tree_dbh
Diameter of the tree, measured at approximately 54" / 137cm above the ground. Data was collected for both living and dead trees; for stumps, use stump_diam
Because standard measuring tapes are more accessible than forestry-specific measuring tapes designed to measure diameter, users originally measured tree circumference in the field. To better match other forestry datasets, this circumference value was subsequently divided by 3.14159 to transform it to diameter. Both the field measurement and processed value were rounded to the nearest whole inch.
health
Indicates the user's perception of tree health.
End of explanation
tree_size_counts = df.tree_dbh.value_counts(sort=False)
df["size_score"] = np.sqrt(1 + df.tree_dbh)
df.size_score.hist(bins=30)
Explanation: The most common tree trunk size is 4.
Assumptions
Increase in number of trees is a benefit. Every new tree within the radius of the bike station should increase score.
Any tree is valuable, even in poor condition. No negative scores.
There is a diminishing return in increasing tree size.
What is the relative benefit of increasing size of trees? Does a tree twice as large bring twice as much value to public space? Or do trees have a diminishing return on size?
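A quick numeric check of the diminishing-returns idea (the diameters below are illustrative): doubling the trunk diameter increases the square-root-based score by far less than a factor of two.
for d in [4, 8, 16, 32]:
    print(np.sqrt(1 + d))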
End of explanation
df.health.value_counts(sort=False).plot(kind="bar")
# One tree does not have health status. Remove it.
df[pd.isnull(df.health)]
df = df[~pd.isnull(df.health)]
Explanation: Much better distribution. We'll use this for the score.
End of explanation
def define_health(x):
    # Map the recorded health category to a score multiplier
    if x == "Good":
        return 1.0
    elif x == "Fair":
        return 0.8
    elif x == "Poor":
        return 0.6
df["health_multiplier"] = df.health.map(define_health)
df["score"] = df.health_multiplier * df.size_score
df.score.hist(bins=60)
df.describe()
Explanation: Thoughts
Bad health should cause a penalty, good health should simply be 100%.
Good: 100%
Fair: 80%
Poor: 60%
End of explanation
df.to_csv('../data/interim/tree-scores.csv')
Explanation: The equation for the tree score is as follows:
$$
S = h \sqrt{1 + \oslash_{tree}}
$$
$h$ being the health multiplier, from 0.6 to 1.0 depending on the health of the tree.
$\oslash_{tree}$ being the diameter of the tree.
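For example, a tree in Fair health ($h = 0.8$) with an 8-inch trunk would score
$$ S = 0.8 \sqrt{1 + 8} = 2.4 $$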
End of explanation
# Merge into stations
stations = pd.read_csv('../data/processed/stations.csv')
# Convert stations csv into buffer polygons
geometry = gpd.GeoSeries([Point(xy) for xy in zip(stations.Longitude, stations.Latitude)])
geometry = geometry.buffer(.0005)
geo_stations = gpd.GeoDataFrame(stations, geometry=geometry)
geo_stations.crs = {'init' :'epsg:4326'}
geo_stations.head()
geo_stations.to_file('../data/interim/geo_stations')
# Merge street quality data with citibike stations using Geopandas Spatial Merge
station_df = gpd.sjoin(geo_stations, df, how="left", op='intersects')
# Save for Map
df.loc[station_df.index_right.dropna().unique(), :].to_csv("../data/map/trees.csv")
station_df
# Create new dataframe with summed score
station_scores = pd.DataFrame()
station_scores["score"] = station_df.groupby(['Station_id']).score.sum()
station_scores["score_mean"] = station_df.groupby(['Station_id']).score.mean()
station_scores["tree_count"] = station_df.groupby(['Station_id']).score.count()
station_scores["station_id"] = station_df.groupby(['Station_id']).score.mean().index
station_scores.fillna(0.0, inplace=True)
station_scores.head(3)
station_scores.describe()
# Histogram of scores
station_scores.score.hist(bins=50)
# Histogram of tree counts
station_scores.tree_count.hist(bins=20)
# Histogram of mean tree score per station
station_scores.score_mean.hist(bins=40)
zero_stations = len(station_scores[station_scores.score == 0])
print("Number of stations without trees:", str(zero_stations))
# Save to CSV
station_scores.to_csv("../data/processed/tree-canopy.csv")
Explanation: Merge Into CSV
End of explanation |
4,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PMOD TIMER
In this notebook, PMOD Timer functionalities are illustrated. The Timer has two sub-modules
Step1: Instantiate Pmod_Timer class. The method stop() will stop both timer sub-modules.
In this example, we will use pin 0 of the PMODA interface. PMODB and other pins can also be used.
Step2: 2. Generate pulses for a certain period of time
In this example, we choose the Digilent Analog Discovery 2 as the scope.
The 1+ pin (of channel 1) has to be connected to pin 0 on PMODA interface.
Use the following settings for waveform.
<img src="data/generate_1us_forever_settings.jpg" width="200px"/>
Generate a 10 ns clock pulse every 1 microseconds for 4 seconds and then stop the generation.
Note that pulses are generated every $count\times10$ ns. Here count is defined as period.
You should see output like this
Step3: 3. Generate a certain number of pulses
Note first parameter is the period interval.
Denoting the desired period as $T$ (in ns), we need to set the first parameter period to
Step4: Now generate the pulses at every 1 $\mu$s interval.
Step5: Stop the generation.
Step6: 4. Determine if an event has occurred at the input
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the event is to be detected. It returns 0 if no event occurred, otherwise it returns 1.
Use a waveform generator in this example. Connect W1 channel of the Analog Discovery to pin 0 of PMODA.
Do not run the waveform generation in the next cell.
Step7: Now run the waveform generation and then run the next cell. Set the waveform generator settings as shown below
Step8: 5. Count number of events occurred during a desired period
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the number of event are counted. In this example we are interested in number of events occurring in 10 $\mu$s.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generator. Then run the next example.
<img src="data/count_events_10us.jpg" width="200px"/>
Step9: 6. Measure period between two rising edges
An event is either a rising edge or a high logic level. It expects at least two rising edges. The return result is in units of nanoseconds.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generation. Then run the next example.
<img src="data/measure_period_200KHz.jpg" width="200px"/> | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
Explanation: PMOD TIMER
In this notebook, PMOD Timer functionalities are illustrated. The Timer has two sub-modules: Timer0 and Timer1.
The Generate output and Capture Input of Timer 0 are assumed to be connected to PMODA pin 0.
1. The Generate function outputs one clock (10 ns) pulse after a desired period.
2. The Capture input is sensitive to a rising edge or high level logic.
To see the results of this notebook, you will need a Digilent Analog Discovery 2
<td> <img src="http://cdn6.bigcommerce.com/s-7gavg/products/468/images/2617/Analog_Discovery_2_obl_Academic_600__01249.1447804398.1280.1280.png" alt="Drawing" style="width: 250px;"/> </td>
and WaveForms 2015
<td> <img src="https://reference.digilentinc.com/_media/reference/software/waveforms/waveforms-3/waveforms3-0.png" alt="Drawing" style="width: 250px;"/> </td>
1. Instantiation
Import overlay to use the timers.
End of explanation
from time import sleep
from pynq.lib import Pmod_Timer
pt = Pmod_Timer(base.PMODA,0)
pt.stop()
Explanation: Instantiate Pmod_Timer class. The method stop() will stop both timer sub-modules.
In this example, we will use pin 0 of the PMODA interface. PMODB and other pins can also be used.
End of explanation
# Generate a 10 ns pulse every period*10 ns
period=100
pt.generate_pulse(period)
# Sleep for 4 seconds and stop the timer
sleep(4)
pt.stop()
Explanation: 2. Generate pulses for a certain period of time
In this example, we choose the Digilent Analog Discovery 2 as the scope.
The 1+ pin (of channel 1) has to be connected to pin 0 on PMODA interface.
Use the following settings for waveform.
<img src="data/generate_1us_forever_settings.jpg" width="200px"/>
Generate a 10 ns clock pulse every 1 microsecond for 4 seconds and then stop the generation.
Note that pulses are generated every $count\times10$ ns. Here count is defined as period.
You should see output like this:
<img src="data/generate_1us_forever.jpg" width="800px"/>
End of explanation
# Generate 3 pulses at every 1 us
count=3
period=100
pt.generate_pulse(period, count)
Explanation: 3. Generate a certain number of pulses
Note that the first parameter is the period interval.
Denoting the desired period as $T$ (in ns), we need to set the first parameter period to:
$period = \frac{T}{10} $
The second parameter is the number of pulses to be generated.
Run the following cell and you should see output in the scope like this:
<img src="data/generate_1us_n_times.jpg" width="800px"/>
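For example, to emit a pulse every $T = 1000$ ns (i.e. every 1 $\mu$s), the first parameter becomes $period = \frac{1000}{10} = 100$, which is the value used in the code cell for this step.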
End of explanation
# Generate pulses per 1 us forever
count=0
period=100
pt.generate_pulse(period, count)
Explanation: Now generate the pulses at every 1 $\mu$s interval.
End of explanation
pt.stop()
Explanation: Stop the generation.
End of explanation
# Detect any event within 10 us
period=1000
pt.event_detected(period)
Explanation: 4. Determine if an event has occurred at the input
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the event is to be detected. It returns 0 if no event occurred, otherwise it returns 1.
Use a waveform generator in this example. Connect W1 channel of the Analog Discovery to pin 0 of PMODA.
Do not run the waveform generation in the next cell.
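As a side note, the same call can be used to wait for an event by polling; a small sketch using only the calls shown in this notebook:
# Poll in 10 us windows until a rising edge or high level is seen
period = 1000
while not pt.event_detected(period):
    sleep(0.1)
print("Event detected")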
End of explanation
# Detect any event within 20 ms
period=200000
pt.event_detected(period)
Explanation: Now run the waveform generation and then run the next cell. Set the waveform generator settings as shown below:
<img src="data/measure_period_200KHz.jpg" width="200px"/>
End of explanation
# Count number of events within 10 us
period=1000
pt.event_count(period)
Explanation: 5. Count the number of events that occurred during a desired period
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the number of events is counted. In this example we are interested in the number of events occurring in 10 $\mu$s.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generator. Then run the next example.
<img src="data/count_events_10us.jpg" width="200px"/>
End of explanation
period = pt.get_period_ns()
print("The measured waveform frequency: {} Hz".format(1e9/period))
Explanation: 6. Measure period between two rising edges
An event is either a rising edge or a high logic level. It expects at least two rising edges. The return result is in units of nanoseconds.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generation. Then run the next example.
<img src="data/measure_period_200KHz.jpg" width="200px"/>
End of explanation |
4,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Instruction
This tutorial explain how to use calc_barriers wrapper
calc_barriers is high level wrapeer used for calculation of migration barriers.
The calculations are performed by executing the same command for several times.
First run
Calculation of equillibrium lattice constants
Second run
Construction of supercell based on optimized unit cell, and additional relaxation of atomic positions
Third run
Calculation of migration barrier using obtained supercell
params - special dictionary
'jmol' -{} - to save path as png
Import libraries
Step1: Set configuration parameters
Step2: Starting calculation
Choose starting calculation.
For example 2-atom cell of fcc Cu
Step3: Configuration dictionary
The configuration dictionary should be created
Step4: 1. Unit cell optimization
First argument should be normal
Second and third arguments are moving element
up - update unit cell optimization
upA - update supercell calculation
upC - update neb calculation
Step5: 2. Supercell construction
After the optimization is finished, run the same command once again, it will show the fit and construct the supercell.
Step6: 3. Read supercell and start NEB calculation using the same calculation
This step uses add_neb subroutine from neb.py
To choose different paths change
pd['start_pos'] and
pd['end_pos'] values
The command suggest you possible values of initial and final positions, see below.
If you want to study migration of substitution atom, then
use additional arguments
Step7: 3.1 Migration of substitution atom
Step8: 3.2 Migration of interstitial atom
Attention! This mode relies on C++ routine siman/findpores.cpp; It should be compiled with siman/make_findpores first
Step9: 3.3 Using starting cell with Li
Assuming you already have supercell with Li, the migration barrier for its migration can be calculated as follows
using add_neb | Python Code:
import sys
sys.path.extend(['/home/aksenov/Simulation_wrapper/siman'])
import header
from calc_manage import add, res
from database import write_database, read_database
from set_functions import read_vasp_sets
from calc_manage import smart_structure_read
from SSHTools import SSHTools
from project_funcs import calc_barriers
%matplotlib inline
Explanation: Instruction
This tutorial explains how to use the calc_barriers wrapper
calc_barriers is a high-level wrapper used for the calculation of migration barriers.
The calculations are performed by executing the same command for several times.
First run
Calculation of equilibrium lattice constants
Second run
Construction of supercell based on optimized unit cell, and additional relaxation of atomic positions
Third run
Calculation of migration barrier using obtained supercell
params - special dictionary
'jmol' -{} - to save path as png
Import libraries
End of explanation
header.ssh_object = SSHTools()
header.ssh_object.setup(user="aksenov",host="10.30.16.62",pkey="/home/aksenov/.ssh/id_rsa")
header.PATH2PROJECT = 'barriers' # path to project relative to your home folder on cluster
header.PATH2POTENTIALS = '/home/aksenov/scientific_projects/PAW_PBE_VASP' #path to VASP POTENTIALS
header.PATH2NEBMAKE = '~/Simulation_wrapper/vts/nebmake.pl' # add path to nebmake in your project_conf.py
read_database()
header.varset['static'].potdir = {29:'Cu_new', 3:'Li'} #subfolders with required potentials
read_vasp_sets([
('ion', 'static', {'ISIF':2, 'IBRION':1, 'NSW':20, 'EDIFFG':-0.025}, ), # relax only ions
('cell', 'static', {'ISIF':4, 'IBRION':1, 'NSW':20, 'EDIFFG':-0.025},)]) #relax everything except volume
Explanation: Set configuration parameters
End of explanation
add('Cu2', 'static', 1, input_geo_file = 'Cu/Cu2fcc.geo', it_folder = 'Cu')
Explanation: Starting calculation
Choose starting calculation.
For example 2-atom cell of fcc Cu
End of explanation
pd = {
'id':('Cu2', 'static', 1), # starting calculation
'el':'Li', # Element to move
'itfolder':'Cu/', # Working directory
'main_set':'ion', # This set is used for supercell calculation
'scaling_set':'ion', # This set is used for determining lattice parameters
'neb_set':'ion', # This set is used for calculation of migration barrier
'scale_region':(-4, 4), # range of unit cell uniform deformation in %
'ortho':[7,7,7], # Target sizes of supercell in A
'r_impurity':1.2, # radius of searchable void
'images':5, # number of images in NEB calculation
'start_pos':0, # starting position for NEB; offered by the wrapper
'end_pos':1, # final position for NEB; offered by the wrapper
'readfiles':1, # read OUTCAR files
}
Explanation: Configuration dictionary
The configuration dictionary should be created
End of explanation
calc_barriers('normal', 'Li', 'Li', show_fit = 0, up = 1, upA = 0, upC = 0, param_dic = pd, add_loop_dic = {'run':1})
Explanation: 1. Unit cell optimization
First argument should be normal
Second and third arguments are moving element
up - update unit cell optimization
upA - update supercell calculation
upC - update neb calculation
End of explanation
calc_barriers('normal', 'Li', 'Li', show_fit = 1, up = 0, upA = 0, upC = 0, param_dic = pd, add_loop_dic = {'run':1})
Explanation: 2. Supercell construction
After the optimization is finished, run the same command once again, it will show the fit and construct the supercell.
End of explanation
pd['el'] = 'Cu' # Cu atom is chosen for moving
pd['i_atom_to_move'] = 1 # number of atom to move
pd['rep_moving_atom'] = 'Li' # replace moving atom with Li
Explanation: 3. Read supercell and start NEB calculation using the same calculation
This step uses add_neb subroutine from neb.py
To choose different paths change
pd['start_pos'] and
pd['end_pos'] values
The command suggests possible values for the initial and final positions; see below.
If you want to study migration of substitution atom, then
use additional arguments:
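As an aside, to explore a different migration path one could, for instance, change the suggested start and end positions before rerunning only the NEB step (the position values here are purely illustrative):
pd['start_pos'] = 0
pd['end_pos'] = 2
calc_barriers('normal', 'Li', 'Li', show_fit = 0, up = 0, upA = 0, upC = 1,
              param_dic = pd, add_loop_dic = {'run':0})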
End of explanation
calc_barriers('normal', 'Cu', 'Cu', show_fit = 0, up = 0, upA = 0, upC = 0, param_dic = pd, add_loop_dic = {'run':0})
# after running this command, go to ./xyz/Cu2.su.s7v100.n5Cu2Cu2v1rLi_all and check the created path
Explanation: 3.1 Migration of substitution atom
End of explanation
pd['i_atom_to_move'] = None
pd['rep_moving_atom'] = None
pd['el'] = 'Li'
calc_barriers('normal', 'Li', 'Li', show_fit = 0, up = 0, upA = 0, upC = 1, param_dic = pd, add_loop_dic = {'run':0})
#after the command is finished please check Cu2.su.s7v100.n5i0e1Li_all folder with POSCARs
Explanation: 3.2 Migration of interstitial atom
Attention! This mode relies on C++ routine siman/findpores.cpp; It should be compiled with siman/make_findpores first
End of explanation
# Here we use the additional parameter *end_pos_types_z*; it allows using Cu sites as final positions for Li migration
from neb import add_neb
st = smart_structure_read('Cu/POSCAR_Cu310A2Liis2_1lo_2_end')
add_neb(st = st, it_new = 'Cu310A2_212Li', ise_new = 'ion', it_folder = 'Cu/neb',
images = 5, i_atom_to_move = 215, i_void_final = 6, end_pos_types_z = [29])
#Check created path in xyz/Cu310A2_212Li.n5Li216Li216v6_all
write_database()
Explanation: 3.3 Using starting cell with Li
Assuming you already have a supercell with Li, its migration barrier can be calculated as follows
using add_neb:
End of explanation |
4,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习工程师纳米学位
机器学习基础
项目 0
Step1: 从泰坦尼克号的数据样本中,我们可以看到船上每位旅客的特征
Survived:是否存活(0代表否,1代表是)
Pclass:社会阶级(1代表上层阶级,2代表中层阶级,3代表底层阶级)
Name:船上乘客的名字
Sex:船上乘客的性别
Age
Step3: 这个例子展示了如何将泰坦尼克号的 Survived 数据从 DataFrame 移除。注意到 data(乘客数据)和 outcomes (是否存活)现在已经匹配好。这意味着对于任何乘客的 data.loc[i] 都有对应的存活的结果 outcome[i]。
计算准确率
为了验证我们预测的结果,我们需要一个标准来给我们的预测打分。因为我们最感兴趣的是我们预测的准确率,既正确预测乘客存活的比例。运行下面的代码来创建我们的 accuracy_score 函数以对前五名乘客的预测来做测试。
思考题:在前五个乘客中,如果我们预测他们全部都存活,你觉得我们预测的准确率是多少?
Step5: 提示:如果你保存 iPython Notebook,代码运行的输出也将被保存。但是,一旦你重新打开项目,你的工作区将会被重置。请确保每次都从上次离开的地方运行代码来重新生成变量和函数。
最简单的预测
如果我们要预测泰坦尼克号上的乘客是否存活,但是我们又对他们一无所知,那么最好的预测就是船上的人无一幸免。这是因为,我们可以假定当船沉没的时候大多数乘客都遇难了。下面的 predictions_0 函数就预测船上的乘客全部遇难。
Step6: 问题1:对比真实的泰坦尼克号的数据,如果我们做一个所有乘客都没有存活的预测,这个预测的准确率能达到多少?
回答: 请用预测结果来替换掉这里的文字
提示:运行下面的代码来查看预测的准确率。
Step7: 考虑一个特征进行预测
我们可以使用 survival_stats 函数来看看 Sex 这一特征对乘客的存活率有多大影响。这个函数定义在名为 titanic_visualizations.py 的 Python 脚本文件中,我们的项目提供了这个文件。传递给函数的前两个参数分别是泰坦尼克号的乘客数据和乘客的 生还结果。第三个参数表明我们会依据哪个特征来绘制图形。
运行下面的代码绘制出依据乘客性别计算存活率的柱形图。
Step9: 观察泰坦尼克号上乘客存活的数据统计,我们可以发现大部分男性乘客在船沉没的时候都遇难了。相反的,大部分女性乘客都在事故中生还。让我们以此改进先前的预测:如果乘客是男性,那么我们就预测他们遇难;如果乘客是女性,那么我们预测他们在事故中活了下来。
将下面的代码补充完整,让函数可以进行正确预测。
提示:您可以用访问 dictionary(字典)的方法来访问船上乘客的每个特征对应的值。例如, passenger['Sex'] 返回乘客的性别。
Step10: 问题2:当我们预测船上女性乘客全部存活,而剩下的人全部遇难,那么我们预测的准确率会达到多少?
回答
Step12: 仔细观察泰坦尼克号存活的数据统计,在船沉没的时候,大部分小于10岁的男孩都活着,而大多数10岁以上的男性都随着船的沉没而遇难。让我们继续在先前预测的基础上构建:如果乘客是女性,那么我们就预测她们全部存活;如果乘客是男性并且小于10岁,我们也会预测他们全部存活;所有其它我们就预测他们都没有幸存。
将下面缺失的代码补充完整,让我们的函数可以实现预测。
提示
Step13: 问题3:当预测所有女性以及小于10岁的男性都存活的时候,预测的准确率会达到多少?
回答
Step15: 当查看和研究了图形化的泰坦尼克号上乘客的数据统计后,请补全下面这段代码中缺失的部分,使得函数可以返回你的预测。
在到达最终的预测模型前请确保记录你尝试过的各种特征和条件。
提示 | Python Code:
# Check your Python version
from sys import version_info
if not (version_info.major == 2 and version_info.minor == 7):
    raise Exception('Please use Python 2.7 to complete this project')
import numpy as np
import pandas as pd
# Data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Display the first few rows of passenger data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Machine Learning Foundations
Project 0: Predicting Titanic Passenger Survival
In 1912, the Titanic struck an iceberg and sank on her maiden voyage, killing most of the passengers and crew. In this introductory project, we will explore a portion of the Titanic passenger manifest to determine which features best predict whether a person survived. To complete this project, you will need to implement several condition-based predictions and answer the questions below. Your submission will be evaluated on the completeness of the code and your answers to the questions.
Tip: text like this will guide you through using the iPython Notebook to complete the project.
Click here to view the English version of this file.
Getting to know the data
When we start working with the Titanic passenger data, we first import the modules we need and load the data into a pandas DataFrame. Run the code cell below to load the data and use the .head() function to display the first few rows of passenger data.
Tip: you can run a code cell by clicking on it and using the keyboard shortcut Shift+Enter or Shift+Return, or by selecting the cell and pressing the play (run cell) button. MarkDown text like this can be edited by double-clicking and saved with the same shortcuts. Markdown lets you write easy-to-read plain text that can be converted to HTML.
End of explanation
# Remove the 'Survived' feature from the dataset and store it in a new variable.
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Display the dataset with the 'Survived' feature removed
display(data.head())
Explanation: From this sample of the Titanic data, we can see the features recorded for each passenger on board:
Survived: outcome of survival (0 = No; 1 = Yes)
Pclass: socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
Name: name of the passenger
Sex: sex of the passenger
Age: age of the passenger (may contain NaN)
SibSp: number of siblings and spouses the passenger had aboard
Parch: number of parents and children the passenger had aboard
Ticket: ticket number of the passenger
Fare: fare the passenger paid for the ticket
Cabin: cabin number of the passenger (may contain NaN)
Embarked: port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we are interested in whether each passenger or crew member survived the disaster, we can remove the Survived feature from this dataset and store it in a separate variable, outcomes. It is also the target we want to predict.
Run the code cell to remove Survived from the dataset and store it in the variable outcomes.
End of explanation
def accuracy_score(truth, pred):
    """ Return the accuracy of pred with respect to truth """
    # Make sure the number of predictions matches the number of outcomes
    if len(truth) == len(pred):
        # Calculate the prediction accuracy (as a percentage)
        return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
    else:
        return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int)) # all five predictions are 1, i.e. survived
print accuracy_score(outcomes[:5], predictions)
Explanation: This example shows how to remove the Titanic Survived data from the DataFrame. Note that data (the passenger data) and outcomes (whether they survived) are now paired up. This means that for any passenger, data.loc[i] has a corresponding survival outcome outcome[i].
Calculating accuracy
To evaluate our predictions, we need a metric to score them. Since what interests us most is the accuracy of our predictions, i.e. the proportion of passengers whose survival we predict correctly, run the code below to create our accuracy_score function and test it on the predictions for the first five passengers.
Food for thought: out of the first five passengers, if we predict that all of them survived, what do you think the accuracy of our prediction would be?
End of explanation
def predictions_0(data):
    """ Ignore all features and predict that no one survived """
    predictions = []
    for _, passenger in data.iterrows():
        # Predict the survival of 'passenger'
        predictions.append(0)
    # Return the prediction results
    return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: if you save the iPython Notebook, the output of the code cells is saved too. However, once you reopen the project, your workspace will be reset. Make sure you rerun the code from where you left off to regenerate the variables and functions.
The simplest prediction
If we want to predict whether the passengers on the Titanic survived, but we know nothing about them, the best prediction is that no one on board survived. This is because we can assume that most passengers perished when the ship sank. The predictions_0 function below predicts that every passenger on board died.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1: Compared to the actual Titanic data, what accuracy would we reach if we predicted that none of the passengers survived?
Answer: replace this text with your prediction result
Tip: run the code below to check the accuracy of the prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Making predictions with one feature
We can use the survival_stats function to see how much the Sex feature affects passenger survival. This function is defined in the Python script titanic_visualizations.py, which is provided with the project. The first two parameters passed to the function are the Titanic passenger data and the passengers' survival outcomes. The third parameter indicates which feature the bar chart is grouped by.
Run the code below to plot the survival rates of passengers grouped by sex.
End of explanation
def predictions_1(data):
    """ Consider one feature: predict survival if the passenger is female """
    predictions = []
    for _, passenger in data.iterrows():
        # TODO 1
        # Remove the 'pass' statement below
        # and write your own prediction conditions
        pass
    # Return the prediction results
    return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Looking at the survival statistics for the Titanic passengers, we can see that most male passengers died when the ship sank. Conversely, most female passengers survived the disaster. Let's use this to improve our earlier prediction: if a passenger is male, we predict that they died; if a passenger is female, we predict that they survived the disaster.
Fill in the code below so the function makes this prediction correctly.
Tip: you can access the value of each passenger feature the way you would access a dictionary. For example, passenger['Sex'] returns the sex of the passenger.
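For reference, one possible completion of the loop body (a sketch, not the only valid answer) following the rule stated above:
if passenger['Sex'] == 'female':
    predictions.append(1)
else:
    predictions.append(0)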
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Question 2: What accuracy would our prediction reach if we predicted that all female passengers survived and everyone else died?
Answer: replace this text with your prediction result
Tip: you will need to add a code cell below, implement the code, and run it to compute the accuracy.
Making predictions with two features
Using only the passenger's sex (Sex), our prediction accuracy improved noticeably. Now let's see whether an additional feature can improve the predictions even further. For example, consider all the male passengers on the Titanic: can we find a subset of these passengers with a higher probability of survival? Let's use the survival_stats function again to look at the age (Age) of each male passenger. This time, we will use the fourth parameter to restrict the bar chart to male passengers only.
Run the code below to plot the survival outcomes of male passengers grouped by age.
End of explanation
def predictions_2(data):
    """ Consider two features:
        - predict survival if the passenger is female
        - predict survival if the passenger is male and younger than 10
    """
    predictions = []
    for _, passenger in data.iterrows():
        # TODO 2
        # Remove the 'pass' statement below
        # and write your own prediction conditions
        pass
    # Return the prediction results
    return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Looking more closely at the Titanic survival statistics, most boys younger than 10 survived the sinking, while most males over 10 died as the ship went down. Let's keep building on our previous prediction: if a passenger is female, we predict that she survived; if a passenger is male and younger than 10, we also predict survival; for everyone else, we predict that they did not survive.
Fill in the missing code below so our function can make this prediction.
Tip: you can start from your predictions_1 code and modify it to implement the new prediction function.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
Explanation: Question 3: What accuracy would the prediction reach if we predicted that all females, as well as all males younger than 10, survived?
Answer: replace this text with your prediction result
Tip: you will need to add a code cell below, implement the code, and run it to compute the accuracy.
Your own prediction model
Combining the age (Age) feature with sex (Sex) improved the accuracy quite a bit compared to using sex (Sex) alone. Now it is your turn to make predictions: find a set of features and conditions to split the data so that the prediction accuracy exceeds 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times under different conditions. Pclass, Sex, Age, SibSp and Parch are suggested features to try.
Use the survival_stats function to examine the survival statistics of the Titanic passengers.
Tip: to use multiple filter conditions, put each condition in a list and pass it as the last parameter. For example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
    """ Consider multiple features and reach an accuracy of at least 80% """
    predictions = []
    for _, passenger in data.iterrows():
        # TODO 3
        # Remove the 'pass' statement below
        # and write your own prediction conditions
        pass
    # Return the prediction results
    return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After viewing and studying the visualized survival statistics of the Titanic passengers, fill in the missing parts of the code below so that the function returns your predictions.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Tip: you can start from your predictions_2 code and modify it to implement the new prediction function.
End of explanation |
4,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You can download water chemistry of an entire HUC. It downloads wells and springs and major ions by default, unless specified otherwise.
Step1: Standardize the headers and units in the results file. | Python Code:
chem = wa.WQP(16020301,'huc')
Explanation: You can download water chemistry of an entire HUC. It downloads wells and springs and major ions by default, unless specified otherwise.
End of explanation
Results = chem.massage_results()
Stations = chem.massage_stations()
Piv = chem.piv_chem()
Piv.reset_index(inplace=True)
Piv
pipr = wa.piper(Piv,var_col='TDS')
stream = wa.WQP(16020301,'huc',siteType='Stream',sampleMedia='Water',characteristicName='Specific conductance')
Explanation: Standardize the headers and units in the results file.
End of explanation |
4,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to run TARDIS with a custom ejecta model
This notebook will go through multiple detailed examples of how to properly run TARDIS with a custom ejecta profile specified by a custom density file and a custom abundance file.
Step1: Your custom density file
First, let's look at an example of a custom density file.
The first line specifies the time in days after the explosion
After a skipped line, each row corresponds to a shell with index specified by the first column.
The second column lists the velocities of the outer boundary of the cell in km/s.
The third column lists the density of the cell.
<font color=red>Important
Step2: You can check to make sure that the model loaded and used by TARDIS during the simulation is consistent with your expectations based on the custom files you provided
Step3: Specifying boundary velocities in the config file
In addition to specifying custom density and abundance files, the user can set the v_inner_boundary and v_outer_boundary velocities in the YAML config file. This can cause some confusion, so we carefully go through some examples.
<font color=red>Important
Step4: Example 2) v_outer_boundary larger than last velocity in density file
In this example, the last velocity in the density file is 12000 km/s. The user can specify in the config file the velocity of the outer boundary to a larger velocity, say v_outer_boundary = 13000 km/s. This will cause TARDIS to raise an error.
Step5: Example 3) v_boundaries in config file are within density file velocity range
Here the user sets v_inner_boundary = 9700 and v_outer_boundary = 11500 in the config file. Both values fall within the velocity range specified by the custom density file. | Python Code:
import tardis
import matplotlib.pyplot as plt
import numpy as np
Explanation: How to run TARDIS with a custom ejecta model
This notebook will go through multiple detailed examples of how to properly run TARDIS with a custom ejecta profile specified by a custom density file and a custom abundance file.
End of explanation
model = tardis.run_tardis('./test_config.yml')
Explanation: Your custom density file
First, let's look at an example of a custom density file.
The first line specifies the time in days after the explosion
After a skipped line, each row corresponds to a shell with index specified by the first column.
The second column lists the velocities of the outer boundary of the cell in km/s.
The third column lists the density of the cell.
<font color=red>Important: </font>
The default behavior of TARDIS is to use the first shell as the inner boundary. This means that v_inner_boundary = 9500, and the corresponding density 9e-16 is ignored because it is within the inner boundary. It can be replaced by an arbitrary number. The outer boundary of the last shell will be used as v_outer_boundary, so the default behavior will set v_outer_boundary = 12000.
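As a rough illustration of the format described above (only the 9500 and 12000 km/s boundaries and the 9e-16 density are taken from this text; the day value, the middle shell, and the remaining densities are made up), such a density file could look like:
1 day

0 9500 9e-16
1 11000 4e-16
2 12000 2e-16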
Your custom abundance file
Let's look at an example of a custom abundance file.
The first line indicates which elements (or isotopes) correspond to which columns.
After a skipped line, each row specifies the chemical abundance of one shell. Therefore the numbers in a given row should sum to 1.0
<font color=red>Important: </font>
Note that there are only 2 shells specified in this abundance file (despite the custom density file having 3 lines). This is because the custom density file specifies the boundaries of the shells, while the abundance file specifies the abundances within each shell.
Running TARDIS with the custom files
Now let's run TARDIS using the example custom files.
End of explanation
print('v_inner_boundary = ',model.model.v_boundary_inner)
print('v_outer_boundary = ',model.model.v_boundary_outer)
print('\n')
print('velocities of shell boundaries: ')
print(model.model.velocity)
print('\n')
print('densities loaded by TARDIS: (NOTE that the density in the first line of the file was ignored! Densities are also rescaled.)')
print(model.model.density)
Explanation: You can check to make sure that the model loaded and used by TARDIS during the simulation is consistent with your expectations based on the custom files you provided:
End of explanation
model = tardis.run_tardis('./test_config_ex1.yml')
Explanation: Specifying boundary velocities in the config file
In addition to specifying custom density and abundance files, the user can set the v_inner_boundary and v_outer_boundary velocities in the YAML config file. This can cause some confusion, so we carefully go through some examples.
<font color=red>Important: </font>
Boundary velocities set in the YAML config file must be within the velocity range specified in the custom density file (if one is provided).
Example 1) v_inner_boundary lower than first velocity in density file
In this example, the first velocity in the density file is 9500 km/s. The user can specify in the config file the velocity of the inner boundary to a lower velocity, say v_inner_boundary = 9000 km/s. This will cause TARDIS to raise an error.
End of explanation
model = tardis.run_tardis('./test_config_ex2.yml')
Explanation: Example 2) v_outer_boundary larger than last velocity in density file
In this example, the last velocity in the density file is 12000 km/s. The user can specify in the config file the velocity of the outer boundary to a larger velocity, say v_outer_boundary = 13000 km/s. This will cause TARDIS to raise an error.
End of explanation
model = tardis.run_tardis('./test_config_ex3.yml')
print('v_inner_boundary = ',model.model.v_boundary_inner)
print('v_outer_boundary = ',model.model.v_boundary_outer)
print('\n')
print('velocities of shell boundaries: ')
print(model.model.velocity)
print('\n')
print('densities loaded by TARDIS: (NOTE that the density in the first line of the file was ignored! Densities are also rescaled.)')
print(model.model.density)
Explanation: Example 3) v_boundaries in config file are within density file velocity range
Here the user sets v_inner_boundary = 9700 and v_outer_boundary = 11500 in the config file. Both values fall within the velocity range specified by the custom density file.
End of explanation |
4,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Analysis with Pandas Dataframe
Pandas is a popular library for manipulating vectors, tables, and time series. We will frequently use Pandas data structures instead of the built-in python data structures, as they provide much richer functionality. Also, Pandas is fast, which makes working with large datasets easier. Check out the official pandas website at [http
Step1: Data I/O
Data cleaning...
Step2: First and last five rows.
Step3: Add or delete columns and write data to .csv file with one command line.
Step4: Indexing and Slicing
.iloc[ ]
Step5: The function takes array as index, too.
Step6: Access the data array/list as array using .values
Step7: In this case, indexing by position may not be practical. Instead, we can designate the column of row label 'ID' as an 'index'. It is common operation to pick a column as index to work on. When indexing the dataframe, explicitly designate the row and columns, even if with colon ('
Step8: Use .values to access the data stored in the dataframe.
Step9: Ploting with matplotlib and cartopy
Step10: How to plot every drifter trajectory, aka spagetti plot?
Grouping Data Frames
In order to aggregate the data of each drifter, we can use group-by method. We can specify which column to group by. In this case, 'ID' will be the choice.
Step11: Dictionary is a collection of items, which are unordered, changeable and indexed. Each item can be different types such as number, string, list, etc.
Step12: Keys of each group are also in a dictionary.
Step13: You can access the items of a dictionary by referring to its key name, inside square brackets
Step14: Iterate over the dictinary above to access the coordinates of each drifter.
Step15: Select data in certain time period.
Set the date as index.
Step16: the "Date" index is Datetime Index
Step17: pd.date_range will give us a list of Index
Step18: use .strftime() method to convert "DatetimeIndex" to "Index" | Python Code:
import pandas as pd
Explanation: Data Analysis with Pandas Dataframe
Pandas is a popular library for manipulating vectors, tables, and time series. We will frequently use Pandas data structures instead of the built-in python data structures, as they provide much richer functionality. Also, Pandas is fast, which makes working with large datasets easier. Check out the official pandas website at [http://pandas.pydata.org/]
Pandas provides three data structures:
the series, which represents a single column of data similar to a python list. Series are most fundamental data structures in Pandas.
the data frame, which represents multiple series of data
the panel, which represents multiple data frames
Today we will mainly work with dataframe.
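As a small, made-up illustration of the first two structures:
# A Series is a single labeled column of values
s = pd.Series([1.5, 2.0, 3.5], name='temperature')
# A DataFrame is several Series sharing one index
df_example = pd.DataFrame({'temperature': [1.5, 2.0, 3.5],
                           'salinity': [35.1, 35.3, 35.0]})
df_example.head()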
End of explanation
glad = pd.read_csv('./GLAD_15min_filtered_S1_41days_sample.csv')
glad
glad.shape
Explanation: Data I/O
Data cleaning...
End of explanation
glad_orig.head()
glad_orig.tail()
Explanation: First and last five rows.
End of explanation
import numpy as np
np.zeros(240000)
glad['temperature'] = np.zeros(240000)
glad.head()
del glad['vel_Error']
del glad['Pos_Error']
glad.head()
glad.to_csv('./test.csv')
glad.to_csv?
glad.to_csv('./test_without_index.csv', index = False)
Explanation: Add or delete columns and write data to .csv file with one command line.
End of explanation
glad.iloc[0]
Explanation: Indexing and Slicing
.iloc[ ] : indexing by position
.loc[ ] : indexing by index
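A quick side-by-side sketch of the two (illustrative; at this point the frame still has its default integer index, so the labels happen to look like positions):
glad.iloc[0:3]                # rows selected by integer position
glad.loc[0:3, 'Latitude']     # rows selected by index label, one column by name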
End of explanation
glad_orig.iloc[:10]
Explanation: The function takes array as index, too.
End of explanation
glad_orig.iloc[0].values
Explanation: Access the data array/list as array using .values
End of explanation
glad_id = glad.set_index('ID')
glad_id.head()
glad_id.loc['CARTHE_021']
Explanation: In this case, indexing by position may not be practical. Instead, we can designate the 'ID' column as the index. It is a common operation to pick a column as the index to work on. When indexing the dataframe, explicitly designate the rows and columns, even if only with a colon (':').
End of explanation
lat = glad_id.loc['CARTHE_021', 'Latitude'].values
lat
lon = glad_id.loc['CARTHE_021', 'Longitude'].values
lon
Explanation: Use .values to access the data stored in the dataframe.
End of explanation
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
plt.figure(figsize = (6, 8))
min_lat, max_lat = 23, 30.5
min_lon, max_lon = -91.5, -85
ax = plt.axes(projection = ccrs.PlateCarree())
ax.set_extent([min_lon, max_lon, min_lat, max_lat], ccrs.PlateCarree())
ax.coastlines(resolution = '50m', color = 'black')
ax.gridlines(crs = ccrs.PlateCarree(), draw_labels = True, color = 'grey')
ax.plot(lon, lat)
Explanation: Plotting with matplotlib and cartopy
End of explanation
drifter_grouped = glad.groupby('ID')
Explanation: How to plot every drifter trajectory, aka spaghetti plot?
Grouping Data Frames
In order to aggregate the data of each drifter, we can use group-by method. We can specify which column to group by. In this case, 'ID' will be the choice.
End of explanation
drifter_grouped.groups
Explanation: Dictionary is a collection of items, which are unordered, changeable and indexed. Each item can be different types such as number, string, list, etc.
End of explanation
drifter_grouped.groups.keys()
Explanation: Keys of each group are also in a dictionary.
End of explanation
drifter_grouped.groups['CARTHE_021']
Explanation: You can access the items of a dictionary by referring to its key name, inside square brackets
End of explanation
drifter_ids = drifter_grouped.groups.keys()
for drifter_id in drifter_ids:
print(drifter_id)
glad_id.head()
plt.figure(figsize = (6, 8))
min_lat, max_lat = 23, 30.5
min_lon, max_lon = -91.5, -85
ax = plt.axes(projection = ccrs.PlateCarree())
ax.set_extent([min_lon, max_lon, min_lat, max_lat], ccrs.PlateCarree())
ax.coastlines(resolution = '50m', color = 'black')
ax.gridlines(crs = ccrs.PlateCarree(), draw_labels = True, color = 'grey')
for drifter_id in drifter_ids:
lon = glad_id.loc[drifter_id, 'Longitude'].values
lat = glad_id.loc[drifter_id, 'Latitude'].values
ax.plot(lon, lat)
Explanation: Iterate over the dictionary above to access the coordinates of each drifter.
End of explanation
glad_date = glad_orig.set_index('Date')
glad_date.head()
Explanation: Select data in certain time period.
Set the date as index.
End of explanation
glad_date.index
Explanation: the "Date" index is Datetime Index
End of explanation
date_range = pd.date_range(start = '2012-07-22', end = '2012-08-05')
date_range
glad_date.loc[date_range,:]
Explanation: pd.date_range will give us a list of Index
End of explanation
# define the period bounds used above
first_day, last_day = '2012-07-22', '2012-08-05'
date_range = pd.date_range(start=first_day, end = last_day).strftime("%Y-%m-%d")
date_range
glad_selected = glad_date.loc[date_range,:]
Explanation: use .strftime() method to convert "DatetimeIndex" to "Index"
End of explanation |
4,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Introduction
In an upcoming analysis, we want to calculate the structural similarity between test cases. For this, we need the information which test methods call which code in the application (the "production code").
In this blog post, I'll show how you can get this information by using jQAssistant for a Java application. With jQAssistant, you can scan the structural information of your software. I'll also explain the relevant database query that delivers the information we need later on.
Dataset
I've scanned a small pet project of mine called "DropOver" that was originally developed as a web application for organizing parties or bar-hoppings. I've just added jQAssistant as a Maven plugin to my project's Maven build (see here for a mini tutorial). The structures of this application are stored by jQAssistant in a property graph within the graph database Neo4j. A subgraph with the structural information that's relevant for our purposes looks like this
Step2: Cypher query explained
Let's go through that query from above step by step. The Cypher query that finds all test methods that call methods of our production types works as follows | Python Code:
import py2neo
import pandas as pd
graph = py2neo.Graph()
query = """
MATCH
(testMethod:Method)
-[:ANNOTATED_BY]->()-[:OF_TYPE]->
(:Type {fqn:"org.junit.Test"}),
(testType:Type)-[:DECLARES]->(testMethod),
(type)-[:DECLARES]->(method:Method),
(testMethod)-[i:INVOKES]->(method)
WHERE
NOT type.name ENDS WITH "Test"
AND type.fqn STARTS WITH "at.dropover"
AND NOT method.signature CONTAINS "<init>"
RETURN
testType.name as test_type,
testMethod.signature as test_method,
type.name as prod_type,
method.signature as prod_method,
COUNT(DISTINCT i) as invocations
ORDER BY
test_type, test_method, prod_type, prod_method
"""
invocations = pd.DataFrame(graph.data(query))
# reverse sort columns for better representation
invocations = invocations[invocations.columns[::-1]]
invocations.head()
Explanation: Introduction
In an upcoming analysis, we want to calculate the structural similarity between test cases. For this, we need the information which test methods call which code in the application (the "production code").
In this blog post, I'll show how you can get this information by using jQAssistant for a Java application. With jQAssistant, you can scan the structural information of your software. I'll also explain the relevant database query that delivers the information we need later on.
Dataset
I've scanned a small pet project of mine called "DropOver" that was originally developed as a web application for organizing parties or bar-hoppings. I've just added jQAssistant as a Maven plugin to my project's Maven build (see here for a mini tutorial). The structures of this application are stored by jQAssistant in a property graph within the graph database Neo4j. A subgraph with the structural information that's relevant for our purposes looks like this:
We can see the scanned software entities like Java types (red) or methods (blue) as well their relationships with each other. We can now explore the database's content with the included Neo4j browser frontend or access the data with a programming language. I use Python (the programming language we'll write our analysis later on) with the py2neo module (the bridge between Python and Neo4j). The information we need can be retrieved by creating and executing a Cypher query (explained in the following) – Neo4j's language for accessing information in the property graph.
Last, we store the results in a Pandas DataFrame named invocations for a nice tabular representation of the outputs and for further analysis.
End of explanation
invocations.to_csv("datasets/test_code_invocations.csv", sep=";", index=False)
Explanation: Cypher query explained
Let's go through that query from above step by step. The Cypher query that finds all test methods that call methods of our production types works as follows:
In the MATCH clause, we start our search for particular structural information. We first identify all test methods. These are methods that are annotated by @Test, which is an annotation that the JUnit4 framework provides.
cypher
MATCH
(testMethod:Method)-[:ANNOTATED_BY]->()-[:OF_TYPE]->(:Type {fqn:"org.junit.Test"})
Next, we find all the test classes that declare (via the DECLARES relationship type) all test methods from above.
cypher
(testType:Type)-[:DECLARES]->(testMethod)
With the same approach, we identify all the Java types and methods (at first regardless of their meaning; later, we'll define them as production types and methods).
cypher
(type)-[:DECLARES]->(method:Method)
Last, we find test methods that call methods of the other methods by querying the appropriate INVOKES relationship.
cypher
(testMethod)-[i:INVOKES]->(method)
In the WHERE clause, we define what we see as production type (and thus implicitly production method). We achieve this by saying that a production type is not a test and that the types must be within our application. These are all types whose fqn (fully qualified name) starts with at.dropover. We also filter out any calls to constructors, because those are irrelevant for our analysis.
cypher
WHERE
NOT type.name ENDS WITH "Test"
AND type.fqn STARTS WITH "at.dropover"
AND NOT method.signature CONTAINS "<init>"
In the RETURN clause, we just return the information needed for further analysis. These are all names of our test and production types as well as the signatures of the test methods and production methods. We also count the number of calls from the test methods to the production methods. This is a nice indicator for the cohesion of a test method to a production method.
cypher
RETURN
testType.name as test_type,
testMethod.signature as test_method,
type.name as prod_type,
method.signature as prod_method,
COUNT(DISTINCT i) as invocations
In the ORDER BY clause, we simply order the results in a useful way (and for reproducible results):
cypher
ORDER BY
test_type, test_method, prod_type, prod_method
A long explanation, but if you are familiar with Cypher and the underlying schema of your graph, you can write such queries within half a minute.
Data export
Because we need that data in a follow-up analysis, we store the information in a semicolon-separated file.
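As a sketch of how the follow-up analysis could pick this file up again (assuming only the file written above), reading it back is a one-liner as long as the same separator is used:
python
# Re-import the exported invocation data in the follow-up analysis.
import pandas as pd
invocations = pd.read_csv("datasets/test_code_invocations.csv", sep=";")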
End of explanation |
4,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Timescales in QuTiP
Andrew M.C. Dawes — 2016
An overview to one frequently asked question about QuTiP.
Introduction
QuTiP is a python package, if you are new to QuTiP, you should first read the tutorial materials available. If you have used QuTiP but are unsure about timescale and time units, this document will help clarify these concepts.
It is important to note that QuTiP routines do not care what time units you use. There is no internal unit of time. Time is defined purely by the units of the other values in your problem, so it is up to you to be careful about units! QuTiP includes a set of solvers for several equations that are relevant to quantum mechanics. Those equations relate various quantities (such as energy and time) and the units of these quantities are constrained only by the equations (i.e. not by QuTiP itself).
$$i \hbar \frac{d}{dt}\left|\psi\right\rangle = H\left|\psi\right\rangle$$
python imports
We'll use qutip, numpy, matplotlib according to the following import scheme
Step1: It will be useful to define the three components of a spin-1/2 in terms of the Pauli matrices
Step2: Now the Hamiltonian of a spin-1/2 system in an external magnetic field is $H = \boldsymbol{\omega}_L \cdot \boldsymbol{S}$ where $\boldsymbol{\omega}_L = -\gamma B$ is the Larmor frequency of precession. Here we see our first introduction to units in the system. Either by assumption (following convention) or by derivation, we see that the units of the Hamiltonian are angular frequency. In particular, the equation solved by QuTiP is | Python Code:
from qutip import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Timescales in QuTiP
Andrew M.C. Dawes — 2016
An overview to one frequently asked question about QuTiP.
Introduction
QuTiP is a python package, if you are new to QuTiP, you should first read the tutorial materials available. If you have used QuTiP but are unsure about timescale and time units, this document will help clarify these concepts.
It is important to note that QuTiP routines do not care what time units you use. There is no internal unit of time. Time is defined purely by the units of the other values in your problem, so it is up to you to be careful about units! QuTiP includes a set of solvers for several equations that are relevant to quantum mechanics. Those equations relate various quantities (such as energy and time) and the units of these quantities are constrained only by the equations (i.e. not by QuTiP itself).
$$i \hbar \frac{d}{dt}\left|\psi\right\rangle = H\left|\psi\right\rangle$$
python imports
We'll use qutip, numpy, matplotlib according to the following import scheme:
End of explanation
Sx = 0.5 * sigmax()
Sy = 0.5 * sigmay()
Sz = 0.5 * sigmaz()
Explanation: It will be useful to define the three components of a spin-1/2 in terms of the Pauli matrices:
End of explanation
H = 2 * np.pi * 3 * Sz
psi0 = 1/np.sqrt(2)*Qobj([[1],[1]])
times = np.linspace(0,1,50)
result = mesolve(H, psi0, times, [], [Sx, Sy, Sz])
x = result.expect[0]
y = result.expect[1]
z = result.expect[2]
plt.plot(times,x)
plt.plot(times,y)
plt.plot(times,z)
plt.ylim(-1.2,1.2)
Explanation: Now the Hamiltonian of a spin-1/2 system in an external magnetic field is $H = \boldsymbol{\omega}_L \cdot \boldsymbol{S}$ where $\boldsymbol{\omega}_L = -\gamma B$ is the Larmor frequency of precession. Here we see our first introduction to units in the system. Either by assumption (following convention) or by derivation, we see that the units of the Hamiltonian are angular frequency. In particular, the equation solved by QuTiP is:
$$i \frac{d}{dt}|\psi\rangle = \omega_L S_Z |\psi\rangle$$
If we make this angular frequency explicit by entering $f = 3$:
$$ H = 2 \pi \cdot 3 \cdot S_z$$
Hypothesis
We would expect this system to undergo three complete oscillations in one unit of time.
End of explanation |
4,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Create an experiment</h1>
Step1: <h1>Get a list of mzML files that you uploaded and assign them to a group</h1>
Step2: <h1>Specify the descriptive names for each group</h1>
Step3: <h1>Steps in the file description and conversion process</h1>
<ul>
<li>upload mzml files</li>
<li>glob to get list of mzml files</li>
<li>for a homogenous set of mzml files make a single filespec object with </li>
metatlas_objects.FileSpec(polarity = ,group = inclus = )
<li>Call an experiment, e = metatlas_objects.Experiment(name = 'Test_20150722')</li>
<li>e.load_files(mzmlfiles,sp)</li>
<li>repeat this process for each homogeneous set of files</li>
<li>Alternative, you can specify your own filespec object for each file</li>
</ul>
Step4: <h1>Convert All Your Files Manually</h1>
<h3>This is typically not performed because the "load_files" command above has already taken care of it</h3> | Python Code:
myExperiment = metatlas_objects.Experiment(name = 'QExactive_Hilic_Pos_Actinobacteria_Phylogeny')
Explanation: <h1>Create an experiment</h1>
End of explanation
myPath = '/global/homes/b/bpb/ExoMetabolomic_Example_Data/'
myPath = '/project/projectdirs/metatlas/data_for_metatlas_2/20150324_LPSilva_BHedlund_chloroflexi_POS_rerun/'
myFiles = glob.glob('%s*.mzML'%myPath)
myFiles.sort()
groupID = []
for f in myFiles:
groupID.append('')
i = 0
while i < len(myFiles):
a,b = os.path.split(myFiles[i])
j = raw_input('enter group id for %s [number, "x" to go back]:'%b)
if j == 'x':
i = i - 1
else:
groupID[i] = j
i = i + 1
print groupID
uGroupID = sorted(set(groupID))
print uGroupID
Explanation: <h1>Get a list of mzML files that you uploaded and assign them to a group</h1>
End of explanation
uGroupName = []
for u in uGroupID:
j = raw_input('enter group name for Group #%s: '%u)
uGroupName.append(j)
Explanation: <h1>Specify the descriptive names for each group</h1>
End of explanation
fsList = []
for i,g in enumerate(groupID):
for j,u in enumerate(uGroupID):
if g == u:
fs = metatlas_objects.FileSpec(polarity = 1,
group = uGroupName[j],
inclusion_order = i)
fsList.append(fs)
myExperiment.load_files([myFiles[i]],fs)
myExperiment.save()
print myExperiment.finfos[0].hdf_file
print myExperiment.finfos[0].group
print myExperiment.finfos[0].polarity
Explanation: <h1>Steps in the file description and conversion process</h1>
<ul>
<li>upload mzml files</li>
<li>glob to get list of mzml files</li>
<li>for a homogenous set of mzml files make a single filespec object with </li>
metatlas_objects.FileSpec(polarity = ,group = inclus = )
<li>Call an experiment, e = metatlas_objects.Experiment(name = 'Test_20150722')</li>
<li>e.load_files(mzmlfiles,sp)</li>
<li>repeat this process for each homogeneous set of files</li>
<li>Alternative, you can specify your own filespec object for each file</li>
</ul>
End of explanation
# myH5Files = []
# for f in myFiles:
# metatlas.mzml_to_hdf('%s'%(f))
# myH5Files.append(f.replace('.mzML','.h5'))
# print f
print len(myExperiment.finfos)
Explanation: <h1>Convert All Your Files Manually</h1>
<h3>This is typically not performed because the "load_files" command above has already taken care of it</h3>
End of explanation |
4,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align
Step1: 很多刚接触交易的人总喜欢把交易看成一种有固定收入的工作,比如他们有自己的规矩,周五一定要把所有股票都卖了,安安心心过周末,周一看情况一切良好再把股票买回来。
还有一些人有着很奇怪的癖好认为周三是他的幸运日,在周三买入他选中的股票,有些人每个月第一个周五发工资,市场就是由这些各种各样的人组成的。
某一个股票上的活跃用户在一段短时间内变化并不大,也就是说这些习惯周五卖周一买的人会反复在一支股票上交易,普通投资者普遍的投资方式是针对一支股票不断的进行买卖,他们不会长期持有这支股票,但也不会远离这支股票很长时间,我认为有两点促成了以上事实。
贪欲:贪欲在这中间起到了很大的作用,当一个人第一次买入一支股票并且持有到有一定利润的时候,他选择卖出这支股票,因为他认为涨的已经很多了,该适当的回调了,之后股价的走势只有两种可能:第一按照他的预期下跌,这样的话他可能选择跌到某种程度再次进场买入这支股票;第二就是继续上涨,这种情况下他会选择不断‘诅咒’这支股票,直到有一天股价上涨到让他无法忍受,从此由‘黑转粉’。
时间成本与懒惰:一个人类的时间和精力都是有限的,它无法获取市场中所有股票的信息,每次获取熟悉一支股票的时间成本在他看来也是非常巨大,他反复的盯着自己最频繁买卖的那几支股票。
1. 美股周期短线分析
下面先获取沙盒数据中美股一年的数据,做为短线分析示例:
Step2: 从日振幅涨跌幅比来看,只有BIDU和WUBA能勉强有短线套利的空间(值 > 1.8), 但是由于沙盒数据中只有这些symbol,所以暂时忽略这个特证,之后做非沙盒数据全市场周期短线分析时再使用这个值。
Step3: 下面先一个一个观察每一个股票的周期涨跌概率,可以发现:
特斯拉在周四上涨的概率最大59%
诺亚财富也在周四上涨的概率最大65%
百度在周五上涨概率达到60%
苹果在周三上涨概率达到56%
Step4: 假如择时策略中需要找到每一个股票上涨概率超过55%的交易日,做为策略买入的日子,比如下面示例找特斯拉超过55%的交易日:
Step5: 可以看到上面的结果就是符号要求的交易日,但是如果虽然周四的胜率很很高,但是周四的上涨比例很低呢,如果上涨比例很低,会造成盈亏比很低,造成最终交易依然亏损,下面使用date_week_mean看看上面各个美股每个交易日的涨跌比例,如下:
Step6: 看看特斯拉满足胜率要求的交易日中的涨跌幅比例,如下:
Step7: 可以看到周四的涨跌平均值是0.54,在具体策略编写中可以使用如下两种阀值计算方式,确定周四的涨幅比例是否高于下面两种算法:
Step8: 可以看到第一种算法的值计算为0.73,第二种为0.45,0.54虽然大于0.45但是小于0.73,即虽然特斯拉在周四有大概率的上涨可能,如果使用第一种算法,那么由于涨幅比例不符合要求,在具体策略中将不会发出买入信号。
备注:
上面的计算中0.618是可以在具体策略中通过参数传递
无论是上面使用的55%胜率还是0.618都是以制造非均衡概率优势为目的
在实际策略编写中根据交易量需求,以及市场交易目标数量等等确定具体使用上面那一种算法, 或两个并行生效
下面看看百度上涨概率超过55%的交易日:
Step9: 看看百度满足胜率要求的交易日中的涨跌幅比例,以及两种阀值计算,如下:
Step16: 结果看到第一种算法的值计算为0.17,第二种为0.28,0.25虽然大于0.17但是小于0.28,即如果策略中使用第一种阀值计算方式将满足买入信号发出,如果第二种就不满足。
2. 日胜率均值回复策略
实盘中使用的symbol数量会远远多于本例中使用沙盒数据的数量,策略可以要求两种阀值都满足,且加入更多的非均衡条件构造最终的非均衡结果,但是由于本例沙盒数据量少,所以下面编写策略时采用两种阀值计算方式满足一种即可,策略大概原理如下:
策略的性质属于:均值回复
默认以40天为周期(8周)结合涨跌阀值计算周几适合买入
回测运行中每一个月重新计算一次上述的周几适合买入
在策略日任务中买入信号为:昨天下跌,今天开盘也下跌,且明天是计算出来的上涨概率大的'周几'
具体策略编写如下所示:
Step17: 3. 各个市场回测日胜率均值回复策略
上面的AbuFactorBuyWD即完成了整个策略的编写,下面开始进行回测,如下所示:
Step18: 可以看到上面的回测中胜率超过了50%,从下面的交易单中可以看到所有交易都只持有了一天,如下:
Step19: 上面的策略中计算'周几'上涨概率最大的交易周期默认为40天周期(8周),这个周期长度不能太长也不能太短,因为某一个股票上的活跃用户只是在一段短时间内变化不大,但是一个市场中的参与者随着时间的流逝,也在慢慢不断变化,不断新老交替,就像我们人类,每7年我们就是一个全新的自己,所有细胞血液都将完全更新一遍。
下面使用这个策略对比特币,莱特币进行回测,如下所示:
Step20: 下面使用这个沙盒中A股市场symbol进行回测,如下所示: | Python Code:
# 基础库导入
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题
sys.path.insert(0, os.path.abspath('../'))
import abupy
# 使用沙盒数据,目的是和书中一样的数据环境
abupy.env.enable_example_env_ipython()
from abupy import abu, AbuFactorBuyTD, BuyCallMixin, ABuSymbolPd, ABuKLUtil
from abupy import AbuFactorSellNDay, AbuMetricsBase, ABuProgress
Explanation: ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>第26节 星期几是这个股票的‘好日子’</b></font>
</center>
作者: 阿布
阿布量化版权所有 未经允许 禁止转载
abu量化系统github地址 (欢迎+star)
本节ipython notebook
本节界面操作教程视频
上一节讲解量化交易中跨市场低频统计套利的示例,本节将示例一个与周期相关的短线择时策略,本节的内容是为《量化交易之路》中的一个小节做的完整策略实例补充。
首先导入本节需要使用的abupy中的模块:
End of explanation
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS']
kl_dict = {us_symbol[2:]:
ABuSymbolPd.make_kl_df(us_symbol, start='2014-07-26', end='2015-07-26')
for us_symbol in us_choice_symbols}
Explanation: 很多刚接触交易的人总喜欢把交易看成一种有固定收入的工作,比如他们有自己的规矩,周五一定要把所有股票都卖了,安安心心过周末,周一看情况一切良好再把股票买回来。
还有一些人有着很奇怪的癖好认为周三是他的幸运日,在周三买入他选中的股票,有些人每个月第一个周五发工资,市场就是由这些各种各样的人组成的。
某一个股票上的活跃用户在一段短时间内变化并不大,也就是说这些习惯周五卖周一买的人会反复在一支股票上交易,普通投资者普遍的投资方式是针对一支股票不断的进行买卖,他们不会长期持有这支股票,但也不会远离这支股票很长时间,我认为有两点促成了以上事实。
贪欲:贪欲在这中间起到了很大的作用,当一个人第一次买入一支股票并且持有到有一定利润的时候,他选择卖出这支股票,因为他认为涨的已经很多了,该适当的回调了,之后股价的走势只有两种可能:第一按照他的预期下跌,这样的话他可能选择跌到某种程度再次进场买入这支股票;第二就是继续上涨,这种情况下他会选择不断‘诅咒’这支股票,直到有一天股价上涨到让他无法忍受,从此由‘黑转粉’。
时间成本与懒惰:一个人类的时间和精力都是有限的,它无法获取市场中所有股票的信息,每次获取熟悉一支股票的时间成本在他看来也是非常巨大,他反复的盯着自己最频繁买卖的那几支股票。
1. 美股周期短线分析
下面先获取沙盒数据中美股一年的数据,做为短线分析示例:
End of explanation
ABuKLUtil.wave_change_rate(kl_dict)
Explanation: 从日振幅涨跌幅比来看,只有BIDU和WUBA能勉强有短线套利的空间(值 > 1.8), 但是由于沙盒数据中只有这些symbol,所以暂时忽略这个特证,之后做非沙盒数据全市场周期短线分析时再使用这个值。
End of explanation
pd.options.display.precision = 2
pd.options.display.max_columns = 30
ABuKLUtil.date_week_win(kl_dict)
Explanation: 下面先一个一个观察每一个股票的周期涨跌概率,可以发现:
特斯拉在周四上涨的概率最大59%
诺亚财富也在周四上涨的概率最大65%
百度在周五上涨概率达到60%
苹果在周三上涨概率达到56%
End of explanation
tl_dw = ABuKLUtil.date_week_win(kl_dict['TSLA'])
tl_dw_vd = tl_dw[tl_dw.win > 0.55]
tl_dw_vd
Explanation: 假如择时策略中需要找到每一个股票上涨概率超过55%的交易日,做为策略买入的日子,比如下面示例找特斯拉超过55%的交易日:
End of explanation
ABuKLUtil.date_week_mean(kl_dict)
Explanation: 可以看到上面的结果就是符号要求的交易日,但是如果虽然周四的胜率很很高,但是周四的上涨比例很低呢,如果上涨比例很低,会造成盈亏比很低,造成最终交易依然亏损,下面使用date_week_mean看看上面各个美股每个交易日的涨跌比例,如下:
End of explanation
tl_dwm = ABuKLUtil.date_week_mean(kl_dict['TSLA'])
tl_dwm.loc[tl_dw_vd.index]
Explanation: 看看特斯拉满足胜率要求的交易日中的涨跌幅比例,如下:
End of explanation
abs(tl_dwm.sum()).values[0] / 0.618, abs(tl_dwm._p_change).mean() / 0.618
Explanation: 可以看到周四的涨跌平均值是0.54,在具体策略编写中可以使用如下两种阀值计算方式,确定周四的涨幅比例是否高于下面两种算法:
End of explanation
bd_dw = ABuKLUtil.date_week_win(kl_dict['BIDU'])
bd_dw_vd = bd_dw[bd_dw.win > 0.55]
bd_dw_vd
Explanation: 可以看到第一种算法的值计算为0.73,第二种为0.45,0.54虽然大于0.45但是小于0.73,即虽然特斯拉在周四有大概率的上涨可能,如果使用第一种算法,那么由于涨幅比例不符合要求,在具体策略中将不会发出买入信号。
备注:
上面的计算中0.618是可以在具体策略中通过参数传递
无论是上面使用的55%胜率还是0.618都是以制造非均衡概率优势为目的
在实际策略编写中根据交易量需求,以及市场交易目标数量等等确定具体使用上面那一种算法, 或两个并行生效
下面看看百度上涨概率超过55%的交易日:
End of explanation
bd_dwm = ABuKLUtil.date_week_mean(kl_dict['BIDU'])
print(abs(bd_dwm.sum()).values[0] / 0.618, abs(bd_dwm._p_change).mean() / 0.618)
bd_dwm.loc[bd_dw_vd.index]
Explanation: 看看百度满足胜率要求的交易日中的涨跌幅比例,以及两种阀值计算,如下:
End of explanation
class AbuFactorBuyWD(AbuFactorBuyTD, BuyCallMixin):
def _init_self(self, **kwargs):
kwargs中可选参数:buy_dw: 代表周期胜率阀值,默认0.55即55%
kwargs中可选参数:buy_dwm: 代表涨幅比例阀值系数,默认0.618
kwargs中可选参数:dw_period: 代表分析dw,dwm所使用的交易周期,默认40天周期(8周)
self.buy_dw = kwargs.pop('buy_dw', 0.55)
self.buy_dwm = kwargs.pop('buy_dwm', 0.618)
self.dw_period = kwargs.pop('dw_period', 40)
# combine_kl_pd中包含择时金融时间数据与择时之前一年的金融时间数据, 先取出择时开始之前的周期数据
last_kl = self.combine_kl_pd.loc[:self.kl_pd.index[0]]
if last_kl.shape[0] > self.dw_period:
last_kl = last_kl[-self.dw_period:]
# 开始计算周几买,_make_buy_date把结果被放在self.buy_date_week序列中
self._make_buy_date(last_kl)
def fit_month(self, today):
月任务,每一个重新取之前一年的金融时间序列数据,重新计算一遍'周几买'
end_ind = self.combine_kl_pd[self.combine_kl_pd.date == today.date].key.values[0]
start_ind = end_ind - self.dw_period if end_ind - self.dw_period > 0 else 0
# 根据当前的交易日,切片过去的一年金融时间序列
last_kl = self.combine_kl_pd.iloc[start_ind:end_ind]
# 重新计算一遍'周几买'
self._make_buy_date(last_kl)
def fit_day(self, today):
日任务:昨天下跌,今天开盘也下跌,根据今天是周几,在不在序列self.buy_date_week中决定今天买不买
if self.yesterday.p_change < 0 and today.open < self.yesterday.close \
and int(today.date_week) in self.buy_date_week:
# 由于没有用到今天的收盘价格等,可以直接使用buy_today
return self.buy_today()
return None
# noinspection PyProtectedMember
def _make_buy_date(self, last_kl):
self.buy_date_week = []
# 计算周期内,周期的胜率
last_dw = ABuKLUtil.date_week_win(last_kl)
# 摘取大于阀值self.buy_dw的'周几',buy_dw默认0.55
last_dw_vd = last_dw[last_dw.win >= self.buy_dw]
eg: last_dw_vd
0 1 win
date_week
周四 3 5 0.62
周五 2 6 0.75
if len(last_dw_vd) > 0:
# 如果胜率有符合要求的,使用周几平均涨幅计算date_week_mean
last_dwm = ABuKLUtil.date_week_mean(last_kl)
# 摘取满足胜率的last_dw_vd
last_dwm_vd = last_dwm.loc[last_dw_vd.index]
eg: last_dwm_vd
_p_change
date_week
周四 1.55
周五 1.12
# 阀值计算方式1
dwm1 = abs(last_dwm.sum()).values[0] / self.buy_dwm
# 阀值计算方式2
dwm2 = abs(last_dwm._p_change).mean() / self.buy_dwm
# 如果symbol多可以使用&的关系
dm_effect = (last_dwm_vd._p_change > dwm1) | (last_dwm_vd._p_change > dwm2)
buy_date_loc = last_dwm_vd[dm_effect].index
eg: buy_date_loc
Index(['周四', '周五'], dtype='object', name='date_week')
if len(buy_date_loc) > 0:
# 如果涨跌幅阀值也满足,tolist,eg:['周一', '周二', '周三', '周四', '周五']
dw_index = last_dw.index.tolist()
# 如果是一周5个交易日的就是4,如果是比特币等7天交易日的就是6
max_ind = len(dw_index) - 1
for bdl in buy_date_loc:
sell_ind = dw_index.index(bdl)
buy_ind = sell_ind - 1 if sell_ind > 0 else max_ind
self.buy_date_week.append(buy_ind)
Explanation: 结果看到第一种算法的值计算为0.17,第二种为0.28,0.25虽然大于0.17但是小于0.28,即如果策略中使用第一种阀值计算方式将满足买入信号发出,如果第二种就不满足。
2. 日胜率均值回复策略
实盘中使用的symbol数量会远远多于本例中使用沙盒数据的数量,策略可以要求两种阀值都满足,且加入更多的非均衡条件构造最终的非均衡结果,但是由于本例沙盒数据量少,所以下面编写策略时采用两种阀值计算方式满足一种即可,策略大概原理如下:
策略的性质属于:均值回复
默认以40天为周期(8周)结合涨跌阀值计算周几适合买入
回测运行中每一个月重新计算一次上述的周几适合买入
在策略日任务中买入信号为:昨天下跌,今天开盘也下跌,且明天是计算出来的上涨概率大的'周几'
具体策略编写如下所示:
End of explanation
# 初始化资金
read_cash = 1000000
# 买入策略AbuFactorBuyWD,参数都使用默认的
buy_factors = [{'class': AbuFactorBuyWD}]
# 卖出策略使用AbuFactorSellNDay,sell_n=1即只持有一天,is_sell_today=True, 持有一天后当天卖出
sell_factors = [{'class': AbuFactorSellNDay, 'sell_n': 1, 'is_sell_today': True}]
def run_loo_back(choice_symbols, start, end):
abu_result_tuple, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start=start,
end=end,
choice_symbols=choice_symbols, n_process_pick=1)
ABuProgress.clear_output()
AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=True, only_info=True)
return abu_result_tuple
# 开始进行美股沙盒数据回测,沙盒数据中美股只有从13年7月到16年7月的数据,其它市场会多一些
abu_result_tuple = run_loo_back(us_choice_symbols, '2013-07-26', '2016-07-26')
Explanation: 3. 各个市场回测日胜率均值回复策略
上面的AbuFactorBuyWD即完成了整个策略的编写,下面开始进行回测,如下所示:
End of explanation
abu_result_tuple.orders_pd.filter(
['symbol', 'buy_date', 'sell_date', 'keep_days', 'profit'])[:7]
Explanation: 可以看到上面的回测中胜率超过了50%,从下面的交易单中可以看到所有交易都只持有了一天,如下:
End of explanation
_ = run_loo_back(['btc', 'ltc'], '2013-07-26', '2017-07-26')
Explanation: 上面的策略中计算'周几'上涨概率最大的交易周期默认为40天周期(8周),这个周期长度不能太长也不能太短,因为某一个股票上的活跃用户只是在一段短时间内变化不大,但是一个市场中的参与者随着时间的流逝,也在慢慢不断变化,不断新老交替,就像我们人类,每7年我们就是一个全新的自己,所有细胞血液都将完全更新一遍。
下面使用这个策略对比特币,莱特币进行回测,如下所示:
End of explanation
cn_choice_symbols = ['002230', '300104', '300059', '601766', '600085', '600036', '600809', '000002', '002594']
_ = run_loo_back(cn_choice_symbols, '2013-07-26', '2017-07-26')
Explanation: 下面使用这个沙盒中A股市场symbol进行回测,如下所示:
End of explanation |
4,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Emojify!
Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.
Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Lets get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Lets get coffee and talk. ☕️ Love you! ❤️"
You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.
In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.
Lets get started! Run the following cell to load the package you are going to use.
Step1: 1 - Baseline model
Step2: Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.
Step3: 1.2 - Overview of the Emojifier-V1
In this part, you are going to implement a baseline model called "Emojifier-v1".
<center>
<img src="images/image_1.png" style="width
Step4: Let's see what convert_to_one_hot() did. Feel free to change index to print out different values.
Step5: All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
1.3 - Implementing Emojifier-V1
As shown in Figure (2), the first step is to convert an input sentence into the word vector representation, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations.
Step6: You've loaded
Step8: Exercise
Step10: Expected Output
Step11: Run the next cell to train your model and learn the softmax parameters (W,b).
Step12: Expected Output (on a subset of iterations)
Step13: Expected Output
Step14: Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy."
Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
Step15: <font color='blue'>
What you should remember from this part
Step17: 2.1 - Overview of the model
Here is the Emojifier-v2 you will implement
Step18: Run the following cell to check what sentences_to_indices() does, and check your results.
Step20: Expected Output
Step22: Expected Output
Step23: Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters.
Step24: As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics your are want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics
Step25: It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
Step26: Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32.
Step27: Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
Step28: You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
Step29: Now you can try it on your own example. Write your own sentence below. | Python Code:
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Emojify!
Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.
Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Lets get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Lets get coffee and talk. ☕️ Love you! ❤️"
You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.
In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.
Lets get started! Run the following cell to load the package you are going to use.
End of explanation
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
Explanation: 1 - Baseline model: Emojifier-V1
1.1 - Dataset EMOJISET
Let's start by building a simple baseline classifier.
You have a tiny dataset (X, Y) where:
- X contains 127 sentences (strings)
- Y contains a integer label between 0 and 4 corresponding to an emoji for each sentence
<img src="images/data_set.png" style="width:700px;height:300px;">
<caption><center> Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. </center></caption>
Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).
End of explanation
index = 1
print(X_train[index], label_to_emoji(Y_train[index]))
Explanation: Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.
End of explanation
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
Explanation: 1.2 - Overview of the Emojifier-V1
In this part, you are going to implement a baseline model called "Emojifier-v1".
<center>
<img src="images/image_1.png" style="width:900px;height:300px;">
<caption><center> Figure 2: Baseline model (Emojifier-V1).</center></caption>
</center>
The input of the model is a string corresponding to a sentence (e.g. "I love you). In the code, the output will be a probability vector of shape (1,5), that you then pass in an argmax layer to extract the index of the most likely emoji output.
To get our labels into a format suitable for training a softmax classifier, lets convert $Y$ from its current shape current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$, where each row is a one-hot vector giving the label of one example, You can do so using this next code snipper. Here, Y_oh stands for "Y-one-hot" in the variable names Y_oh_train and Y_oh_test:
End of explanation
index = 50
print(Y_train[index], "is converted into one hot", Y_oh_train[index])
Explanation: Let's see what convert_to_one_hot() did. Feel free to change index to print out different values.
End of explanation
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
Explanation: All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
1.3 - Implementing Emojifier-V1
As shown in Figure (2), the first step is to convert an input sentence into the word vector representation, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations.
End of explanation
word = "cucumber"
index = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(index) + "th word in the vocabulary is", index_to_word[index])
Explanation: You've loaded:
- word_to_index: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000)
- index_to_word: dictionary mapping from indices to their corresponding words in the vocabulary
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.
Run the following cell to check if it works.
End of explanation
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = sentence.lower().split()
print(words)
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((50,))
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
avg += word_to_vec_map[w]
avg = avg / len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)
Explanation: Exercise: Implement sentence_to_avg(). You will need to carry out two steps:
1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful.
2. For each word in the sentence, access its GloVe representation. Then, average all these values.
End of explanation
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.dot(W, avg) + b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = -np.sum(Y_oh[i] * np.log(a))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map)
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
Explanation: Expected Output:
<table>
<tr>
<td>
**avg= **
</td>
<td>
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
</td>
</tr>
</table>
Model
You now have all the pieces to finish implementing the model() function. After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters.
Exercise: Implement the model() function described in Figure (2). Assuming here that $Yoh$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are:
$$ z^{(i)} = W . avg^{(i)} + b$$
$$ a^{(i)} = softmax(z^{(i)})$$
$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k * log(a^{(i)}_k)$$
It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the avg^{(i)} representation anyway, let's not bother this time.
We provided you a function softmax().
End of explanation
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
Explanation: Run the next cell to train your model and learn the softmax parameters (W,b).
End of explanation
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
Explanation: Expected Output (on a subset of iterations):
<table>
<tr>
<td>
**Epoch: 0**
</td>
<td>
cost = 1.95204988128
</td>
<td>
Accuracy: 0.348484848485
</td>
</tr>
<tr>
<td>
**Epoch: 100**
</td>
<td>
cost = 0.0797181872601
</td>
<td>
Accuracy: 0.931818181818
</td>
</tr>
<tr>
<td>
**Epoch: 200**
</td>
<td>
cost = 0.0445636924368
</td>
<td>
Accuracy: 0.954545454545
</td>
</tr>
<tr>
<td>
**Epoch: 300**
</td>
<td>
cost = 0.0343226737879
</td>
<td>
Accuracy: 0.969696969697
</td>
</tr>
</table>
Great! Your model has pretty high accuracy on the training set. Lets now see how it does on the test set.
1.4 - Examining test set performance
End of explanation
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
Explanation: Expected Output:
<table>
<tr>
<td>
**Train set accuracy**
</td>
<td>
97.7
</td>
</tr>
<tr>
<td>
**Test set accuracy**
</td>
<td>
85.7
</td>
</tr>
</table>
Random guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples.
In the training set, the algorithm saw the sentence "I love you" with the label ❤️. You can check however that the word "adore" does not appear in the training set. Nonetheless, lets see what happens if you write "I adore you."
End of explanation
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
Explanation: Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy."
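To see why word ordering is lost, note that the sentence representation is just an average of word vectors, so "not happy" and "happy not" map to exactly the same vector (a small sketch that only assumes the word_to_vec_map and the sentence_to_avg function defined earlier):
python
v1 = sentence_to_avg("not happy", word_to_vec_map)
v2 = sentence_to_avg("happy not", word_to_vec_map)
print(np.allclose(v1, v2))  # True: word order cannot influence the averaged vector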
Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).
End of explanation
import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)
Explanation: <font color='blue'>
What you should remember from this part:
- Even with a 127 training examples, you can get a reasonably good model for Emojifying. This is due to the generalization power word vectors gives you.
- Emojify-V1 will perform poorly on sentences such as "This movie is not good and not enjoyable" because it doesn't understand combinations of words--it just averages all the words' embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part.
2 - Emojifier-V2: Using LSTMs in Keras:
Let's build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji.
Run the following cell to load the Keras packages.
End of explanation
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary containing the each word mapped to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m, max_len))
for i in range(m): # loop over training examples
# Convert the ith training sentence in lower case and split is into words. You should get a list of words.
sentence_words = X[i].lower().split()
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# Set the (i,j)th entry of X_indices to the index of the correct word.
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j = j+1
### END CODE HERE ###
return X_indices
Explanation: 2.1 - Overview of the model
Here is the Emojifier-v2 you will implement:
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption>
2.2 Keras and mini-batching
In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.
The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, of the maximum sequence length is 20, we could pad every sentence with "0"s so that each input sentence is of length 20. Thus, a sentence "i love you" would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentences longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.
2.3 - The Embedding layer
In Keras, the embedding matrix is represented as a "layer", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras, initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train or leave fixed this layer.
The Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below.
<img src="images/embedding1.png" style="width:700px;height:250px;">
<caption><center> Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional. </center></caption>
The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors).
The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence.
Exercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to Embedding() (described in Figure 4).
End of explanation
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =", X1_indices)
Explanation: Run the following cell to check what sentences_to_indices() does, and check your results.
End of explanation
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
emb_matrix = np.zeros((vocab_len, emb_dim))
# Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
for word, index in word_to_index.items():
emb_matrix[index, :] = word_to_vec_map[word]
# Define Keras embedding layer with the correct output/input sizes, make it trainable. Use Embedding(...). Make sure to set trainable=False.
embedding_layer = Embedding(vocab_len, emb_dim, trainable = False)
### END CODE HERE ###
# Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
embedding_layer.build((None,))
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
Explanation: Expected Output:
<table>
<tr>
<td>
**X1 =**
</td>
<td>
['funny lol' 'lets play football' 'food is ready for you']
</td>
</tr>
<tr>
<td>
**X1_indices =**
</td>
<td>
[[ 155345. 225122. 0. 0. 0.] <br>
[ 220930. 286375. 151266. 0. 0.] <br>
[ 151204. 192973. 302254. 151349. 394475.]]
</td>
</tr>
</table>
Let's build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence.
Exercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps:
1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape.
2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map.
3. Define Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, then it will allow the optimization algorithm to modify the values of the word embeddings.
4. Set the embedding weights to be equal to the embedding matrix
End of explanation
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
### START CODE HERE ###
# Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
sentence_indices = Input(shape = input_shape, dtype = 'int32')
# Create the embedding layer pretrained with GloVe Vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer, you get back the embeddings
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
X = LSTM(128, return_sequences = True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X trough another LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a single hidden state, not a batch of sequences.
X = LSTM(128, return_sequences = False)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors.
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs = sentence_indices, outputs = X)
### END CODE HERE ###
return model
Explanation: Expected Output:
<table>
<tr>
<td>
**weights[0][1][3] =**
</td>
<td>
-0.3403
</td>
</tr>
</table>
2.3 Building the Emojifier-V2
Lets now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network.
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption>
Exercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len, ) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation().
End of explanation
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
Explanation: Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters.
End of explanation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Explanation: As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics your are want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics:
End of explanation
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
Explanation: It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
End of explanation
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
Explanation: Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32.
End of explanation
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
Explanation: Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
End of explanation
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
Explanation: You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
End of explanation
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
Explanation: Now you can try it on your own example. Write your own sentence below.
End of explanation |
4,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filter hits
Step1: Keep best hit per database for each cluster
Filtered by e-value < 1e-3 and best domain e-value < 1 | Python Code:
filt_hits = all_hmmer_hits[ (all_hmmer_hits.e_value < 1e-3) & (all_hmmer_hits.best_dmn_e_value < 1e-3) ]
filt_hits.to_csv("1_out/filtered_hmmer_all_hits.csv",index=False)
print(filt_hits.shape)
filt_hits.head()
Explanation: Filter hits
End of explanation
gb = filt_hits.groupby(["cluster","db"])
reliable_fam_hits = pd.DataFrame( hits.ix[hits.bitscore.idxmax()] for _,hits in gb )[["cluster","db","tool","query_id","subject_id",
"bitscore","e_value","s_description","best_dmn_e_value"]]
sorted_fam_hits = pd.concat( hits.sort_values(by="bitscore",ascending=False) for _,hits in reliable_fam_hits.groupby("cluster") )
sorted_fam_hits.to_csv("1_out/filtered_hmmer_best_hits.csv",index=False)
print(sorted_fam_hits.shape)
sorted_fam_hits.head()
Explanation: Keep best hit per database for each cluster
Filtered by e-value < 1e-3 and best domain e-value < 1
End of explanation |
4,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Primitive generators
This notebook contains tests for tohu's primitive generators.
Step1: Constant
Constant simply returns the same, constant value every time.
Step2: Boolean
Boolean returns either True or False, optionally with different probabilities.
Step3: Integer
Integer returns a random integer between low and high (both inclusive).
Step4: Float
Float returns a random float between low and high (both inclusive).
Step5: HashDigest
HashDigest returns hex strings representing hash digest values (or alternatively raw bytes).
HashDigest hex strings (uppercase)
Step6: HashDigest hex strings (lowercase)
Step7: HashDigest byte strings
Step8: NumpyRandomGenerator
This generator can produce random numbers using any of the random number generators supported by numpy.
Step9: FakerGenerator
FakerGenerator gives access to any of the methods supported by the faker module. Here are a couple of examples.
Example
Step10: Example
Step11: IterateOver
IterateOver is a generator which simply iterates over a given sequence. Note that once the generator has been exhausted (by iterating over all its elements), it needs to be reset before it can produce elements again.
Step12: SelectOne
Step13: By default, all possible values are chosen with equal probability, but this can be changed by passing a distribution as the parameter p.
Step14: We can see that the item 'cc' has the highest chance of being selected (70%), followed by 'ee' and 'aa' (12% and 10%, respectively).
Timestamp
Timestamp produces random timestamps between a start and end time (both inclusive).
Step15: If start or end are dates of the form YYYY-MM-DD (without the exact HH
Step16: For convenience, one can also pass a single date, which will produce timestamps during this particular date.
Step17: Note that the generated items are datetime objects (even though they appear as strings when printed above).
Step18: We can use the .strftime() method to create another generator which returns timestamps as strings instead of datetime objects.
Step19: CharString
Step20: It is possible to explicitly specify the character set.
Step21: There are also a few pre-defined character sets.
Step22: DigitString
DigitString is the same as CharString with charset='0123456789'.
Step23: Sequential
Generates a sequence of sequentially numbered strings with a given prefix.
Step24: Calling reset() on the generator makes the numbering start from 1 again.
Step25: Note that the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here | Python Code:
import tohu
from tohu.v4.primitive_generators import *
from tohu.v4.dispatch_generators import *
from tohu.v4.utils import print_generated_sequence
print(f'Tohu version: {tohu.__version__}')
Explanation: Primitive generators
This notebook contains tests for tohu's primitive generators.
End of explanation
g = Constant('quux')
print_generated_sequence(g, num=10, seed=12345)
Explanation: Constant
Constant simply returns the same, constant value every time.
End of explanation
g1 = Boolean()
g2 = Boolean(p=0.8)
print_generated_sequence(g1, num=20, seed=12345)
print_generated_sequence(g2, num=20, seed=99999)
Explanation: Boolean
Boolean returns either True or False, optionally with different probabilities.
End of explanation
g = Integer(low=100, high=200)
print_generated_sequence(g, num=10, seed=12345)
Explanation: Integer
Integer returns a random integer between low and high (both inclusive).
End of explanation
g = Float(low=2.3, high=4.2)
print_generated_sequence(g, num=10, sep='\n', fmt='.12f', seed=12345)
Explanation: Float
Float returns a random float between low and high (both inclusive).
End of explanation
g = HashDigest(length=6)
print_generated_sequence(g, num=10, seed=12345)
Explanation: HashDigest
HashDigest returns hex strings representing hash digest values (or alternatively raw bytes).
HashDigest hex strings (uppercase)
End of explanation
g = HashDigest(length=6, uppercase=False)
print_generated_sequence(g, num=10, seed=12345)
Explanation: HashDigest hex strings (lowercase)
End of explanation
g = HashDigest(length=10, as_bytes=True)
print_generated_sequence(g, num=5, seed=12345, sep='\n')
Explanation: HashDigest byte strings
End of explanation
g1 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g2 = NumpyRandomGenerator(method="poisson", lam=30)
g3 = NumpyRandomGenerator(method="exponential", scale=0.3)
g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=15)
g3.reset(seed=12345); print_generated_sequence(g3, num=4)
Explanation: NumpyRandomGenerator
This generator can produce random numbers using any of the random number generators supported by numpy.
End of explanation
g = FakerGenerator(method='name')
print_generated_sequence(g, num=8, seed=12345)
Explanation: FakerGenerator
FakerGenerator gives access to any of the methods supported by the faker module. Here are a couple of examples.
Example: random names
End of explanation
g = FakerGenerator(method='address')
print_generated_sequence(g, num=8, seed=12345, sep='\n---\n')
Explanation: Example: random addresses
End of explanation
seq = ['a', 'b', 'c', 'd', 'e']
g = IterateOver(seq)
g.reset()
print([x for x in g])
print([x for x in g])
g.reset()
print([x for x in g])
Explanation: IterateOver
IterateOver is a generator which simply iterates over a given sequence. Note that once the generator has been exhausted (by iterating over all its elements), it needs to be reset before it can produce elements again.
End of explanation
some_items = ['aa', 'bb', 'cc', 'dd', 'ee']
g = SelectOne(some_items)
print_generated_sequence(g, num=30, seed=12345)
Explanation: SelectOne
End of explanation
g = SelectOne(some_items, p=[0.1, 0.05, 0.7, 0.03, 0.12])
print_generated_sequence(g, num=30, seed=99999)
Explanation: By default, all possible values are chosen with equal probability, but this can be changed by passing a distribution as the parameter p.
End of explanation
g = Timestamp(start='1998-03-01 00:02:00', end='1998-03-01 00:02:15')
print_generated_sequence(g, num=10, sep='\n', seed=99999)
Explanation: We can see that the item 'cc' has the highest chance of being selected (70%), followed by 'ee' and 'aa' (12% and 10%, respectively).
Timestamp
Timestamp produces random timestamps between a start and end time (both inclusive).
End of explanation
g = Timestamp(start='2018-02-14', end='2018-02-18')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
Explanation: If start or end are dates of the form YYYY-MM-DD (without the exact HH:MM:SS timestamp), they are interpreted as start='YYYY-MM-DD 00:00:00' and end='YYYY-MM-DD 23:59:59', respectively - i.e., as the beginning and the end of the day.
End of explanation
g = Timestamp(date='2018-01-01')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
Explanation: For convenience, one can also pass a single date, which will produce timestamps during this particular date.
End of explanation
g.reset(seed=12345)
[next(g), next(g), next(g)]
Explanation: Note that the generated items are datetime objects (even though they appear as strings when printed above).
End of explanation
h = Timestamp(date='2018-01-01').strftime('%-d %b %Y, %H:%M (%a)')
h.reset(seed=12345)
[next(h), next(h), next(h)]
Explanation: We can use the .strftime() method to create another generator which returns timestamps as strings instead of datetime objects.
End of explanation
g = CharString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
Explanation: CharString
End of explanation
g = CharString(length=12, charset="ABCDEFG")
print_generated_sequence(g, num=5, sep='\n', seed=12345)
Explanation: It is possible to explicitly specify the character set.
End of explanation
g1 = CharString(length=12, charset="<lowercase>")
g2 = CharString(length=12, charset="<alphanumeric_uppercase>")
print_generated_sequence(g1, num=5, sep='\n', seed=12345); print()
print_generated_sequence(g2, num=5, sep='\n', seed=12345)
Explanation: There are also a few pre-defined character sets.
End of explanation
g = DigitString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
Explanation: DigitString
DigitString is the same as CharString with charset='0123456789'.
End of explanation
g = Sequential(prefix='Foo_', digits=3)
Explanation: Sequential
Generates a sequence of sequentially numbered strings with a given prefix.
End of explanation
g.reset()
print_generated_sequence(g, num=5)
print_generated_sequence(g, num=5)
print()
g.reset()
print_generated_sequence(g, num=5)
Explanation: Calling reset() on the generator makes the numbering start from 1 again.
End of explanation
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=99999); print_generated_sequence(g, num=5)
Explanation: Note that the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here:
End of explanation |
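One property worth spelling out, since every example above relies on it: resetting a generator with the same seed reproduces the same items. A quick check, a sketch that relies only on the reset(seed=...) and next() behaviour demonstrated above:
# Reproducibility check (a sketch): the same seed should yield the same items.
g = Integer(low=100, high=200)
g.reset(seed=12345)
first_run = [next(g) for _ in range(5)]
g.reset(seed=12345)
second_run = [next(g) for _ in range(5)]
assert first_run == second_run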
4,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reusing a pool of workers
Some algorithms require making several consecutive calls to a parallel function, interleaved with processing of the intermediate results. Calling Parallel several times in a loop is sub-optimal because it will create and destroy a pool of workers (threads or processes) several times, which can cause significant overhead.
For this case it is more efficient to use the context manager API of the Parallel class to re-use the same pool of workers for several calls to the Parallel object
Step1: Working with numerical data in shared memory (memmapping)
Automated array to memmap conversion | Python Code:
# Imports needed to run this snippet on its own
from math import sqrt
from joblib import Parallel, delayed

with Parallel(n_jobs=2) as parallel:
accumulator = 0.
n_iter = 0
while accumulator < 1000:
results = parallel(delayed(sqrt)(accumulator + i ** 2)for i in range(5))
accumulator += sum(results) # synchronization barrier
n_iter += 1
(accumulator, n_iter)
Explanation: Reusing a pool of workers
Some algorithms require making several consecutive calls to a parallel function, interleaved with processing of the intermediate results. Calling Parallel several times in a loop is sub-optimal because it will create and destroy a pool of workers (threads or processes) several times, which can cause significant overhead.
For this case it is more efficient to use the context manager API of the Parallel class to re-use the same pool of workers for several calls to the Parallel object:
End of explanation
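For contrast, here is a sketch of the pattern the passage warns against: constructing a new Parallel object (and therefore a fresh worker pool) on every pass through the loop. It produces the same result, just with the pool setup and teardown overhead paid repeatedly.
# Anti-pattern sketch: a new worker pool is created and destroyed on each iteration.
accumulator = 0.
n_iter = 0
while accumulator < 1000:
    results = Parallel(n_jobs=2)(delayed(sqrt)(accumulator + i ** 2) for i in range(5))
    accumulator += sum(results)
    n_iter += 1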
import numpy as np
from joblib import Parallel, delayed
from joblib.pool import has_shareable_memory
Parallel(n_jobs=2, max_nbytes=1e6)(
delayed(has_shareable_memory)(np.ones(int(i)))
for i in [1e2, 1e4, 1e6])
import joblib
joblib.__version__
Explanation: Working with numerical data in shared memory (memmapping)
Automated array to memmap conversion
End of explanation |
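The max_nbytes argument is the size threshold above which input arrays are dumped to disk and passed to the workers as memmaps. A sketch of the effect, reusing the exact calls from the cell above: with the 1e6-byte threshold only the largest array reports shareable memory, while a much larger threshold should disable memmapping for all three (assuming default float64 arrays).
# Raising max_nbytes well above the array sizes should make has_shareable_memory
# return False for all three inputs (a sketch reusing the calls above).
Parallel(n_jobs=2, max_nbytes=1e8)(
    delayed(has_shareable_memory)(np.ones(int(i)))
    for i in [1e2, 1e4, 1e6])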
4,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
With all that you've learned, your SQL queries are getting pretty long, which can make them hard to understand (and debug).
You are about to learn how to use AS and WITH to tidy up your queries and make them easier to read.
Along the way, we'll use the familiar pets table, but now it includes the ages of the animals.
AS
You learned in an earlier tutorial how to use AS to rename the columns generated by your queries, which is also known as aliasing. This is similar to how Python uses as for aliasing when doing imports like import pandas as pd or import seaborn as sns.
To use AS in SQL, insert it right after the column you select. Here's an example of a query without an AS clause
Step2: Since the block_timestamp column contains the date of each transaction in DATETIME format, we'll convert these into DATE format using the DATE() command.
We do that using a CTE, and then the next part of the query counts the number of transactions for each date and sorts the table so that earlier dates appear first.
Step3: Since they're returned sorted, we can easily plot the raw results to show us the number of Bitcoin transactions per day over the whole timespan of this dataset. | Python Code:
#$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "crypto_bitcoin" dataset
dataset_ref = client.dataset("crypto_bitcoin", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "transactions" table
table_ref = dataset_ref.table("transactions")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the "transactions" table
client.list_rows(table, max_results=5).to_dataframe()
Explanation: Introduction
With all that you've learned, your SQL queries are getting pretty long, which can make them hard to understand (and debug).
You are about to learn how to use AS and WITH to tidy up your queries and make them easier to read.
Along the way, we'll use the familiar pets table, but now it includes the ages of the animals.
AS
You learned in an earlier tutorial how to use AS to rename the columns generated by your queries, which is also known as aliasing. This is similar to how Python uses as for aliasing when doing imports like import pandas as pd or import seaborn as sns.
To use AS in SQL, insert it right after the column you select. Here's an example of a query without an AS clause:
And here's an example of the same query, but with AS.
These queries return the same information, but in the second query the column returned by the COUNT() function will be called Number, rather than the default name of f0__.
WITH ... AS
On its own, AS is a convenient way to clean up the data returned by your query. It's even more powerful when combined with WITH in what's called a "common table expression".
A common table expression (or CTE) is a temporary table that you return within your query. CTEs are helpful for splitting your queries into readable chunks, and you can write queries against them.
For instance, you might want to use the pets table to ask questions about older animals in particular. So you can start by creating a CTE which only contains information about animals more than five years old like this:
While this incomplete query above won't return anything, it creates a CTE that we can then refer to (as Seniors) while writing the rest of the query.
We can finish the query by pulling the information that we want from the CTE. The complete query below first creates the CTE, and then returns all of the IDs from it.
You could do this without a CTE, but if this were the first part of a very long query, removing the CTE would make it much harder to follow.
Also, it's important to note that CTEs only exist inside the query where you create them, and you can't reference them in later queries. So, any query that uses a CTE is always broken into two parts: (1) first, we create the CTE, and then (2) we write a query that uses the CTE.
Example: How many Bitcoin transactions are made per month?
We're going to use a CTE to find out how many Bitcoin transactions were made each day for the entire timespan of a bitcoin transaction dataset.
We'll investigate the transactions table. Here is a view of the first few rows. (The corresponding code is hidden, but you can un-hide it by clicking on the "Code" button below.)
End of explanation
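The pets-table example queries described above are not reproduced in this excerpt. Reconstructed from the description, they might look roughly like the following; the `pets` table reference and the Years_old column are hypothetical placeholders.
# Hedged reconstruction of the queries described in the text (illustrative only).
query_with_as = """
                SELECT Animal, COUNT(ID) AS Number
                FROM `pets`  -- hypothetical table reference
                GROUP BY Animal
                """

query_from_cte = """
                 WITH Seniors AS
                 (
                     SELECT ID, Name
                     FROM `pets`  -- hypothetical table reference
                     WHERE Years_old > 5
                 )
                 SELECT ID
                 FROM Seniors
                 """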
# Query to select the number of transactions per date, sorted by date
query_with_CTE = """
                 WITH time AS
                 (
                     SELECT DATE(block_timestamp) AS trans_date
                     FROM `bigquery-public-data.crypto_bitcoin.transactions`
                 )
                 SELECT COUNT(1) AS transactions,
                        trans_date
                 FROM time
                 GROUP BY trans_date
                 ORDER BY trans_date
                 """
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 10 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
query_job = client.query(query_with_CTE, job_config=safe_config)
# API request - run the query, and convert the results to a pandas DataFrame
transactions_by_date = query_job.to_dataframe()
# Print the first five rows
transactions_by_date.head()
Explanation: Since the block_timestamp column contains the date of each transaction in DATETIME format, we'll convert these into DATE format using the DATE() command.
We do that using a CTE, and then the next part of the query counts the number of transactions for each date and sorts the table so that earlier dates appear first.
End of explanation
transactions_by_date.set_index('trans_date').plot()
Explanation: Since they're returned sorted, we can easily plot the raw results to show us the number of Bitcoin transactions per day over the whole timespan of this dataset.
End of explanation |
4,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DiscreteDP
Implementation Details
Daisuke Oyama
Faculty of Economics, University of Tokyo
This notebook describes the implementation details of the DiscreteDP class.
For the theoretical background and notation,
see the lecture Discrete Dynamic Programming.
Solution methods
The DiscreteDP class currently implements the following solution algorithms
Step1: Analytical solution
Step2: Value iteration
Solve the problem by value iteration;
see Example 6.3.1, p.164 in Puterman (2005).
Step3: The number of iterations required to satisfy the termination criterion
Step4: The returned value function
Step5: It is indeed an $\varepsilon/2$-approximation of $v^*$
Step6: The returned policy function
Step7: Value iteration converges very slowly.
Let us replicate Table 6.3.1 on p.165
Step8: On the other hand, the span decreases faster than the norm;
the following replicates Table 6.6.1, page 205
Step9: The span-based termination criterion is satisfied when $i = 11$
Step10: In fact, modified policy iteration with $k = 0$ terminates with $11$ iterations
Step11: Policy iteration
If ${\sigma^i}$ is the sequence of policies obtained by policy iteration
with an initial policy $\sigma^0$,
one can show that $T^i v_{\sigma^0} \leq v_{\sigma^i}$ ($\leq v^*$),
so that the number of iterations required for policy iteration is smaller than
that for value iteration at least weakly,
and indeed in many cases, the former is significantly smaller than the latter.
Step12: Policy iteration returns the exact optimal value function (up to rounding errors)
Step13: To look into the iterations
Step14: See Example 6.4.1, pp.176-177.
Modified policy iteration
The evaluation step in policy iteration
which solves the linear equation $v = T_{\sigma} v$
to obtain the policy value $v_{\sigma}$
can be expensive for problems with a large number of states.
Modified policy iteration is to reduce the cost of this step
by using an approximation of $v_{\sigma}$ obtained by iteration of $T_{\sigma}$.
The tradeoff is that this approach only computes an $\varepsilon$-optimal policy,
and for small $\varepsilon$, takes a larger number of iterations than policy iteration
(but much smaller than value iteration).
Step15: The returned value function
Step16: It is indeed an $\varepsilon/2$-approximation of $v^*$
Step17: To look into the iterations | Python Code:
import numpy as np
import pandas as pd
from quantecon.markov import DiscreteDP
n = 2 # Number of states
m = 2 # Number of actions
# Reward array
R = [[5, 10],
[-1, -float('inf')]]
# Transition probability array
Q = [[(0.5, 0.5), (0, 1)],
[(0, 1), (0.5, 0.5)]] # Probabilities in Q[1, 1] are arbitrary
# Discount rate
beta = 0.95
ddp = DiscreteDP(R, Q, beta)
Explanation: DiscreteDP
Implementation Details
Daisuke Oyama
Faculty of Economics, University of Tokyo
This notebook describes the implementation details of the DiscreteDP class.
For the theoretical background and notation,
see the lecture Discrete Dynamic Programming.
Solution methods
The DiscreteDP class currently implements the following solution algorithms:
value iteration;
policy iteration (default);
modified policy iteration.
Policy iteration computes an exact optimal policy in finitely many iterations,
while value iteration and modified policy iteration return an $\varepsilon$-optimal policy
for a prespecified value of $\varepsilon$.
Value iteration relies on (only) the fact that
the Bellman operator $T$ is a contraction mapping
and thus iterative application of $T$ to any initial function $v^0$
converges to its unique fixed point $v^*$.
Policy iteration more closely exploits the particular structure of the problem,
where each iteration consists of a policy evaluation step,
which computes the value $v_{\sigma}$ of a policy $\sigma$
by solving the linear equation $v = T_{\sigma} v$,
and a policy improvement step, which computes a $v_{\sigma}$-greedy policy.
Modified policy iteration replaces the policy evaluation step
in policy iteration with "partial policy evaluation",
which computes an approximation of the value of a policy $\sigma$
by iterating $T_{\sigma}$ for a specified number of times.
Below we describe our implementation of these algorithms more in detail.
(While not explicit, in the actual implementation each algorithm is terminated
when the number of iterations reaches iter_max.)
Value iteration
DiscreteDP.value_iteration(v_init, epsilon, iter_max)
Choose any $v^0 \in \mathbb{R}^n$, and
specify $\varepsilon > 0$; set $i = 0$.
Compute $v^{i+1} = T v^i$.
If $\lVert v^{i+1} - v^i\rVert < [(1 - \beta) / (2\beta)] \varepsilon$,
then go to step 4;
otherwise, set $i = i + 1$ and go to step 2.
Compute a $v^{i+1}$-greedy policy $\sigma$, and return $v^{i+1}$ and $\sigma$.
Given $\varepsilon > 0$,
the value iteration algorithm terminates in a finite number of iterations,
and returns an $\varepsilon/2$-approximation of the optimal value funciton and
an $\varepsilon$-optimal policy function
(unless iter_max is reached).
Policy iteration
DiscreteDP.policy_iteration(v_init, iter_max)
Choose any $v^0 \in \mathbb{R}^n$ and compute a $v^0$-greedy policy $\sigma^0$;
set $i = 0$.
[Policy evaluation]
Compute the value $v_{\sigma^i}$ by solving the equation $v = T_{\sigma^i} v$.
[Policy improvement]
Compute a $v_{\sigma^i}$-greedy policy $\sigma^{i+1}$;
let $\sigma^{i+1} = \sigma^i$ if possible.
If $\sigma^{i+1} = \sigma^i$,
then return $v_{\sigma^i}$ and $\sigma^{i+1}$;
otherwise, set $i = i + 1$ and go to step 2.
The policy iteration algorithm terminates in a finite number of iterations, and
returns an optimal value function and an optimal policy function
(unless iter_max is reached).
Modified policy iteration
DiscreteDP.modified_policy_iteration(v_init, epsilon, iter_max, k)
Choose any $v^0 \in \mathbb{R}^n$, and
specify $\varepsilon > 0$ and $k \geq 0$;
set $i = 0$.
[Policy improvement]
Compute a $v^i$-greedy policy $\sigma^{i+1}$;
let $\sigma^{i+1} = \sigma^i$ if possible (for $i \geq 1$).
Compute $u = T v^i$ ($= T_{\sigma^{i+1}} v^i$).
If $\mathrm{span}(u - v^i) < [(1 - \beta) / \beta] \varepsilon$, then go to step 5;
otherwise go to step 4.
[Partial policy evaluation]
Compute $v^{i+1} = (T_{\sigma^{i+1}})^k u$ ($= (T_{\sigma^{i+1}})^{k+1} v^i$).
Set $i = i + 1$ and go to step 2.
Return
$v = u + [\beta / (1 - \beta)] [(\min(u - v^i) + \max(u - v^i)) / 2] \mathbf{1}$
and $\sigma_{i+1}$.
Given $\varepsilon > 0$,
provided that $v^0$ is such that $T v^0 \geq v^0$,
the modified policy iteration algorithm terminates in a finite number of iterations,
and returns an $\varepsilon/2$-approximation of the optimal value funciton and
an $\varepsilon$-optimal policy function
(unless iter_max is reached).
Remarks
Here we employ the termination criterion based on the span semi-norm,
where $\mathrm{span}(z) = \max(z) - \min(z)$ for $z \in \mathbb{R}^n$.
Since $\mathrm{span}(T v - v) \leq 2\lVert T v - v\rVert$,
this reaches $\varepsilon$-optimality faster than the norm-based criterion
as employed in the value iteration above.
Except for the termination criterion,
modified policy is equivalent to value iteration if $k = 0$ and
to policy iteration in the limit as $k \to \infty$.
Thus, if one would like to have value iteration with the span-based rule,
run modified policy iteration with $k = 0$.
In returning a value function, our implementation is slightly different from
that by Puterman (2005), Section 6.6.3, pp.201-202, which uses
$u + [\beta / (1 - \beta)] \min(u - v^i) \mathbf{1}$.
The condition for convergence, $T v^0 \geq v^0$, is satisfied
for example when $v^0 = v_{\sigma}$ for some policy $\sigma$,
or when $v^0(s) = \min_{(s', a)} r(s', a)$ for all $s$.
If v_init is not specified, it is set to the latter, $\min_{(s', a)} r(s', a))$.
Illustration
We illustrate the algorithms above
by the simple example from Puterman (2005), Section 3.1, pp.33-35.
End of explanation
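Before turning to DiscreteDP's own solvers, here is a minimal sketch of the value iteration pseudocode above, written directly against the R, Q, and beta defined for this problem (illustrative only; the class methods used later are the real implementation).
# Plain value iteration with the norm-based stopping rule described above.
R_arr = np.array(R)
Q_arr = np.array(Q)
eps = 1e-2
v = np.zeros(n)
while True:
    v_new = (R_arr + beta * (Q_arr @ v)).max(axis=1)   # v_new = T v
    if np.abs(v_new - v).max() < eps * (1 - beta) / (2 * beta):
        v = v_new
        break
    v = v_new
sigma_greedy = (R_arr + beta * (Q_arr @ v)).argmax(axis=1)  # a v-greedy policy
print(v, sigma_greedy)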
def sigma_star(beta):
sigma = np.empty(2, dtype=int)
sigma[1] = 0
if beta > 10/11:
sigma[0] = 0
else:
sigma[0] = 1
return sigma
def v_star(beta):
v = np.empty(2)
v[1] = -1 / (1 - beta)
if beta > 10/11:
v[0] = (5 - 5.5*beta) / ((1 - 0.5*beta) * (1 - beta))
else:
v[0] = (10 - 11*beta) / (1 - beta)
return v
sigma_star(beta=beta)
v_star(beta=beta)
Explanation: Analytical solution:
End of explanation
epsilon = 1e-2
v_init = [0, 0]
res_vi = ddp.solve(method='value_iteration', v_init=v_init, epsilon=epsilon)
Explanation: Value iteration
Solve the problem by value iteration;
see Example 6.3.1, p.164 in Puterman (2005).
End of explanation
res_vi.num_iter
Explanation: The number of iterations required to satisfy the termination criterion:
End of explanation
res_vi.v
Explanation: The returned value function:
End of explanation
np.abs(res_vi.v - v_star(beta=beta)).max() < epsilon/2
Explanation: It is indeed an $\varepsilon/2$-approximation of $v^*$:
End of explanation
res_vi.sigma
Explanation: The returned policy function:
End of explanation
num_reps = 164
values = np.empty((num_reps, n))
diffs = np.empty(num_reps)
spans = np.empty(num_reps)
v = np.array([0, 0])
values[0] = v
diffs[0] = np.nan
spans[0] = np.nan
for i in range(1, num_reps):
v_new = ddp.bellman_operator(v)
values[i] = v_new
diffs[i] = np.abs(v_new - v).max()
spans[i] = (v_new - v).max() - (v_new - v).min()
v = v_new
df = pd.DataFrame()
df[0], df[1], df[2], df[3] = values[:, 0], values[:, 1], diffs, spans
df.columns = '$v^i(0)$', '$v^i(1)$', \
'$\\lVert v^i - v^{i-1}\\rVert$', '$\\mathrm{span}(v^i - v^{i-1})$'
iter_nums = pd.Series(list(range(num_reps)), name='$i$')
df.index = iter_nums
display_nums = \
list(range(10)) + [10*i for i in range(1, 16)] + [160+i for i in range(4)]
df.iloc[display_nums, [0, 1, 2]]
Explanation: Value iteration converges very slowly.
Let us replicate Table 6.3.1 on p.165:
End of explanation
df.iloc[list(range(1, 13)) + [10*i for i in range(2, 7)], [2, 3]]
Explanation: On the other hand, the span decreases faster than the norm;
the following replicates Table 6.6.1, page 205:
End of explanation
epsilon * (1-beta) / beta
spans[11] < epsilon * (1-beta) / beta
Explanation: The span-based termination criterion is satisfied when $i = 11$:
End of explanation
epsilon = 1e-2
v_init = [0, 0]
k = 0
res_mpi_1 = ddp.solve(method='modified_policy_iteration',
v_init=v_init, epsilon=epsilon, k=k)
res_mpi_1.num_iter
res_mpi_1.v
Explanation: In fact, modified policy iteration with $k = 0$ terminates with $11$ iterations:
End of explanation
v_init = [0, 0]
res_pi = ddp.solve(method='policy_iteration', v_init=v_init)
res_pi.num_iter
Explanation: Policy iteration
If ${\sigma^i}$ is the sequence of policies obtained by policy iteration
with an initial policy $\sigma^0$,
one can show that $T^i v_{\sigma^0} \leq v_{\sigma^i}$ ($\leq v^*$),
so that the number of iterations required for policy iteration is smaller than
that for value iteration at least weakly,
and indeed in many cases, the former is significantly smaller than the latter.
End of explanation
res_pi.v
np.abs(res_pi.v - v_star(beta=beta)).max()
Explanation: Policy iteration returns the exact optimal value function (up to rounding errors):
End of explanation
v = np.array([0, 0])
sigma = np.array([-1, -1]) # Dummy
sigma_new = ddp.compute_greedy(v)
i = 0
while True:
print('Iterate {0}'.format(i))
print(' value: {0}'.format(v))
print(' policy: {0}'.format(sigma_new))
if np.array_equal(sigma_new, sigma):
break
sigma[:] = sigma_new
v = ddp.evaluate_policy(sigma)
sigma_new = ddp.compute_greedy(v)
i += 1
print('Terminated')
Explanation: To look into the iterations:
End of explanation
epsilon = 1e-2
v_init = [0, 0]
k = 6
res_mpi = ddp.solve(method='modified_policy_iteration',
v_init=v_init, epsilon=epsilon, k=k)
res_mpi.num_iter
Explanation: See Example 6.4.1, pp.176-177.
Modified policy iteration
The evaluation step in policy iteration
which solves the linear equation $v = T_{\sigma} v$
to obtain the policy value $v_{\sigma}$
can be expensive for problems with a large number of states.
Modified policy iteration is to reduce the cost of this step
by using an approximation of $v_{\sigma}$ obtained by iteration of $T_{\sigma}$.
The tradeoff is that this approach only computes an $\varepsilon$-optimal policy,
and for small $\varepsilon$, takes a larger number of iterations than policy iteration
(but much smaller than value iteration).
End of explanation
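To make the "partial policy evaluation" step concrete, here is a small sketch that uses only methods appearing elsewhere in this notebook: iterating T_sigma a number of times approximates the exact policy value that policy iteration obtains by solving v = T_sigma v.
# Exact evaluation vs. k sweeps of T_sigma for the policy just computed (a sketch).
sigma_mpi = res_mpi.sigma
v_exact = ddp.evaluate_policy(sigma_mpi)
T_s = ddp.T_sigma(sigma_mpi)
v_approx = np.zeros(n)
for _ in range(20):                       # k = 20 applications of T_sigma
    v_approx = T_s(v_approx)
print(np.abs(v_exact - v_approx).max())   # the error shrinks as k grows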
res_mpi.v
Explanation: The returned value function:
End of explanation
np.abs(res_mpi.v - v_star(beta=beta)).max() < epsilon/2
Explanation: It is indeed an $\varepsilon/2$-approximation of $v^*$:
End of explanation
epsilon = 1e-2
v = np.array([0, 0])
k = 6
i = 0
print('Iterate {0}'.format(i))
print(' v: {0}'.format(v))
sigma = np.empty(n, dtype=int) # Store the policy function
while True:
i += 1
u = ddp.bellman_operator(v, sigma=sigma)
diff = u - v
span = diff.max() - diff.min()
print('Iterate {0}'.format(i))
print(' sigma: {0}'.format(sigma))
print(' T_sigma(v): {0}'.format(u))
print(' span: {0}'.format(span))
if span < epsilon * (1-ddp.beta) / ddp.beta:
v = u + ((diff.max() + diff.min()) / 2) * \
(ddp.beta / (1 - ddp.beta))
break
ddp.operator_iteration(ddp.T_sigma(sigma), v=u, max_iter=k)
v = u
print(' T_sigma^k+1(v): {0}'.format(v))
print('Terminated')
print(' sigma: {0}'.format(sigma))
print(' v: {0}'.format(v))
Explanation: To look into the iterations:
End of explanation |
4,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1 Example of GibbsLDA and vbLDA
1.1 Loading Reuter corpus from NLTK
1.2 Inference through the Gibbs sampling
1.2.1 Print top 10 probability words for each topic
1.3 Inference through the Variational Bayes
1.3.1 Print top 10 probability words for each topic
# Example of GibbsLDA and vbLDA
This example requires installing three NLTK corpora: nltk.corpus.reuters, nltk.corpus.words, and nltk.corpus.stopwords.
Step1: Loading Reuter corpus from NLTK
Load reuter corpus including 1000 documents with maximum vocabulary size of 10000 from NLTK corpus
Step2: Inference through the Gibbs sampling
Step3: Print top 10 probability words for each topic
Step4: Inference through the Variational Bayes
Step5: Print top 10 probability words for each topic | Python Code:
import logging
import numpy as np
from ptm import GibbsLDA
from ptm import vbLDA
from ptm.nltk_corpus import get_reuters_ids_cnt
from ptm.utils import convert_cnt_to_list, get_top_words
Explanation: Table of Contents
1 Example of GibbsLDA and vbLDA
1.1 Loading Reuter corpus from NLTK
1.2 Inference through the Gibbs sampling
1.2.1 Print top 10 probability words for each topic
1.3 Inference through the Variational Bayes
1.3.1 Print top 10 probability words for each topic
# Example of GibbsLDA and vbLDA
This example requires installing three NLTK corpora: nltk.corpus.reuters, nltk.corpus.words, and nltk.corpus.stopwords.
You can download the corpora via `nltk.download()`
End of explanation
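If the corpora are not already installed, a one-off download along these lines should suffice:
# One-time download of the required NLTK corpora.
import nltk
for corpus in ("reuters", "words", "stopwords"):
    nltk.download(corpus)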
n_doc = 1000
voca, doc_ids, doc_cnt = get_reuters_ids_cnt(num_doc=n_doc, max_voca=10000)
docs = convert_cnt_to_list(doc_ids, doc_cnt)
n_voca = len(voca)
print('Vocabulary size:%d' % n_voca)
Explanation: Loading Reuter corpus from NLTK
Load reuter corpus including 1000 documents with maximum vocabulary size of 10000 from NLTK corpus
End of explanation
max_iter=100
n_topic=10
logger = logging.getLogger('GibbsLDA')
logger.propagate = False
model = GibbsLDA(n_doc, len(voca), n_topic)
model.fit(docs, max_iter=max_iter)
Explanation: Inference through the Gibbs sampling
End of explanation
for ti in range(n_topic):
top_words = get_top_words(model.TW, voca, ti, n_words=10)
print('Topic', ti ,': ', ','.join(top_words))
Explanation: Print top 10 probability words for each topic
End of explanation
logger = logging.getLogger('vbLDA')
logger.propagate = False
vbmodel = vbLDA(n_doc, n_voca, n_topic)
vbmodel.fit(doc_ids, doc_cnt, max_iter=max_iter)
Explanation: Inference through the Variational Bayes
End of explanation
for ti in range(n_topic):
top_words = get_top_words(vbmodel._lambda, voca, ti, n_words=10)
print('Topic', ti ,': ', ','.join(top_words))
Explanation: Print top 10 probability words for each topic
End of explanation |
4,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Changepoint analysis
This notebook reflects an intermediate stage of work on the project that became "You say you found a revolution." Underwood was attempting to directly compare non-overlapping segments of a timeline and then use changepoint analysis on the comparisons. That works, but in the end it may not provide results that are as satisfactory as Foote novelty (if you're careful about testing Foote appropriately). We've preserved the notebook, in case it's useful, but probably in the long run it would be better to run changepoint analysis on Foote novelties.
The basic problem here is that we want to know whether works of music and literature changed more rapidly in some periods of history than in others.
Defining "cultural change" is not of course a purely quantitative problem. But it's worth starting with a simple approach, to see what we find. If simple measures of similarity between works do reveal significant periods of acceleration, that would be interesting -- even if it's not the only kind of change that matters.
So let's provisionally define the distance between two works (songs or novels) as cosine distance between two vectors. In the case of a novel, the components of a vector might represent word frequencies, or topic frequencies. In the case of a song, they're going to represent the proportion of the song assigned to one of fourteen primary components inferred by Mauch et al. in The Evolution of Popular Music.
I'm using the Mauch et al. dataset as a test case. I'm not actually interested in making claims about the history of music, and I cannot know whether their dimension-reduction of music (using LDA and PCA) is reliable. But it's a large and well-organized dataset that I can use to develop ways of measuring historical change before I move on to my own (literary) data. Mauch and his coauthors have already identified three significant "revolutions" in the music data, which they date to specific years, but I'm skeptical about their assessment of statistical significance. (I believe the method they used is likely to produce p < 0.05 for any year in a historically-sequential time series.)
Let's try a different approach, using changepoint detection. We'll start by importing some useful modules, and loading the music data. The timeline is 200 quarter-year periods between 1960 and 2010. Each row of the data file represents a single song, and the variables we want are principal components in columns labeled PC1 to PC14. We're going to turn each song into a numpy vector, and aggregate the songs for each quarter in a dictionary where they can be recalled by "quarter numbers" that range from 0 to 199.
Step1: Ideally, the number of songs would be constant across the timeline. Otherwise we might run into unequal variance. In the music dataset, this is a significant issue.
Step2: Let's create a more even distribution by randomly selecting 75 songs per quarter. This won't produce a perfectly even distribution, but I believe it's close enough not to make a big difference.
Step3: Good enough. In principle, the dip that remains could make change seem more rapid around 2000 (since smaller samples could be more volatile). So we should be skeptical of any signal to that effect. (In practice, I don't think we'll see that sort of signal.)
Now, how to assess the pace of change?
A simple way is to compare the mean vectors for adjacent segments of the timeline.
Let's define a function.
Step4: Now we can straightforwardly plot the pace of change by comparing, say, the first half of each year (2 quarters) to the next.
Step5: The peaks here represent periods of rapid change within a year -- places where the centroid for the first half of the year was very different from the centroid for the second.
Now, of course we don't know that these changes continue in the same direction. Maybe periods of "rapid change" here are just periods where spring and summer was very different from fall and winter. The "rapid change" could just be rapid motion back and forth.
We can test this by using larger windows. For instance, what happens if we compare change year-over-year, or compare the previous two years to the next two years?
Step6: This is semi-reassuring. Periods where short-term differences are substantial mostly seem to be the same periods where difference is substantial over longer windows. So we're probably not seeing simple oscillation; there's some reason to think that short-term and long-term change correlate. (However, the arbitrariness of these windows is a basic problem. This is a place where I'd love to use a more elegant method if one is available. At the end of the notebook I discuss some possibilities.)
The results we're seeing loosely but only loosely conform to the results in the original article, where 1964, 1991, and especially 1983 were points of particularly rapid change.
But a more fundamental problem remains
Step7: The cumulative sum plot is not very easy to read. Peaks now represent years at the end of a period of rapid change; faster-than-normal change is indicated by a rising slope, and slower-than-normal change by a declining slope. The real point of the cumulative sum technique is that it allows us to run a significance test. If we randomly permuted the underlying year-to-year distances, how likely would we be to see graphs that span a vertical distance equal to the distance shown above between highest peak and deepest valley?
A lot of what I do here is guided by Wayne Taylor's discussion of changepoint analysis.
Step8: We can use the function we just defined to find all the changepoints in the time series. (Remember, we're looking now for points that mark significant divisions in rates of change.) The strategy we'll use to find all significant points of change is recursive. Test the whole series for a significant changepoint. If you find one, divide the series into two parts at the changepoint, and test both of the parts for significant changepoints. Keep dividing recursively, stopping when you get a non-significant result or the series is too small to matter. When you run this, you often find a large number of significant changepoints. But remember, you're also running more than one test! We're going to need to compensate for multiple comparisons.
Step9: This is a list of possible changepoints, each of which is represented as a (pvalue, year) tuple. We compensate for multiple comparisons using the Holm-Bonferroni method. Many fewer changepoints are returned.
Step10: Now we can use those changepoints to divide periods where the pace of change is really significantly different. Then we can visualize the mean pace of change for each of those periods. (It would also be nice to have confidence intervals, but this is a first pass.)
Step11: This method suggests that Mauch et al. are over-interpreting their evidence. We cannot really make claims about multiple "revolutions" dated precisely to 1964, 1983, and 1991. According to this graph, there's only one significant change in pace -- a decline from more rapid to less rapid change around 1983.
On the other hand, the arbitrariness of the window we're using for year-to-year comparisons remains basically troubling. Intuitively it's easy to imagine slow changes that would never make much difference on a year-to-year scale, but might be quite dramatic if we compared (say) four-year periods. And there's some evidence that could make a real difference.
Step12: Above, for instance, you see a really different story when we compare distances between the four years immediately before and after various points on the timeline. 1992 is now the most significant point of change. But when I analyze the time series on this scale, it's difficult to make any claims about statistical significance, because we don't have enough observations to use a "cumulative sum" technique.
There are two separate problems here
Step13: The best solution I can see is just to use visualizations that reveal multiple scales. The authors of "The Evolution of Popular Music" have one nice way to visualize multiple scales of comparison at once, by creating a distance matrix of all segments against all segments.
Step14: This is a great visualization, revealing similarity and difference on many different scales. The diagonal line here is basically the timeline, and yellow "squares" are in effect areas of similarity. The "pinch points" between squares are places where change is relatively rapid.
However, this visualization by itself is not a good foundation for claims about effect size or statistical significance. Mauch et al. run a permutation test on it, but it's not very helpful since all historically sequential data is going to have that yellow road running diagonally through the middle, and all permuted datasets won't. To infer statistical significance we have to run a permutation on a sequence of differences, which means we have to choose a particular window width.
Another possible solution to the arbitrariness of window width would be to run changepoint analysis directly on the underlying multivariate time series instead of on measurements of distance between fixed segments. I think there are such techniques. James and Matteson have an R package ecp that does this. But I notice they say that it makes the assumption "that observations are independent over time," and I don't think I can make that assumption. We know for a fact that songs near each other on the timeline tend to be much more similar than those far apart (because all historical datasets have a "diagonal yellow road").
So, unless there's a cool thing I haven't tried yet, I think problem 1 (arbitrariness of window width) is basically insoluble.
However there are some things we could do to solve problem 2 (the difficulty of testing significance at all on wide windows).
Take two
Step15: Not bad. Now we're getting somewhere. This plots the distances between pairs of quarters separated by four years (2 yrs on either side of the midpoint), and then identifies sections of the timeline where sequences of those distances are significantly above or below trend.
But the results we're seeing here are different enough from the year-over-year method that they may just confirm our worry that the width of the "window" you're using is an arbitrary and important parameter. For instance, how much do things change if we look at pairs of quarters separated by six years? | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import csv, os, random
import numpy as np
from collections import Counter
from scipy import spatial
songsbyquarter = dict()
numfields = 14
fieldnames = []
for i in range(14):
fieldnames.append('PC' + str(i+1))
maxquarter = 0
with open('EvolutionPopUSA_MainData.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
dateparts = row['quarter'].split(' Q')
year = int(dateparts[0])
quarter = int(dateparts[1])
quarter = (((year - 1960) * 4) + quarter) - 1
if quarter > maxquarter:
maxquarter = quarter
thisvector = np.zeros(14)
for i in range(14):
thisvector[i] = float(row[fieldnames[i]])
if quarter not in songsbyquarter:
songsbyquarter[quarter] = []
songsbyquarter[quarter].append(thisvector)
print(maxquarter)
Explanation: Changepoint analysis
This notebook reflects an intermediate stage of work on the project that became "You say you found a revolution." Underwood was attempting to directly compare non-overlapping segments of a timeline and then use changepoint analysis on the comparisons. That works, but in the end it may not provide results that are as satisfactory as Foote novelty (if you're careful about testing Foote appropriately). We've preserved the notebook, in case it's useful, but probably in the long run it would be better to run changepoint analysis on Foote novelties.
The basic problem here is that we want to know whether works of music and literature changed more rapidly in some periods of history than in others.
Defining "cultural change" is not of course a purely quantitative problem. But it's worth starting with a simple approach, to see what we find. If simple measures of similarity between works do reveal significant periods of acceleration, that would be interesting -- even if it's not the only kind of change that matters.
So let's provisionally define the distance between two works (songs or novels) as cosine distance between two vectors. In the case of a novel, the components of a vector might represent word frequencies, or topic frequencies. In the case of a song, they're going to represent the proportion of the song assigned to one of fourteen primary components inferred by Mauch et al. in The Evolution of Popular Music.
I'm using the Mauch et al. dataset as a test case. I'm not actually interested in making claims about the history of music, and I cannot know whether their dimension-reduction of music (using LDA and PCA) is reliable. But it's a large and well-organized dataset that I can use to develop ways of measuring historical change before I move on to my own (literary) data. Mauch and his coauthors have already identified three significant "revolutions" in the music data, which they date to specific years, but I'm skeptical about their assessment of statistical significance. (I believe the method they used is likely to produce p < 0.05 for any year in a historically-sequential time series.)
Let's try a different approach, using changepoint detection. We'll start by importing some useful modules, and loading the music data. The timeline is 200 quarter-year periods between 1960 and 2010. Each row of the data file represents a single song, and the variables we want are principal components in columns labeled PC1 to PC14. We're going to turn each song into a numpy vector, and aggregate the songs for each quarter in a dictionary where they can be recalled by "quarter numbers" that range from 0 to 199.
End of explanation
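To make the provisional definition above concrete: the distance between two groups of songs is just the cosine distance between their mean vectors. A short sketch, assuming both of the quarters used here are present in the data:
# Cosine distance between the centroids of two quarters (1960 Q1 vs. 1985 Q1).
centroid_a = np.mean(songsbyquarter[0], axis=0)
centroid_b = np.mean(songsbyquarter[100], axis=0)
print(spatial.distance.cosine(centroid_a, centroid_b))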
def display_count(songsbyquarter):
sequential_numbers = []
for i in range(200):
sequential_numbers.append(len(songsbyquarter[i]))
ax = plt.axes()
ax.set_ylim(0, max(sequential_numbers) + 5)
xvals = [1960 + x/4 for x in range(200)]
plt.plot(xvals, sequential_numbers)
plt.show()
display_count(songsbyquarter)
Explanation: Ideally, the number of songs would be constant across the timeline. Otherwise we might run into unequal variance. In the music dataset, this is a significant issue.
End of explanation
songsample = dict()
for i in range(200):
if len(songsbyquarter[i]) < 75:
n = len(songsbyquarter[i])
else:
n = 75
songsample[i] = random.sample(songsbyquarter[i], n)
display_count(songsample)
Explanation: Let's create a more even distribution by randomly selecting 75 songs per quarter. This won't produce a perfectly even distribution, but I believe it's close enough not to make a big difference.
End of explanation
import csv
with open('randomsubset.csv', mode='w', encoding = 'utf-8') as f:
writer = csv.writer(f)
header = ['quarternumber']
header.extend(fieldnames)
writer.writerow(header)
for i in range(200):
s = songsample[i]
for song in s:
outrow = [i]
outrow.extend(song)
writer.writerow(outrow)
def segment_cosine(center, halfwidth, songs):
''' Calculates the cosine distance between two segments of
the timeline -- one of length "halfwidth" before the position
called "center," and one of length "halfwidth" starting at
"center." We calculate cosine distance between centroids that
are simply the mean vector for each segment.
'''
global numfields
oldvec = np.zeros(numfields)
newvec = np.zeros(numfields)
for i in range(center - halfwidth, center):
if i in songs:
for song in songs[i]:
oldvec += song
for i in range(center, center + halfwidth):
if i in songs:
for song in songs[i]:
newvec += song
dist = spatial.distance.cosine(oldvec, newvec)
return dist
Explanation: Good enough. In principle, the dip that remains could make change seem more rapid around 2000 (since smaller samples could be more volatile). So we should be skeptical of any signal to that effect. (In practice, I don't think we'll see that sort of signal.)
Now, how to assess the pace of change?
A simple way is to compare the mean vectors for adjacent segments of the timeline.
Let's define a function.
End of explanation
def get_distance(interval, centroids):
''' Calculates the cosine distances between non-overlapping
segments of "interval" width. It also prints out the year where
distance is at a maximum.
'''
distances = []
years = []
for i in range(interval, 200 - interval, interval):
thisdist = segment_cosine(i, interval, centroids)
distances.append(thisdist)
years.append(1960 + i / 4)
return years, distances
def plot_timeseries(twotuple):
years, distances = twotuple
ax = plt.axes()
ax.set_ylim(min(distances) - 0.02, max(distances) + 0.02)
plt.plot(years, distances)
plt.show()
print(years[distances.index(max(distances))])
plot_timeseries(get_distance(2, songsample))
Explanation: Now we can straightforwardly plot the pace of change by comparing, say, the first half of each year (2 quarters) to the next.
End of explanation
x = plot_timeseries(get_distance(4, songsample))
x = plot_timeseries(get_distance(8, songsample))
Explanation: The peaks here represent periods of rapid change within a year -- places where the centroid for the first half of the year was very different from the centroid for the second.
Now, of course we don't know that these changes continue in the same direction. Maybe periods of "rapid change" here are just periods where spring and summer was very different from fall and winter. The "rapid change" could just be rapid motion back and forth.
We can test this by using larger windows. For instance, what happens if we compare change year-over-year, or compare the previous two years to the next two years?
End of explanation
def cumulative_sum(twotuple):
''' Returns a cumulative running sum of differences from the mean.
'''
years, distances = twotuple
meandist = sum(distances) / len(distances)
cs = []
seqlen = len(distances)
for i in range(seqlen):
if len(cs) > 0:
oldcs = cs[i-1]
else:
oldcs = 0
newcs = oldcs + (distances[i] - meandist)
cs.append(newcs)
assert len(cs) == len(years)
return years, cs
plot_timeseries(cumulative_sum(get_distance(4,songsample)))
Explanation: This is semi-reassuring. Periods where short-term differences are substantial mostly seem to be the same periods where difference is substantial over longer windows. So we're probably not seeing simple oscillation; there's some reason to think that short-term and long-term change correlate. (However, the arbitrariness of these windows is a basic problem. This is a place where I'd love to use a more elegant method if one is available. At the end of the notebook I discuss some possibilities.)
The results we're seeing loosely but only loosely conform to the results in the original article, where 1964, 1991, and especially 1983 were points of particularly rapid change.
But a more fundamental problem remains: we have no way of knowing whether the differences of rate between different periods are significant. The distance between 1981 and 1982 may be six or seven times the distance between 1986 and 1987. But is that difference significant, or could a variation of that magnitude occur randomly?
Normally I'd try a permutation test by shuffling songs and re-running the comparison. But that doesn't make sense here. The sequential ordering of songs is not an accidental feature of the dataset. We have very strong reason to expect that songs near each other on the timeline will be more similar than those far apart. So if we randomize placement on the timeline, we'll get distances between years that bear no relation at all to the distances you would see in any historical sequence.
However, changepoint analysis offers a solution. We can calculate changes on some fairly short window (say, year-to-year) and then use a cumulative summing technique to identify periods where those year-to-year distances tend to be consistently above or below average for the whole timeline. As long as the individual year-to-year distances are independent of each other, we can meaningfully test the statistical significance of these periods of sustained change by permuting, not the underlying songs, but the distance measurements.
End of explanation
def permute_test(distances, value_to_test):
''' Runs 1000 random permutations, to assess how often random sequences
produce a cumulative sum graph that varies as much as the actual one.
'''
peaktovalley = []
years = [0] * len(distances)
for i in range(1000):
permuted = random.sample(distances, len(distances))
years, cumsum = cumulative_sum((years, permuted))
thisp2v = (max(cumsum) - min(cumsum))
peaktovalley.append(thisp2v)
peaktovalley.sort(reverse = True)
for idx, value in enumerate(peaktovalley):
if value_to_test > value:
break
return (idx / 1000)
Explanation: The cumulative sum plot is not very easy to read. Peaks now represent years at the end of a period of rapid change; faster-than-normal change is indicated by a rising slope, and slower-than-normal change by a declining slope. The real point of the cumulative sum technique is that it allows us to run a significance test. If we randomly permuted the underlying year-to-year distances, how likely would we be to see graphs that span a vertical distance equal to the distance shown above between highest peak and deepest valley?
A lot of what I do here is guided by Wayne Taylor's discussion of changepoint analysis.
End of explanation
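As a usage example of the test just defined: apply it to the whole series of year-over-year distances, where the statistic is the observed peak-to-valley range of the cumulative sum.
# p-value for the full year-over-year series (a sketch using the functions above).
yrs, dists = get_distance(4, songsample)
_, cs = cumulative_sum((yrs, dists))
observed_range = max(cs) - min(cs)
print(permute_test(dists, observed_range))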
def find_changepoints(years, distances, existing_points):
years, cumsum = cumulative_sum((years, distances))
p2v = max(cumsum) - min(cumsum)
pval = permute_test(distances, p2v)
absolutes = [abs(x) for x in cumsum]
idx = absolutes.index(max(absolutes))
# Even if this segment of the time series fails to produce a significant
# changepoint, we record it so we know how many hypotheses
# we tested.
existing_points.append((pval, years[idx]))
if pval < 0.05:
# This is a significant changepoint. We now recursively test the
# halves on either side.
firstyrs = years[0: idx]
secondyrs = years[idx : len(years)]
firstdist = distances[0: idx]
seconddist = distances[idx : len(distances)]
# We only look for more changepoints in sequences that are reasonably
# long. This is admittedly an arbitrary parameter. Taking it out
# probably wouldn't change that much.
if len(firstyrs) > 9:
first_points = find_changepoints(firstyrs, firstdist, [])
else:
first_points = []
if len(secondyrs) > 9:
second_points = find_changepoints(secondyrs, seconddist, [])
else:
second_points = []
existing_points.extend(first_points)
existing_points.extend(second_points)
return existing_points
years, distances = get_distance(6, songsample)
cpoints = find_changepoints(years, distances, [])
cpoints.sort()
print(cpoints)
Explanation: We can use the function we just defined to find all the changepoints in the time series. (Remember, we're looking now for points that mark significant divisions in rates of change.) The strategy we'll use to find all significant points of change is recursive. Test the whole series for a significant changepoint. If you find one, divide the series into two parts at the changepoint, and test both of the parts for significant changepoints. Keep dividing recursively, stopping when you get a non-significant result or the series is too small to matter. When you run this, you often find a large number of significant changepoints. But remember, you're also running more than one test! We're going to need to compensate for multiple comparisons.
End of explanation
def holm_bonferroni(testlist):
''' Accepts a list of two-tuples, each of which is a pvalue
and a year. Sorts them and applies the Holm-Bonferroni
correction for multiple comparisons.
'''
testlist.sort()
accepted = []
m = len(testlist)
for idx, twotuple in enumerate(testlist):
pvalue, year = twotuple
        # Holm-Bonferroni step-down threshold: the k-th smallest p-value
        # (0-indexed idx) is compared against 0.05 / (m - idx).
        threshold = 0.05 / (m - idx)
if pvalue > threshold:
break
else:
accepted.append(twotuple)
return accepted
changepoints = holm_bonferroni(cpoints)
print(changepoints)
Explanation: This is a list of possible changepoints, each of which is represented as a (pvalue, year) tuple. We compensate for multiple comparisons using the Holm-Bonferroni method. Many fewer changepoints are returned.
End of explanation
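To make the correction concrete, here is a toy illustration of Holm's step-down thresholds; the p-values and year labels are made up and are not results from the song data.

```python
# Made-up (p-value, year) pairs, sorted from smallest to largest p-value.
toy = [(0.001, 1970), (0.012, 1980), (0.030, 1990), (0.200, 2000)]
alpha, m = 0.05, len(toy)
for idx, (pval, year) in enumerate(sorted(toy)):
    threshold = alpha / (m - idx)   # step-down threshold for the (idx+1)-th smallest p-value
    print(year, pval, round(threshold, 4), 'significant' if pval <= threshold else 'stop here')
    if pval > threshold:
        break
```

The first two p-values clear their thresholds, the third does not, and the procedure stops there.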
def plot_changepoints(songsample, changepoints, years, distances, overplot):
plt.rcParams["figure.figsize"] = [9.0, 6.0]
ax = plt.axes()
ax.set_ylim(0, max(distances) + 0.02)
ax.set_xlim(min(years) - 1, max(years) + 1)
if overplot:
plt.plot(years, distances, marker = 's')
else:
plt.plot(years, distances)
changepoints.sort(key = lambda x: x[1])
changepoints.append((0, 2009))
startpoint = 1960
changeidx = 0
for year in range(1960, 2010):
if year >= changepoints[changeidx][1]:
changeidx += 1
endpoint = year
thisrange = []
for year, dist in zip(years, distances):
if year >= startpoint and year < endpoint:
thisrange.append(dist)
thismean = sum(thisrange) / len(thisrange)
plt.hlines(thismean, startpoint, endpoint, 'r', linewidth = 3)
startpoint = endpoint
plt.show()
years, distances = get_distance(8,songsample)
plot_changepoints(songsample, changepoints, years, distances, True)
fig_size = plt.rcParams["figure.figsize"]
print(fig_size)
Explanation: Now we can use those changepoints to divide periods where the pace of change is really significantly different. Then we can visualize the mean pace of change for each of those periods. (It would also be nice to have confidence intervals, but this is a first pass.)
End of explanation
x = plot_timeseries(get_distance(16, songsample))
Explanation: This method suggests that Mauch et al. are over-interpreting their evidence. We cannot really make claims about multiple "revolutions" dated precisely to 1964, 1983, and 1991. According to this graph, there's only one significant change in pace -- a decline from more rapid to less rapid change around 1983.
On the other hand, the arbitrariness of the window we're using for year-to-year comparisons remains basically troubling. Intuitively it's easy to imagine slow changes that would never make much difference on a year-to-year scale, but might be quite dramatic if we compared (say) four-year periods. And there's some evidence that could make a real difference.
End of explanation
def smooth_distance(interval, centroids):
    ''' Calculates the cosine distance between the two adjacent,
    non-overlapping segments of "interval" quarters that sit on either
    side of each point on the timeline, sliding that point forward one
    quarter at a time. Returns the midpoint years and the distances.
    '''
distances = []
years = []
for i in range(interval, 200 - interval, 1):
thisdist = segment_cosine(i, interval, centroids)
distances.append(thisdist)
years.append(1960 + i / 4)
return years, distances
years, distances = smooth_distance(20, songsample)
plt.plot(years, distances)
years, distances = smooth_distance(16, songsample)
plt.plot(years, distances)
years, distances = smooth_distance(12, songsample)
plt.plot(years, distances)
plt.ylabel('Cosine distance')
plt.show()
Explanation: Above, for instance, you see a really different story when we compare distances between the four years immediately before and after various points on the timeline. 1992 is now the most significant point of change. But when I analyze the time series on this scale, it's difficult to make any claims about statistical significance, because we don't have enough observations to use a "cumulative sum" technique.
There are two separate problems here:
The arbitrariness of window width.
The problem that wide windows don't give you enough observations to test for significance.
The first one, I think, is just a basic problem that's hard to get around. Here's another example of how it affects the curves even if you don't insist on non-overlapping comparisons.
End of explanation
def distance_matrix(songs, maxquarter, numfields):
observations = maxquarter + 1
distmat = np.zeros((observations, observations))
for i in range(observations):
for j in range(observations):
icentroid = np.zeros(numfields)
jcentroid = np.zeros(numfields)
for song in songs[i]:
icentroid += song
for song in songs[j]:
jcentroid += song
dist = spatial.distance.cosine(icentroid, jcentroid)
distmat[i, j] = dist
return distmat
d = distance_matrix(songsbyquarter, maxquarter, numfields)
plt.matshow(d, origin = 'lower', cmap = plt.cm.YlOrRd)
plt.show()
Explanation: The best solution I can see is just to use visualizations that reveal multiple scales. The authors of "The Evolution of Popular Music" have one nice way to visualize multiple scales of comparison at once, by creating a distance matrix of all segments against all segments.
End of explanation
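One practical note on the distance_matrix function above: it rebuilds both centroids inside the double loop, which repeats the same work many times. A hypothetical refactor (not from the original notebook; it reuses the numpy and scipy objects already imported for the code above) would precompute the centroids once:

```python
def quarter_centroids(songs, observations, numfields):
    # Build each quarter's centroid (the sum of its song vectors) a single time.
    centroids = []
    for q in range(observations):
        c = np.zeros(numfields)
        for song in songs[q]:
            c += song
        centroids.append(c)
    return centroids

def distance_matrix_fast(songs, maxquarter, numfields):
    observations = maxquarter + 1
    centroids = quarter_centroids(songs, observations, numfields)
    distmat = np.zeros((observations, observations))
    for i in range(observations):
        for j in range(observations):
            distmat[i, j] = spatial.distance.cosine(centroids[i], centroids[j])
    return distmat
```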
def gappy_cosine(center, halfwidth, songs):
    ''' Calculates the cosine distance between two single quarters of
    the timeline: the quarter "halfwidth" positions before "center" and
    the quarter "halfwidth" positions after it. Each quarter's centroid
    is simply the sum of its song vectors.
    '''
global numfields
oldvec = np.zeros(numfields)
newvec = np.zeros(numfields)
for song in songs[center - halfwidth]:
oldvec += song
for song in songs[center + halfwidth]:
newvec += song
dist = spatial.distance.cosine(oldvec, newvec)
return dist
def gappy_distance(interval, centroids):
    ''' Calculates the cosine distance between the pair of single quarters
    lying "interval" quarters before and after each midpoint on the
    timeline. Returns the midpoint years and the distances.
    '''
distances = []
years = []
for i in range(interval, 200 - interval, 1):
thisdist = gappy_cosine(i, interval, centroids)
distances.append(thisdist)
years.append(1960 + i / 4)
return years, distances
years, distances = gappy_distance(8, songsample)
cpoints = find_changepoints(years, distances, [])
changepoints = holm_bonferroni(cpoints)
print(changepoints)
plot_changepoints(songsample, changepoints, years, distances, False)
# with open('fouryeardistances.csv', mode='w', encoding = 'utf-8') as f:
# writer = csv.writer(f)
# writer.writerow(['year', 'distance'])
# for year, distance in zip(years, distances):
# writer.writerow([year, distance])
Explanation: This is a great visualization, revealing similarity and difference on many different scales. The diagonal line here is basically the timeline, and yellow "squares" are in effect areas of similarity. The "pinch points" between squares are places where change is relatively rapid.
However, this visualization by itself is not a good foundation for claims about effect size or statistical significance. Mauch et al. run a permutation test on it, but it's not very helpful since all historically sequential data is going to have that yellow road running diagonally through the middle, and all permuted datasets won't. To infer statistical significance we have to run a permutation on a sequence of differences, which means we have to choose a particular window width.
Another possible solution to the arbitrariness of window width would be to run changepoint analysis directly on the underlying multivariate time series instead of on measurements of distance between fixed segments. I think there are such techniques. James and Matteson have an R package ecp that does this. But I notice they say that it makes the assumption "that observations are independent over time," and I don't think I can make that assumption. We know for a fact that songs near each other on the timeline tend to be much more similar than those far apart (because all historical datasets have a "diagonal yellow road").
So, unless there's a cool thing I haven't tried yet, I think problem 1 (arbitrariness of window width) is basically insoluble.
However there are some things we could do to solve problem 2 (the difficulty of testing significance at all on wide windows).
Take two: gappy distances
<em>Everything below is under erasure, because it involves overlapping windows of comparison, and I'm not sure that it can ever be principled to run a permutation test on values that were produced by overlapping comparisons! I've left the experiment here, though, so you can see what would happen if you did try it.</em>
So far we've been measuring the distances between adjacent segments of the timespan. Two years before Jan 1, 1970 and the two years after. Then, move forward two years, and compare the two years before and after Jan 1, 1972. The problem is that as distances get large, the number of observations decreases. You could just move the centerpoint forward a quarter at a time, from Jan 1, 1970 to Apr 1, 1970, keeping the "windows" two years wide -- but the problem with that idea is, adjacent observations would no longer be independent because the windows being compared would now overlap. A permutation test would no longer be a reliable way of testing the cumulative sum, because you'd be comparing apples (autocorrelated comparisons) and oranges (a really random series).
But what if we eliminated that problem by measuring the distance between pairs of quarters separated by a gap. In other words, we'll compare Oct 1, 1967 - Dec 31, 1967 to Jan 1, 1972 - Mar 31, 1972. Then we can still move forward a quarter at a time without (at least, I think without) necessarily destroying the independence of adjacent observations. In practice, adjacent observations will still tend to be similar. But that's only true because the changes inside that "gap" are really more rapid in some periods than in other periods. So we can still use a permutation test; the test won't be merely revealing the autocorrelation of the original time series, but the fact that paces of change are correlated (which is in fact the claim that we're seeking to test). I'm explaining this at length because I'm honestly still a little uncertain about it. It's true that sequential "gaps" will overlap. I think that's less problematic than overlaps in the actual segments-compared, but I want to hear what y'all have to say.
End of explanation
years, distances = gappy_distance(12, songsample)
cpoints = find_changepoints(years, distances, [])
changepoints = holm_bonferroni(cpoints)
print(changepoints)
plot_changepoints(songsample, changepoints, years, distances, False)
distmap = np.zeros((20, maxquarter + 1))
for i in range(1, 20):
years, distances = gappy_distance(i, songsample)
for j in range(200 - i*2):
distmap[i, j + i] = distances[j]
fig = plt.figure(figsize = (6.1,10))
ax = fig.add_subplot(111)
ax.matshow(distmap, origin = 'lower', cmap = 'Blues', extent = [1960,2010,0,20])
plt.show()
Explanation: Not bad. Now we're getting somewhere. This plots the distances between pairs of quarters separated by four years (2 yrs on either side of the midpoint), and then identifies sections of the timeline where sequences of those distances are significantly above or below trend.
But the results we're seeing here are different enough from the year-over-year method that they may just confirm our worry that the width of the "window" you're using is an arbitrary and important parameter. For instance, how much do things change if we look at pairs of quarters separated by six years?
End of explanation |
4,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Identifying Spam from SMS Text Messages
This analysis attempts to identify spam messages from a corpus of 5,574 SMS text messages. The corpus is labeled as either spam or ham (legitimate messages) with 4,827 as ham and 747 as spam. Using Sci-kit Learn and the Multinomial Naive Bayes model to classify messages as spam and ham.
We will look at various options to tune the model to see if we can get to 0 false positives, in which legitimate messages are labeled as spam. It is expected that a small percentage of spam messages making it through the spam filter is preferable to legitimate messages being excluded.
Sources
Step1: Loading the data from the UCI Machine Learning Repository
Step2: Data Exploration
Step3: Since the data is labeled for us, we can do further data exploration by taking a look at how spam and ham differ.
Step4: In addition to the difference in the number of ham vs. spam messages, it appears that spam messages are generally longer than ham messages and more normally distributed than ham messages.
Step5: Define the feature set through vectorization.
Step6: Using Yellowbrick
Step7: Using the default settings for our model does a pretty good job predicting spam and ham, although not perfect. The confusion matrix shows us that there are 12 misclassified messages: 5 actual spam messages predicted as ham (false negatives) and 7 actual ham messages predicted as spam (false positives).
I think it is more important to a user to receive 100% of their real messages while tolerating a few spam messages. So let's see if we can tune the model to eliminate the false positives that are tagged as spam but are really ham.
We can use grid search with cross-validation to find the optimal alpha value.
Step8: Since we are more concerned with minimizing the false positives especially with ham classified as spam, we will use an alpha value of 3.0 with fit_prior = True. | Python Code:
%matplotlib inline
import os
import json
import time
import pickle
import requests
from io import BytesIO
from zipfile import ZipFile
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction import text
import seaborn as sns
sns.set(font_scale=1.5)
Explanation: Identifying Spam from SMS Text Messages
This analysis attempts to identify spam messages from a corpus of 5,574 SMS text messages. The corpus is labeled as either spam or ham (legitimate messages) with 4,827 as ham and 747 as spam. Using Sci-kit Learn and the Multinomial Naive Bayes model to classify messages as spam and ham.
We will look at various options to tune the model to see if we can get to 0 false positives, in which legitimate messages are labeled as spam. It is expected that a small percentage of spam messages making it through the spam filter is preferable to legitimate messages being excluded.
Sources:
https://archive.ics.uci.edu/ml/datasets/sms+spam+collection
https://radimrehurek.com/data_science_python/
http://adataanalyst.com/scikit-learn/countvectorizer-sklearn-example/
End of explanation
URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
SMS_PATH = os.path.join('datasets', 'sms')
file_name = requests.get(URL)
zipfile = ZipFile(BytesIO(file_name.content))
zip_names = zipfile.namelist()
def fetch_data():
for file in zip_names:
if not os.path.isdir(SMS_PATH):
os.makedirs(SMS_PATH)
outpath = os.path.join(SMS_PATH, file)
extracted_file = zipfile.read(file)
with open(outpath, 'wb') as f:
f.write(extracted_file)
return outpath
DATA = fetch_data()
df = pd.read_csv(DATA, sep='\t', header=None)
df.columns = ['Label', 'Text']
Explanation: Loading the data from the UCI Machine Learning Repository
End of explanation
pd.set_option('max_colwidth', 220)
df.head(20)
df.describe()
df.info()
Explanation: Data Exploration
End of explanation
# Add a field to our dataframe with the length of each message.
df['Length'] = df['Text'].apply(len)
df.head()
df.groupby('Label').describe()
Explanation: Since the data is labeled for us, we can do further data exploration by taking a look at how spam and ham differ.
End of explanation
df.Length.plot(bins=100, kind='hist')
df.hist(column='Length', by='Label', bins=50, figsize=(10,4))
Explanation: In addition to the difference in the number of ham vs. spam messages, it appears that spam messages are generally longer than ham messages and more normally distributed than ham messages.
End of explanation
text_data = df['Text']
text_data.shape
# Give our target labels numbers.
df['Label_'] = df['Label'].map({'ham': 0, 'spam': 1})
#stop_words = text.ENGLISH_STOP_WORDS
#Adding stop words did not significantly improve the model.
#textWithoutNums = text_data.replace('\d+', 'NUM_', regex=True)
#Removing all of the numbers in the messages and replacing with a text string did not improve the model either.
vectorizer = CountVectorizer(analyzer='word') #, stop_words=stop_words)
#vectorizer.fit(textWithoutNums)
vectorizer.fit(text_data)
vectorizer.get_feature_names()
pd.DataFrame.from_dict(vectorizer.vocabulary_, orient='index').sort_values(by=0, ascending=False).head()
dtm = vectorizer.transform(text_data)
features = pd.DataFrame(dtm.toarray(), columns=vectorizer.get_feature_names())
features.shape
features.head()
X = features
y = np.array(df['Label_'].tolist())
from sklearn.model_selection import train_test_split  # sklearn.cross_validation has been removed; model_selection provides the same function
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print(X_train.shape, y_train.shape)
model = MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
model.fit(X_train, y_train)
y_pred_class = model.predict(X_test)
print(metrics.classification_report(y_test, y_pred_class))
print('Accuracy Score: ', metrics.accuracy_score(y_test, y_pred_class))
Explanation: Define the feature set through vectorization.
End of explanation
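As an aside, the same kind of features could also be produced inside a scikit-learn Pipeline, which keeps the document-term matrix sparse rather than densifying it with dtm.toarray(). This is only a sketch of an alternative, not what the rest of the notebook uses, and the variables named in the final comment are placeholders:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfTransformer

# Bundle vectorizer, optional tf-idf weighting, and classifier together.
spam_pipeline = Pipeline([
    ('counts', CountVectorizer(analyzer='word')),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])
# spam_pipeline.fit(raw_messages_train, labels_train)  # placeholders: raw text in, labels out
```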
from yellowbrick.classifier import ClassificationReport
bayes = MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
visualizer = ClassificationReport(bayes, classes=['ham', 'spam'])
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.poof()
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred_class)
sns.set(font_scale=1.5)
ax = plt.subplot()
sns.heatmap(cm, annot=True, ax=ax, fmt='g', cbar=False)
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Ham', 'Spam'])
ax.yaxis.set_ticklabels(['Ham', 'Spam'])
plt.show()
Explanation: Using Yellowbrick
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
# Split the dataset in two equal parts
X_train_, X_test_, y_train_, y_test_ = train_test_split(
X, y, test_size=0.5, random_state=1)
# Set the parameters by cross-validation
tuned_parameters = [{'alpha': [0.5, 1.0, 1.5, 2.0, 2.5, 3.0], 'class_prior':[None], 'fit_prior': [True, False]}]
scores = ['precision', 'recall']
for score in scores:
print("### Tuning hyper-parameters for %s ###" % score)
print()
clf = GridSearchCV(MultinomialNB(), tuned_parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train_, y_train_)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print()
print('Accuracy Score: ', metrics.accuracy_score(y_test, y_pred))
print()
Explanation: Using the default settings for our model does a pretty good job predicting spam and ham, although not perfect. The confusion matrix shows us that there are 12 misclassified messages: 5 actual spam messages predicted as ham (false negatives) and 7 actual ham messages predicted as spam (false positives).
I think it is more important to a user to receive 100% of their real messages while tolerating a few spam messages. So let's see if we can tune the model to eliminate the false positives that are tagged as spam but are really ham.
We can use grid search with cross-validation to find the optimal alpha value.
End of explanation
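One optional way to see which messages the default-settings model above gets wrong is to pull them straight out of the test split. This sketch assumes the variables from the earlier cells (X_test, y_test, y_pred_class, df) and relies on features and df sharing the same default integer index:

```python
# Positions in the test split where prediction and truth disagree.
misclassified = np.where(y_test != y_pred_class)[0]
for i in misclassified:
    original_index = X_test.index[i]           # map back to the row in df
    print(df.loc[original_index, 'Label'], '->', df.loc[original_index, 'Text'][:80])
```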
from yellowbrick.classifier import ClassificationReport
bayes = MultinomialNB(alpha=3.0, class_prior=None, fit_prior=True)
visualizer = ClassificationReport(bayes, classes=['ham', 'spam'])
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
g = visualizer.poof()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
print(X_train.shape, y_train.shape)
model = MultinomialNB(alpha=3.0, class_prior=None, fit_prior=True)
model.fit(X_train, y_train)
y_pred_class = model.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred_class)
sns.set(font_scale=1.5)
ax = plt.subplot()
sns.heatmap(cm, annot=True, ax=ax, fmt='g', cbar=False)
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Ham', 'Spam'])
ax.yaxis.set_ticklabels(['Ham', 'Spam'])
plt.show()
Explanation: Since we are most concerned with minimizing false positives (legitimate ham messages classified as spam), we will use an alpha value of 3.0 with fit_prior = True.
End of explanation |
4,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Economics with Jupyter Notebooks
Jupyter Notebook is "a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include
Step1: (in the future we will follow best practice and place library imports at the top of our notebooks).
Scatterplot of log GDP per capita and average growth 1960-2000
Step2: Same plot but excluding African countries
Step3: Interactive plots
There are ways to make plots like these interactive.
* On the next slide I use ipywidgets | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import seaborn as sns
from ipywidgets import interact
df = pd.read_stata(".\data\country.dta")
Explanation: Economics with Jupyter Notebooks
Jupyter Notebook is "a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more."
Open-source, browser-based
Evolved from ipython notebook to leverage huge scientific python ecosystem.
Now a 'language agnostic' platform so that you can use any of 50+ other kernels including MATLAB, Octave, Stata, Julia, etc.
Open-source, fast evolving, large community: Widely used in academic and scientific computing community.
Ties projects together: Code, data, documentation, output and analysis all in one place.
Encourages reproducible science:
Easy workflow from exploratory analysis to publish.
Works well with github and other open-source sharing tools.
Ways to view and run jupyter notebooks
Jupyter server for interactive computing Run on a local machine or cloud server to modify code and results on the fly.
On your computer:
Jupyter notebook on your local machine. I recommend using Anaconda to install Jupyter and scientific python. A good instalation guide here
nteract. A good one-click install solution for running notebooks. Provides you with standalone program that installs scientific python and runs jupyter notebooks (not quite full functionality).
Jupyter notebooks are big in the data science space. Lots of free cloud server solutions are emerging:
Microsoft Azure notebooks: Setup a free account for cloud hosted jupyter notebooks.
Google Colaboratory: Run notebooks stored on your google drive.
Cocalc: jupyter notebooks, SageMath and other cloud hosted services.
Try Jupyter: another cloud server, but you can't save work at all.
Static rendering for presentations and publishing.
Jupyter notebooks can be rendered in different ways for example as a styled HTML slideshow or page or as a PDF book by using tools and services such as github, nbconvert, Sphinx and Read the Docs.
This very notebook is:
hosted on the Dev-II repository on github where it is rendered in simple HTML (though underlying format is json).
viewable in HTML or as javascript slideshow via nbviewer. To then see in slideshow mode click on 'present' icon on top right.
Tied together with other documents via Sphinx to create a website on readthedocs
Also viewable as a PDF book on readthedocs
A simple jupyter notebook example:
We can combine text, math, code and graph outputs in one place. Let's study a simple economics question:
Are average incomes per capita converging?
Neoclassical growth theory:
Solow-growth model with Cobb-Douglas technology $f(k)=k^\alpha$.
Technology, saving rate $s$, capital depreciation rate $\delta$, population growth rate $n$ and technological change rate $g$ assumed same across countries.
Steady-state capital per worker to which countries are converging:
$$k^{*} = \left(\frac{n+g+\delta}{s}\right)^{\frac{1}{\alpha-1}} $$
Transitional dynamics:
$$\dot{k}(t) = s k(t)^{\alpha} -(n+g+\delta)k(t)$$
Diminishing returns to the accumulated factor $k$ implies convergence:
Lower initial capital stock implies lower initial per-capita GDP.
Country that starts with lower capital stock and GDP per capita 'catches up' by growing faster.
Convergence plots
Did countries with low levels of income per capita in 1960 grow faster?
I found a dataset from World Penn Tables on this website (original data source here).
Let us import useful python libraries for data handling and plots and load the dataset into a pandas dataframe:
End of explanation
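To see the convergence mechanics the model implies, here is a small illustrative simulation of the transitional dynamics for two countries that differ only in their initial capital stock; the parameter values are chosen purely for illustration and are not estimated from the data.

```python
# Illustrative-only parameters: s = 0.2, alpha = 0.33, n + g + delta = 0.08.
s, alpha, ngd = 0.2, 0.33, 0.08
k_star = (s / ngd) ** (1 / (1 - alpha))

def simulate(k0, periods=100, dt=1.0):
    path = [k0]
    for _ in range(periods):
        k = path[-1]
        path.append(k + dt * (s * k**alpha - ngd * k))  # dk = s*k^alpha - (n+g+delta)*k
    return path

plt.plot(simulate(0.2 * k_star), label='low initial k')
plt.plot(simulate(0.8 * k_star), label='high initial k')
plt.axhline(k_star, linestyle='--', color='grey')
plt.legend();
```

The country that starts further below the steady state grows faster and catches up, which is exactly the prediction the convergence plots below put to the data.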
g = sns.jointplot("lgdp60", "ggdp", data=df, kind="reg",
color ="b", size=7)
Explanation: (in the future we will follow best practice and place library imports at the top of our notebooks).
Scatterplot of log GDP per capita and average growth 1960-2000:
End of explanation
g = sns.jointplot("lgdp60", "ggdp", data=df[df.cont !="Africa"], kind="reg",
color ="r", size=7)
Explanation: Same plot but excluding African countries:
End of explanation
def jplot(region):
sns.jointplot("lgdp60", "ggdp", data=df[df.cont == region],
kind="reg", color ="g", size=7)
plt.show();
interact(jplot, region=list(df.cont.unique()))
Explanation: Interactive plots
There are ways to make plots like these interactive.
* On the next slide I use ipywidgets: When the notebook is run on a jupyter server 'radio buttons' above the plot allow quick re-plotting for selected country regions.
Other libraries such as Bokeh and Plotly create plots with embedded javascript code that allow interactive plot elements even on HTML renderings (i.e. in most browsers even if you do not have a jupyter server running).
Here is how do do a dropdown menu. First we write a function to take a 'region' as argument that plots data only for that region (by specifying that filter in the pandas dataframe). We then use interact from the ipywidgets library to switch quickly between regions.
You'll only see this as truly interactive on a live notebook, not in a static HTML rendering of the same notebook.
End of explanation |
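As a rough illustration of the Plotly route mentioned above (a sketch only; it assumes plotly is installed and is not used elsewhere in this notebook):

```python
import plotly.express as px
# Same scatter as before, but rendered with embedded javascript so it stays
# interactive even in an HTML export.
fig = px.scatter(df, x='lgdp60', y='ggdp', color='cont')
fig.show()
```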
4,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In the last chapter, our tests failed. This time we'll go about fixing them.
Our First Django App, and Our First Unit Test
Django encourages you to structure your code into apps
Step1: Unit Tests, and How They Differ from Functional Tests
The difference boils down to
Step2: Django has helpfully suggested we use a special version of TestCase, which it provides. It’s an augmented version of the standard unittest.TestCase, with some additional Django-specific features, which we’ll discover over the next few chapters.
You’ve already seen that the TDD cycle involves starting with a test that fails, then writing code to get it to pass. Well, before we can even get that far, we want to know that the unit test we’re writing will definitely be run by our automated test runner, whatever it is. In the case of functional_tests.py, we’re running it directly, but this file made by Django is a bit more like magic. So, just to make sure, let’s make a deliberately silly failing test
Step3: Run our new django test
Step4: Everything seems to be working! (This would be a good time to commit!)
$ git status # should show you lists/ is untracked
$ git add lists
$ git diff --staged # will show you the diff that you're about to commit
$ git commit -m "Add app for lists, with deliberately failing unit test"
Django's MVC, URLs, and View Functions
Django is broadly structured along a classic Model-View-Controller (MVC) pattern. Well, broadly. It definitely does have models, but its views are more like a controller, and it’s the templates that are actually the view part, but the general idea is there. If you’re interested, you can look up the finer points of the discussion in the Django FAQs.
Irrespective of any of that, like any web server, Django’s main job is to decide what to do when a user asks for a particular URL on our site. Django’s workflow goes something like this
Step5: What’s going on here?
What function is that? It’s the view function we’re going to write next, which will actually return the HTML we want. You can see from the import that we’re planning to store it in lists/views.py.
and
resolve is the function Django uses internally to resolve URLs, and find what view function they should map to. We’re checking that resolve, when called with “/”, the root of the site, finds a function called home_page.
So, what do you think will happen when we run the tests?
Step6: It’s a very predictable and uninteresting error
Step8: we interpret the traceback as telling us that, when trying to resolve “/”, Django raised a 404 error—in other words, Django can’t find a URL mapping for “/”. Let’s help it out.
urls.py
Django uses a file called urls.py to define how URLs map to view functions. There’s a main urls.py for the whole site in the superlists/superlists folder. Let’s go take a look
Step10: The first example entry has the regular expression ^$, which means an empty string—could this be the same as the root of our site, which we’ve been testing with “/”? Let’s find out—what happens if we include it?
Step11: That’s progress! We’re no longer getting a 404.
The message is slightly cryptic, but the unit tests have actually made the link between the URL / and the home_page = None in lists/views.py, and are now complaining that home_page is a NoneType. And that gives us a justification for changing it from being None to being an actual function. Every single code change is driven by the tests!
Step12: Hooray! Our first ever unit test pass! That’s so momentous that I think it’s worthy of a commit
Step13: What’s going on in this new test?
We create an HttpRequest object, which is what Django will see when a user’s browser asks for a page.
We pass it to our home_page view, which gives us a response. You won’t be surprised to hear that this object is an instance of a class called HttpResponse. Then, we assert that the .content of the response—which is the HTML that we send to the user—has certain properties.
& 5. We want it to start with an <html> tag which gets closed at the end. Notice that response.content is raw bytes, not a Python string, so we have to use the b'' syntax to compare them. More info is available in Django’s Porting to Python 3 docs.
And we want a <title> tag somewhere in the middle, with the words "To-Do lists" in it—because that’s what we specified in our functional test.
Once again, the unit test is driven by the functional test, but it’s also much closer to the actual code—we’re thinking like programmers now.
Let’s run the unit tests now and see how we get on
Step14: The Unit-Test/Code Cycle
We can start to settle into the TDD unit-test/code cycle now | Python Code:
%cd ../examples/superlists/
# Make a new app called lists
!python3 manage.py startapp lists
!tree .
Explanation: In the last chapter, our tests failed. This time we'll go about fixing them.
Our First Django App, and Our First Unit Test
Django encourages you to structure your code into apps: the theory is that one project can have many apps, you can use third-party apps developed by other people, and you might even reuse one of your own apps in a different project … although I admit I’ve never actually managed it myself! Still, apps are a good way to keep your code organised.
Let’s start an app for our to-do lists:
End of explanation
# %load lists/tests.py
from django.test import TestCase
# Create your tests here.
Explanation: Unit Tests, and How They Differ from Functional Tests
The difference boils down to:
* Functional tests test from the perspective of the user
* Unit tests test from the point of view of the developer
The TDD approach I’m following wants our application to be covered by both types of test. Our workflow will look a bit like this:
We start by writing a functional test, describing the new functionality from the user’s point of view.
Once we have a functional test that fails, we start to think about how to write code that can get it to pass (or at least to get past its current failure). We now use one or more unit tests to define how we want our code to behave—the idea is that each line of production code we write should be tested by (at least) one of our unit tests.
Once we have a failing unit test, we write the smallest amount of application code we can, just enough to get the unit test to pass. We may iterate between steps 2 and 3 a few times, until we think the functional test will get a little further.
Now we can rerun our functional tests and see if they pass, or get a little further. That may prompt us to write some new unit tests, and some new code, and so on.
Functional tests should help you build an application with the right functionality, and guarantee you never accidentally break it. Unit tests should help you to write code that’s clean and bug free.
Unit Testing in Django
Let’s see how to write a unit test for our home page view. Open up the new file at lists/tests.py, and you’ll see something like this:
End of explanation
%%writefile lists/tests.py
from django.test import TestCase
class SmokeTest(TestCase):
def test_bad_maths(self):
self.assertEqual(1 + 1, 3)
Explanation: Django has helpfully suggested we use a special version of TestCase, which it provides. It’s an augmented version of the standard unittest.TestCase, with some additional Django-specific features, which we’ll discover over the next few chapters.
You’ve already seen that the TDD cycle involves starting with a test that fails, then writing code to get it to pass. Well, before we can even get that far, we want to know that the unit test we’re writing will definitely be run by our automated test runner, whatever it is. In the case of functional_tests.py, we’re running it directly, but this file made by Django is a bit more like magic. So, just to make sure, let’s make a deliberately silly failing test:
End of explanation
!python3 manage.py test
Explanation: Run our new django test
End of explanation
%%writefile lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from lists.views import home_page #1
class HomePageTest(TestCase):
def test_root_url_resolves_to_home_page_view(self):
found = resolve('/') #2
self.assertEqual(found.func, home_page) #3
Explanation: Everything seems to be working! (This would be a good time to commit!)
$ git status # should show you lists/ is untracked
$ git add lists
$ git diff --staged # will show you the diff that you're about to commit
$ git commit -m "Add app for lists, with deliberately failing unit test"
Django's MVC, URLs, and View Functions
Django is broadly structured along a classic Model-View-Controller (MVC) pattern. Well, broadly. It definitely does have models, but its views are more like a controller, and it’s the templates that are actually the view part, but the general idea is there. If you’re interested, you can look up the finer points of the discussion in the Django FAQs.
Irrespective of any of that, like any web server, Django’s main job is to decide what to do when a user asks for a particular URL on our site. Django’s workflow goes something like this:
An HTTP request comes in for a particular URL.
Django uses some rules to decide which view function should deal with the request (this is referred to as resolving the URL).
The view function processes the request and returns an HTTP response.
So we want to test two things:
Can we resolve the URL for the root of the site (“/”) to a particular view function we’ve made?
Can we make this view function return some HTML which will get the functional test to pass?
Let’s start with the first. Open up lists/tests.py, and change our silly test to something like this:
End of explanation
!python3 manage.py test
Explanation: What’s going on here?
What function is that? It’s the view function we’re going to write next, which will actually return the HTML we want. You can see from the import that we’re planning to store it in lists/views.py.
and
resolve is the function Django uses internally to resolve URLs, and find what view function they should map to. We’re checking that resolve, when called with “/”, the root of the site, finds a function called home_page.
So, what do you think will happen when we run the tests?
End of explanation
%%writefile lists/views.py
from django.shortcuts import render
# Create your views here.
home_page = None
!python3 manage.py test
Explanation: It’s a very predictable and uninteresting error: we tried to import something we haven’t even written yet. But it’s still good news—for the purposes of TDD, an exception which was predicted counts as an expected failure. Since we have both a failing functional test and a failing unit test, we have the Testing Goat’s full blessing to code away.
At Last! We Actually Write Some Application Code!
It is exciting isn’t it? Be warned, TDD means that long periods of anticipation are only defused very gradually, and by tiny increments. Especially since we’re learning and only just starting out, we only allow ourselves to change (or add) one line of code at a time—and each time, we make just the minimal change required to address the current test failure.
I’m being deliberately extreme here, but what’s our current test failure? We can’t import home_page from lists.views? OK, let’s fix that—and only that. In lists/views.py:
End of explanation
# %load superlists/urls.py
superlists URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.8/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Add an import: from blog import urls as blog_urls
2. Add a URL to urlpatterns: url(r'^blog/', include(blog_urls))
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
]
Explanation: we interpret the traceback as telling us that, when trying to resolve “/”, Django raised a 404 error—in other words, Django can’t find a URL mapping for “/”. Let’s help it out.
urls.py
Django uses a file called urls.py to define how URLs map to view functions. There’s a main urls.py for the whole site in the superlists/superlists folder. Let’s go take a look:
End of explanation
%%writefile superlists/urls.py
superlists URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.8/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Add an import: from blog import urls as blog_urls
2. Add a URL to urlpatterns: url(r'^blog/', include(blog_urls))
from django.conf.urls import include, url
from django.contrib import admin
from lists import views
urlpatterns = [
url(r'^$', views.home_page, name='home'),
#url(r'^admin/', include(admin.site.urls)),
]
!python3 manage.py test
Explanation: The first example entry has the regular expression ^$, which means an empty string—could this be the same as the root of our site, which we’ve been testing with “/”? Let’s find out—what happens if we include it?
End of explanation
%%writefile lists/views.py
from django.shortcuts import render
# Create your views here.
def home_page():
pass
!python3 manage.py test
Explanation: That’s progress! We’re no longer getting a 404.
The message is slightly cryptic, but the unit tests have actually made the link between the URL / and the home_page = None in lists/views.py, and are now complaining that home_page is a NoneType. And that gives us a justification for changing it from being None to being an actual function. Every single code change is driven by the tests!
End of explanation
%%writefile lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.http import HttpRequest
from lists.views import home_page
class HomePageTest(TestCase):
def test_root_url_resolves_to_home_page_view(self):
found = resolve('/')
self.assertEqual(found.func, home_page)
def test_home_page_returns_correct_html(self):
request = HttpRequest() #1
response = home_page(request) #2
self.assertTrue(response.content.startswith(b'<html>')) #3
self.assertIn(b'<title>To-Do lists</title>', response.content) #4
self.assertTrue(response.content.endswith(b'</html>')) #5
Explanation: Hooray! Our first ever unit test pass! That’s so momentous that I think it’s worthy of a commit:
$ git diff # should show changes to urls.py, tests.py, and views.py
$ git commit -am "First unit test and url mapping, dummy view"
Unit Testing a View
On to writing a test for our view, so that it can be something more than a do-nothing function, and instead be a function that returns a real response with HTML to the browser. Open up lists/tests.py, and add a new test method. I’ll explain each bit:
End of explanation
!python3 manage.py test
Explanation: What’s going on in this new test?
We create an HttpRequest object, which is what Django will see when a user’s browser asks for a page.
We pass it to our home_page view, which gives us a response. You won’t be surprised to hear that this object is an instance of a class called HttpResponse. Then, we assert that the .content of the response—which is the HTML that we send to the user—has certain properties.
& 5. We want it to start with an <html> tag which gets closed at the end. Notice that response.content is raw bytes, not a Python string, so we have to use the b'' syntax to compare them. More info is available in Django’s Porting to Python 3 docs.
And we want a <title> tag somewhere in the middle, with the words "To-Do lists" in it—because that’s what we specified in our functional test.
Once again, the unit test is driven by the functional test, but it’s also much closer to the actual code—we’re thinking like programmers now.
Let’s run the unit tests now and see how we get on:
End of explanation
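Before moving on, here is a tiny illustrative snippet (not part of the project code) showing why the assertions above compare against bytes literals:

```python
content = b'<html><title>To-Do lists</title></html>'
print(content.startswith(b'<html>'))                          # True
print(b'<title>To-Do lists</title>' in content)               # True
print(content == '<html><title>To-Do lists</title></html>')  # False: bytes and str never compare equal in Python 3
```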
%%writefile lists/views.py
from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def home_page(request):
return HttpResponse('<html><title>To-Do lists</title></html>')
!python3 manage.py test
Explanation: The Unit-Test/Code Cycle
We can start to settle into the TDD unit-test/code cycle now:
In the terminal, run the unit tests and see how they fail.
In the editor, make a minimal code change to address the current test failure.
And repeat!
The more nervous we are about getting our code right, the smaller and more minimal we make each code change—the idea is to be absolutely sure that each bit of code is justified by a test. It may seem laborious, but once you get into the swing of things, it really moves quite fast—so much so that, at work, we usually keep our code changes microscopic even when we’re confident we could skip ahead.
Let’s see how fast we can get this cycle going:
Minimal code change:
lists/views.py.
```python
def home_page(request):
    pass
```
Tests:
```
    self.assertTrue(response.content.startswith(b'<html>'))
AttributeError: 'NoneType' object has no attribute 'content'
```
Code—we use django.http.HttpResponse, as predicted:
lists/views.py.
```python
from django.http import HttpResponse

# Create your views here.
def home_page(request):
    return HttpResponse()
```
Tests again:
```
    self.assertTrue(response.content.startswith(b'<html>'))
AssertionError: False is not true
```
Code again:
lists/views.py.
```python
def home_page(request):
    return HttpResponse('<html>')
```
Tests:
```
AssertionError: b'<title>To-Do lists</title>' not found in b'<html>'
```
Code:
lists/views.py.
```python
def home_page(request):
    return HttpResponse('<html><title>To-Do lists</title>')
```
Tests—almost there?
```
    self.assertTrue(response.content.endswith(b'</html>'))
AssertionError: False is not true
```
Come on, one last effort:
lists/views.py.
```python
def home_page(request):
    return HttpResponse('<html><title>To-Do lists</title></html>')
```
Surely?
```
$ python3 manage.py test
Creating test database for alias 'default'...
..
Ran 2 tests in 0.001s

OK
Destroying test database for alias 'default'...
```
Failed? What? Oh, it’s just our little reminder? Yes? Yes! We have a web page!
Ahem. Well, I thought it was a thrilling end to the chapter. You may still be a little baffled, perhaps keen to hear a justification for all these tests, and don’t worry, all that will come, but I hope you felt just a tinge of excitement near the end there.
Just a little commit to calm down, and reflect on what we’ve covered:
$ git diff # should show our new test in tests.py, and the view in views.py
$ git commit -am "Basic view now returns minimal HTML"
That was quite a chapter! Why not try typing git log, possibly using the --oneline flag, for a reminder of what we got up to:
$ git log --oneline
a6e6cc9 Basic view now returns minimal HTML
450c0f3 First unit test and url mapping, dummy view
ea2b037 Add app for lists, with deliberately failing unit test
[...]
Not bad—we covered:
Starting a Django app
The Django unit test runner
*The difference between FTs and unit tests
Django URL resolving and urls.py
Django view functions, request and response objects
And returning basic HTML
End of explanation |
4,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial Model fitting in GLS
In this exercise we will fit a linear model using a Spatial structure as covariance matrix.
We will use GLS to get better estimators.
As always we will need to load the necessary libraries.
Step1: Use this to automate the process. Be carefull it can overwrite current results
run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35
Importing data
We will use the FIA dataset and for exemplary purposes we will take a subsample of this data.
Also important.
The empirical variogram has been calculated for the entire data set using the residuals of an OLS model.
We will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS.
You can inspect the functions using the ?? symbol.
Step2: Now we will obtain the data from the calculated empirical variogram.
Step3: restricted w/ all data spatial correlation parameters
Log-Likelihood
Step4: Instantiating the variogram object
Step5: Instantiating theoretical variogram model | Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
sys.path.append('..')
sys.path.append('../spystats')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Use the ggplot style
plt.style.use('ggplot')
import tools
Explanation: Spatial Model fitting in GLS
In this exercise we will fit a linear model using a Spatial structure as covariance matrix.
We will use GLS to get better estimators.
As always we will need to load the necessary libraries.
End of explanation
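For reference, the quantity the GLS helpers below estimate is the standard generalized least squares solution. Writing the spatial covariance matrix built from the fitted variogram as $\Sigma$, the coefficients and their covariance are

$$\hat{\beta}_{GLS} = (X^{\top}\Sigma^{-1}X)^{-1}X^{\top}\Sigma^{-1}y, \qquad \operatorname{Var}(\hat{\beta}_{GLS}) = (X^{\top}\Sigma^{-1}X)^{-1}$$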
from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust
section = initAnalysis("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv",
"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
-130,-60,30,40)
import rpy2
import rpy2.robjects as ro
from rpy2.robjects import r, pandas2ri
pandas2ri.activate()
r_section = pandas2ri.pandas2ri(section)
M = r.lm('logBiomass~logSppN', data=r_section)
print(r.summary(M).rx2('coefficients'))
r.library('nlme')
#section = initAnalysis("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv",
# "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
# -85,-80,30,35)
# IN HEC
#section = initAnalysis("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv","/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",-85,-80,30,35)
section.shape
Explanation: Use this to automate the process. Be careful: it can overwrite current results.
run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35
Importing data
We will use the FIA dataset and for exemplary purposes we will take a subsample of this data.
Also important.
The empirical variogram has been calculated for the entire data set using the residuals of an OLS model.
We will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS.
You can inspect the functions using the ?? symbol.
End of explanation
gvg,tt = loadVariogramFromData("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",section)
gvg.plot(refresh=False,with_envelope=True)
corrm = gvg.calculateCovarianceMatrix()
C = r.corSymm(corrm)
mod4 = r.gls('logBiomass ~ logSppN', data=r_section,correlation = C)
resum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=1,distance_threshold=1000000)
resum.as_text
Explanation: Now we will obtain the data from the calculated empirical variogram.
End of explanation
plt.plot(resultspd.rsq)
plt.title("GLS feedback algorithm")
plt.xlabel("Number of iterations")
plt.ylabel("R-sq fitness estimator")
resultspd.columns
a = map(lambda x : x.to_dict(), resultspd['params'])
paramsd = pd.DataFrame(a)
paramsd
plt.plot(paramsd.Intercept.loc[1:])
plt.gca().get_yaxis().get_major_formatter().set_useOffset(False)  # plt itself has no get_yaxis(); grab the current Axes first
fig = plt.figure(figsize=(10,10))
plt.plot(paramsd.logSppN.iloc[1:])
variogram_data_path = "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv"
thrs_dist = 100000
emp_var_log_log = pd.read_csv(variogram_data_path)
Explanation: restricted w/ all data spatial correlation parameters
Log-Likelihood: -16607
AIC: 3.322e+04
restricted w/ restricted spatial correlation parameters
Log-Likelihood: -16502.
AIC: 3.301e+04
End of explanation
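For reference, the AIC values quoted above follow the usual definition, which ties them to the log-likelihoods ($k$ being the number of estimated parameters):

$$AIC = 2k - 2\ln\hat{L}$$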
gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist)
gvg.envelope = emp_var_log_log
gvg.empirical = emp_var_log_log.variogram
gvg.lags = emp_var_log_log.lags
#emp_var_log_log = emp_var_log_log.dropna()
#vdata = gvg.envelope.dropna()
Explanation: Instantiating the variogram object
End of explanation
matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4)
whittle_model = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.0,alpha=3)
exp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)
gaussian_model = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)
spherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)
gvg.model = whittle_model
#gvg.model = matern_model
#models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model])
gvg.fitVariogramModel(whittle_model)
import numpy as np
xx = np.linspace(0,1000000,1000)
gvg.plot(refresh=False,with_envelope=True)
plt.plot(xx,whittle_model.f(xx),lw=2.0,c='k')
plt.title("Empirical Variogram with fitted Whittle Model")
def randomSelection(n,p):
idxs = np.random.choice(n,p,replace=False)
random_sample = new_data.iloc[idxs]
return random_sample
#################
n = len(new_data)
p = 3000 # The amount of samples taken (let's do it without replacement)
random_sample = randomSelection(n,100)
Explanation: Instantiating theoretical variogram model
End of explanation |