Unnamed: 0 | text_prompt | code_prompt |
---|---|---|
3,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SARIMAX
Step1: Model Selection
As in Durbin and Koopman, we force a number of the values to be missing.
Step2: Then we can consider model selection using the Akaike information criterion (AIC), by running the model for each variant and selecting the one with the lowest AIC value.
There are a couple of things to note here
Step3: For the models estimated over the full (non-missing) dataset, the AIC chooses ARMA(1,1) or ARMA(3,0). Durbin and Koopman suggest the ARMA(1,1) specification is better due to parsimony.
$$
\text{Replication of | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
import matplotlib.pyplot as plt
import requests
from io import BytesIO
from zipfile import ZipFile
# Download the dataset
dk = requests.get('http://www.ssfpack.com/files/DK-data.zip').content
f = BytesIO(dk)
zipped = ZipFile(f)
df = pd.read_table(
BytesIO(zipped.read('internet.dat')),
skiprows=1, header=None, sep=r'\s+', engine='python',
names=['internet','dinternet']
)
Explanation: SARIMAX: Model selection, missing data
The example mirrors Durbin and Koopman (2012), Chapter 8.4, in its application of the Box-Jenkins methodology to fit ARMA models. The novel feature is the ability of the model to work on datasets with missing values.
End of explanation
# Get the basic series
dta_full = df.dinternet[1:].values
dta_miss = dta_full.copy()
# Remove datapoints
missing = np.r_[6,16,26,36,46,56,66,72,73,74,75,76,86,96]-1
dta_miss[missing] = np.nan
Explanation: Model Selection
As in Durbin and Koopman, we force a number of the values to be missing.
End of explanation
import warnings
aic_full = pd.DataFrame(np.zeros((6,6), dtype=float))
aic_miss = pd.DataFrame(np.zeros((6,6), dtype=float))
warnings.simplefilter('ignore')
# Iterate over all ARMA(p,q) models with p,q in [0,5]
for p in range(6):
for q in range(6):
if p == 0 and q == 0:
continue
# Estimate the model with no missing datapoints
mod = sm.tsa.statespace.SARIMAX(dta_full, order=(p,0,q), enforce_invertibility=False)
try:
res = mod.fit(disp=False)
aic_full.iloc[p,q] = res.aic
except Exception:
aic_full.iloc[p,q] = np.nan
# Estimate the model with missing datapoints
mod = sm.tsa.statespace.SARIMAX(dta_miss, order=(p,0,q), enforce_invertibility=False)
try:
res = mod.fit(disp=False)
aic_miss.iloc[p,q] = res.aic
except Exception:
aic_miss.iloc[p,q] = np.nan
Explanation: Then we can consider model selection using the Akaike information criterion (AIC), by running the model for each variant and selecting the model with the lowest AIC value.
There are a couple of things to note here:
When running such a large batch of models, particularly when the autoregressive and moving average orders become large, there is the possibility of poor maximum likelihood convergence. Below we ignore the warnings since this example is illustrative.
We use the option enforce_invertibility=False, which allows the moving average polynomial to be non-invertible, so that more of the models are estimable.
Several of the models do not produce good results, and their AIC value is set to NaN. This is not surprising, as Durbin and Koopman note numerical problems with the high order models.
End of explanation
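As a quick way to read off the best-scoring specification from each table, one possible sketch (it treats the unestimated (0,0) placeholder as missing before taking the minimum) is:
# Locate the (p,q) order with the smallest AIC in each table; the (0,0) entry was
# skipped in the loop above, so its 0.0 placeholder is treated as missing here
for label, table in [('full', aic_full), ('missing', aic_miss)]:
    best_p, best_q = table.replace(0.0, np.nan).stack().idxmin()
    print(label, 'data: lowest AIC at ARMA({0},{1})'.format(best_p, best_q))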
# Statespace
mod = sm.tsa.statespace.SARIMAX(dta_miss, order=(1,0,1))
res = mod.fit(disp=False)
print(res.summary())
# In-sample one-step-ahead predictions, and out-of-sample forecasts
nforecast = 20
predict = res.get_prediction(end=mod.nobs + nforecast)
idx = np.arange(len(predict.predicted_mean))
predict_ci = predict.conf_int(alpha=0.5)
# Graph
fig, ax = plt.subplots(figsize=(12,6))
ax.xaxis.grid()
ax.plot(dta_miss, 'k.')
# Plot
ax.plot(idx[:-nforecast], predict.predicted_mean[:-nforecast], 'gray')
ax.plot(idx[-nforecast:], predict.predicted_mean[-nforecast:], 'k--', linewidth=2)
ax.fill_between(idx, predict_ci[:, 0], predict_ci[:, 1], alpha=0.15)
ax.set(title='Figure 8.9 - Internet series');
Explanation: For the models estimated over the full (non-missing) dataset, the AIC chooses ARMA(1,1) or ARMA(3,0). Durbin and Koopman suggest the ARMA(1,1) specification is better due to parsimony.
$$
\text{Replication of:}\\
\textbf{Table 8.1} ~~ \text{AIC for different ARMA models.}\\
\newcommand{\r}[1]{{\color{red}{#1}}}
\begin{array}{lrrrrrr}
\hline
q & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
p & {} & {} & {} & {} & {} & {} \\
0 & 0.00 & 549.81 & 519.87 & 520.27 & 519.38 & 518.86 \\
1 & 529.24 & \r{514.30} & 516.25 & 514.58 & 515.10 & 516.28 \\
2 & 522.18 & 516.29 & 517.16 & 515.77 & 513.24 & 514.73 \\
3 & \r{511.99} & 513.94 & 515.92 & 512.06 & 513.72 & 514.50 \\
4 & 513.93 & 512.89 & nan & nan & 514.81 & 516.08 \\
5 & 515.86 & 517.64 & nan & nan & nan & nan \\
\hline
\end{array}
$$
For the models estimated over the dataset with missing values, the AIC chooses ARMA(1,1).
$$
\text{Replication of:}\\
\textbf{Table 8.2} ~~ \text{AIC for different ARMA models with missing observations.}\\
\begin{array}{lrrrrrr}
\hline
q & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
p & {} & {} & {} & {} & {} & {} \\
0 & 0.00 & 488.93 & 464.01 & 463.86 & 462.63 & 463.62 \\
1 & 468.01 & \r{457.54} & 459.35 & 458.66 & 459.15 & 461.01 \\
2 & 469.68 & nan & 460.48 & 459.43 & 459.23 & 460.47 \\
3 & 467.10 & 458.44 & 459.64 & 456.66 & 459.54 & 460.05 \\
4 & 469.00 & 459.52 & nan & 463.04 & 459.35 & 460.96 \\
5 & 471.32 & 461.26 & nan & nan & 461.00 & 462.97 \\
\hline
\end{array}
$$
Note: the AIC values are calculated differently than in Durbin and Koopman, but show overall similar trends.
Postestimation
Using the ARMA(1,1) specification selected above, we perform in-sample prediction and out-of-sample forecasting.
End of explanation |
3,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
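For illustration only, a filled-in call follows the pattern above; the name and e-mail below are placeholders, not the actual document authors.
# Placeholder values for illustration -- replace with the real author details:
# DOC.set_author("Jane Doe", "jane.doe@example.org")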
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
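For illustration only, selecting one of the listed choices would look like the commented line below (a hypothetical selection, not a statement about this model's albedo scheme).
# Hypothetical example of choosing one of the valid ENUM options:
# DOC.set_value("prescribed")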
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
3,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manipulating FileFunction Spectra
This tutorial demonstrates some of the methods that can be used to manipulate FileFunction sources in fermipy. For this example we'll use the draco analysis.
Step1: By default all sources are initialized with parametric spectral models (PowerLaw, etc.). The spectral model of a source can be updated by calling set_source_spectrum().
Step2: Running set_source_spectrum() with no additional arguments will substitute the source spectrum with a FileFunction with the same distribution in differential flux. The normalization parameter is defined such that 1.0 corresponds to the normalization of the original source spectrum.
Step3: The differential flux of a FileFunction source can be accessed or modified at runtime by calling the get_source_dfde() and set_source_dfde() methods
Step4: Calling set_source_spectrum() with the optional dictionary argument can be used to explicitly set the parameters of the new spectral model. | Python Code:
import os
if os.path.isfile('../data/draco.tar.gz'):
!tar xzf ../data/draco.tar.gz
else:
!curl -OL https://raw.githubusercontent.com/fermiPy/fermipy-extras/master/data/draco.tar.gz
!tar xzf draco.tar.gz
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from fermipy.gtanalysis import GTAnalysis
gta = GTAnalysis('draco/config.yaml')
gta.setup()
Explanation: Manipulating FileFunction Spectra
This tutorial demonstrates some of the methods that can be used to manipulate FileFunction sources in fermipy. For this example we'll use the draco analysis.
End of explanation
print(gta.roi['3FGL J1725.3+5853'])
Explanation: By default all sources are initialized with parametric spectral models (PowerLaw, etc.). The spectral model of a source can be updated by calling set_source_spectrum().
End of explanation
gta.set_source_spectrum('3FGL J1725.3+5853','FileFunction')
print(gta.roi['3FGL J1725.3+5853'])
Explanation: Running set_source_spectrum() with no additional arguments will substitute the source spectrum with a FileFunction with the same distribution in differential flux. The normalization parameter is defined such that 1.0 corresponds to the normalization of the original source spectrum.
End of explanation
x, y = gta.get_source_dfde('3FGL J1725.3+5853')
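# Define an alternative differential flux for this source: a power law with an
# exponential cutoff, evaluated on the energy grid x returned by get_source_dfde()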
y1 = 1E-12*10**(-2.0*(x-3.0))*np.exp(-10**(x-3.0))
plt.figure()
plt.plot(x,y)
plt.plot(x,y1)
plt.gca().set_yscale('log')
plt.gca().set_ylim(1E-17,1E-5)
print(gta.like())
gta.set_source_dfde('3FGL J1725.3+5853',y1)
print(gta.like())
Explanation: The differential flux of a FileFunction source can be accessed or modified at runtime by calling the get_source_dfde() and set_source_dfde() methods:
End of explanation
gta.set_source_spectrum('3FGL J1725.3+5853','PowerLaw',{'Index' : 2.179, 'Scale' : 1701, 'Prefactor' : 1.627e-13})
gta.like()
Explanation: Calling set_source_spectrum() with the optional dictionary argument can be used to explicitly set the parameters of the new spectral model.
End of explanation |
3,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ex3
Step1: To create a VectorFitting instance, a Network containing the frequency responses of the N-port is passed. In this example a copy of Agilent_E5071B.s4p from the skrf/tests folder is used
Step2: Now, the vector fit can be performed. The number and type of poles has to be specified, which depends on the behaviour of the responses. As a rule of thumb for an initial guess, one can count the number of resonances or "bumps" in the individual responses. In this case, the 4-port network has 16 responses to be fitted. As shown in the magnitude plots below, $S_{11}$ and some other responses are quite spiky and have roughly 15 local maxima each and about the same number of local minima in between. Other responses have only 5-6 local maxima, or they are very noisy with very small magnitudes (like $S_{24}$ and $S_{42}$). Assuming that most of the 15 maxima of $S_{11}$ occur at the same frequencies as the maxima of the other responses, one can expect to require 15 complex-conjugate poles for a fit. As this is probably not completely the case, trying with 20-30 poles should be a good start to fit all of the resonances in all of the responses.
Step3: After trying different numbers of real and complex-conjugate poles, the following setup was found to result in a very good fit. Other setups also work well (e.g. 0-2 real poles and 25-26 cc poles)
Step4: The convergence can also be checked with the convergence plot
Step5: The fitted model parameters are now stored in the class attributes poles, residues, proportional_coeff and constant_coeff for further use. To verify the result, the fitted model responses can be compared to the original network responses. As the model will return a response at any given frequency, it makes sense to also check its response outside the frequency range of the original samples. In this case, the original network was measured from 0.5 GHz to 4.5 GHz, so we can plot the fit from dc to 10 GHz | Python Code:
import skrf
import numpy as np
import matplotlib.pyplot as mplt
Explanation: Ex3: Fitting spiky responses
The Vector Fitting feature is demonstrated using a 4-port example network copied from the scikit-rf tests folder. This network is a bit tricky to fit because of its many resonances in the individual response. Additional explanations and background information can be found in the Vector Fitting tutorial.
End of explanation
nw = skrf.network.Network('./Agilent_E5071B.s4p')
vf = skrf.VectorFitting(nw)
Explanation: To create a VectorFitting instance, a Network containing the frequency responses of the N-port is passed. In this example a copy of Agilent_E5071B.s4p from the skrf/tests folder is used:
End of explanation
# plot magnitudes of all 16 responses in the 4-port network
fig, ax = mplt.subplots(4, 4)
fig.set_size_inches(12, 8)
for i in range(4):
for j in range(4):
nw.plot_s_mag(i, j, ax=ax[i][j])
ax[i][j].get_legend().remove()
fig.tight_layout()
mplt.show()
Explanation: Now, the vector fit can be performed. The number and type of poles has to be specified, which depends on the behaviour of the responses. As a rule of thumb for an initial guess, one can count the number of resonances or "bumps" in the individual responses. In this case, the 4-port network has 16 responses to be fitted. As shown in the magnitude plots below, $S_{11}$ and some other responses are quite spiky and have roughly 15 local maxima each and about the same number of local minima in between. Other responses have only 5-6 local maxima, or they are very noisy with very small magnitudes (like $S_{24}$ and $S_{42}$). Assuming that most of the 15 maxima of $S_{11}$ occur at the same frequencies as the maxima of the other responses, one can expect to require 15 complex-conjugate poles for a fit. As this is probably not completely the case, trying with 20-30 poles should be a good start to fit all of the resonances in all of the responses.
End of explanation
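As a rough cross-check of that counting argument, scipy's peak finder can estimate the number of local maxima in a single response; the exact count depends on the detection settings, so treat it only as a starting guess for the pole count:
from scipy.signal import find_peaks

# Count local maxima in |S11| as a rough starting guess for the number of poles
peaks, _ = find_peaks(np.abs(nw.s[:, 0, 0]))
print('Approximate number of local maxima in |S11|:', len(peaks))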
vf.vector_fit(n_poles_real=1, n_poles_cmplx=26)
Explanation: After trying different numbers of real and complex-conjugate poles, the following setup was found to result in a very good fit. Other setups also work well (e.g. 0-2 real poles and 25-26 cc poles):
End of explanation
vf.plot_convergence()
Explanation: The convergence can also be checked with the convergence plot:
End of explanation
freqs = np.linspace(0, 10e9, 501)
fig, ax = mplt.subplots(4, 4)
fig.set_size_inches(12, 8)
for i in range(4):
for j in range(4):
vf.plot_s_mag(i, j, freqs=freqs, ax=ax[i][j])
ax[i][j].get_legend().remove()
fig.tight_layout()
mplt.show()
vf.get_rms_error()
Explanation: The fitted model parameters are now stored in the class attributes poles, residues, proportional_coeff and constant_coeff for further use. To verify the result, the fitted model responses can be compared to the original network responses. As the model will return a response at any given frequency, it makes sense to also check its response outside the frequency range of the original samples. In this case, the original network was measured from 0.5 GHz to 4.5 GHz, so we can plot the fit from dc to 10 GHz:
End of explanation |
3,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Step1: Load some data
I'm going to work with the data from the combined data sets. The analysis for this data set is in analysis\Cf072115_to_Cf072215b.
The one limitation here is that this data has already cut out the fission chamber neighbors.
det_df without fission chamber neighbors
Step2: Specify energy range
Step3: singles_hist_e_n.npz
Step4: Load bhp_nn_e for all pairs
I'm going to skip a few steps in order to save memory. This data was produced in analysis_build_bhp_nn_by_pair_1_ns.ipynb and is stored in datap\bhp_nn_by_pair_1ns.npz. Load it now, as explained in the notebook.
Step5: Set up det_df columns and singles_df
Step6: Calculate and fill doubles sums
Step7: Calculate singles sums
Step8: Calculate W values
Step9: Condense to angle bin
Step10: Plot it
Step11: Save to disk
In order to compare datasets, it would be nice to save these results to disk and reload in another notebook for comparison. These results are pretty easy, format-wise, so I'll just use the built-in pandas methods.
Step12: Reload | Python Code:
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import imageio
import pandas as pd
import seaborn as sns
sns.set(style='ticks')
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_e as bicorr_e
import bicorr_plot as bicorr_plot
import bicorr_sums as bicorr_sums
import bicorr_math as bicorr_math
%load_ext autoreload
%autoreload 2
Explanation: Goal: Correct for singles rate with $W$ calculation
In order to correct for differences in detection efficiencies and solid angles, we will divide all of the doubles rates by the singles rates of the two detectors as follows:
$ W_{i,j} = \frac{D_{i,j}}{S_i \cdot S_j}$
This requires calculating $S_i$ and $S_j$ from the cced files. I need to rewrite my analysis from the beginning, or write another function that parses the cced file.
In this file, I will import the singles and bicorr data and calculate all $D_{i,j}$, $S_i$, $S_j$, and $W_{i,j}$.
This notebook does the analysis in energy space.
End of explanation
det_df = bicorr.load_det_df('../meas_info/det_df_pairs_angles.csv')
det_df.head()
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
num_fissions = 2194651200.00
Explanation: Load some data
I'm going to work with the data from the combined data sets. The analysis for this data set is in analysis\Cf072115_to_Cf072215b.
The one limitation here is that this data has already cut out the fission chamber neighbors.
det_df without fission chamber neighbors
End of explanation
e_min = 0.62
e_max = 12
Explanation: Specify energy range
End of explanation
singles_hist_e_n, e_bin_edges, dict_det_to_index, dict_index_to_det = bicorr_e.load_singles_hist_both(filepath = '../analysis/Cf072115_to_Cf072215b/datap/',plot_flag=True, show_flag=True)
bicorr_plot.plot_singles_hist_e_n(singles_hist_e_n, e_bin_edges, show_flag=False, clear_flag=False)
for e in [e_min, e_max]:
plt.axvline(e,c='r')
plt.show()
singles_hist_e_n.shape
Explanation: singles_hist_e_n.npz
End of explanation
bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap')
bhm_e.shape
bhp_e = np.zeros((len(det_df),len(e_bin_edges)-1,len(e_bin_edges)-1))
bhp_e.shape
for index in det_df.index.values: # index is same as in `bhm`
bhp_e[index,:,:] = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,pair_is=[index])[0]
bicorr_plot.bhp_e_plot(np.sum(bhp_e,axis=0),e_bin_edges, show_flag=True)
Explanation: Load bhp_nn_e for all pairs
I'm going to skip a few steps in order to save memory. This data was produced in analysis_build_bhp_nn_by_pair_1_ns.ipynb and is stored in datap\bhp_nn_by_pair_1ns.npz. Load it now, as explained in the notebook.
End of explanation
det_df.head()
det_df = bicorr_sums.init_det_df_sums(det_df)
det_df.head()
singles_e_df = bicorr_sums.init_singles_e_df(dict_index_to_det)
singles_e_df.head()
Explanation: Set up det_df columns and singles_df
End of explanation
bhp_e.shape
det_df, energies_real = bicorr_sums.fill_det_df_doubles_e_sums(det_df, bhp_e, e_bin_edges, e_min, e_max, True)
det_df.head()
bicorr_plot.counts_vs_angle_all(det_df, save_flag=False)
Explanation: Calculate and fill doubles sums
End of explanation
singles_e_df.head()
bicorr_plot.Sd_vs_ch_all(singles_e_df, save_flag=False)
det_df = bicorr_sums.fill_det_df_singles_sums(det_df, singles_e_df)
det_df.head()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['Cd'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('Doubles counts')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('Doubles counts')
plt.show()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['Sd1'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('D1 singles counts')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('D1 singles counts')
plt.show()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['Sd2'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('D2 singles counts')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('D2 Singles counts')
plt.show()
Explanation: Calculate singles sums
End of explanation
det_df = bicorr_sums.calc_det_df_W(det_df)
det_df.head()
plt.figure(figsize=(4,4))
ax = plt.gca()
sc = ax.scatter(det_df['d1'],det_df['d2'],s=13,marker='s',
edgecolor = 'none', c=det_df['W'],cmap='viridis')
ax.set_xlabel('d1 channel')
ax.set_ylabel('d2 channel')
ax.set_title('W')
cbar = plt.colorbar(sc,fraction=0.043,pad=0.1)
cbar.set_label('W')
plt.show()
chIgnore = [1,17,33]
det_df_ignore = det_df[~det_df['d1'].isin(chIgnore) & ~det_df['d2'].isin(chIgnore)]
bicorr_plot.W_vs_angle_all(det_df_ignore, save_flag=False)
bicorr_plot.W_vs_angle_all?
Explanation: Calculate W values
End of explanation
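As a sanity check, the new W column should reproduce the ratio defined in the goal above. A minimal pandas sketch of that comparison (it assumes calc_det_df_W fills W with exactly this ratio and no extra normalization) is:
# Cross-check: W should equal the doubles counts divided by the product of singles counts
W_check = det_df['Cd'] / (det_df['Sd1'] * det_df['Sd2'])
print(np.allclose(W_check, det_df['W']))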
angle_bin_edges = np.arange(8,190,10)
print(angle_bin_edges)
by_angle_df = bicorr_sums.condense_det_df_by_angle(det_df_ignore, angle_bin_edges)
by_angle_df.head()
Explanation: Condense to angle bin
End of explanation
bicorr_plot.W_vs_angle(det_df_ignore, by_angle_df, save_flag=False)
Explanation: Plot it
End of explanation
singles_e_df.to_csv('singles_e_df_filled.csv')
det_df.to_csv(r'det_df_e_filled.csv')
by_angle_df.to_csv(r'by_angle_e_df.csv')
Explanation: Save to disk
In order to compare datasets, it would be nice to save these results to disk and reload in another notebook for comparison. These results are pretty easy, format-wise, so I'll just use the built-in pandas methods.
End of explanation
det_df_filled = pd.read_csv(r'det_df_e_filled.csv',index_col=0)
det_df_filled.head()
chIgnore = [1,17,33]
det_df_ignore = det_df_filled[~det_df_filled['d1'].isin(chIgnore) & ~det_df_filled['d2'].isin(chIgnore)]
det_df_ignore.head()
singles_e_df_filled = pd.read_csv(r'singles_e_df_filled.csv',index_col=0)
singles_e_df_filled.head()
by_angle_e_df = pd.read_csv(r'by_angle_e_df.csv',index_col=0)
by_angle_e_df.head()
bicorr_plot.W_vs_angle(det_df_ignore, by_angle_e_df, save_flag=False)
Explanation: Reload
End of explanation |
3,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Text Classification with Naive Bayes
In the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on Lab 10 of Harvard's CS109 class. Please feel free to go to the original lab for additional exercises and solutions.
Step1: Table of Contents
Rotten Tomatoes Dataset
Explore
The Vector Space Model and a Search Engine
In Code
Naive Bayes
Multinomial Naive Bayes and Other Likelihood Functions
Picking Hyperparameters for Naive Bayes and Text Maintenance
Interpretation
Rotten Tomatoes Dataset
Step2: Explore
Step3: <div class="span5 alert alert-info">
<h3>Exercise Set I</h3>
<br/>
<b>Exercise
Step4: The Vector Space Model and a Search Engine
All the diagrams here are snipped from Introduction to Information Retrieval by Manning et. al. which is a great resource on text processing. For additional information on text mining and natural language processing, see Foundations of Statistical Natural Language Processing by Manning and Schutze.
Also check out Python packages nltk, spaCy, pattern, and their associated resources. Also see word2vec.
Let us define the vector derived from document $d$ by $\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it. Each vector has the same length and each entry "slot" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. Each vector has the same length because each document shared the same vocabulary across the full collection of documents -- this collection is called a corpus.
To define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So "hello" may be at index 5 and "world" at index 99.
Suppose we have the following corpus
Step5: Naive Bayes
From Bayes' Theorem, we have that
$$P(c \vert f) = \frac{P(c \cap f)}{P(f)}$$
where $c$ represents a class or category, and $f$ represents a feature vector, such as $\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the features in the document. $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in context of Naive Bayes as
$$P(c \vert f) \propto P(f \vert c) P(c) $$
$P(c)$ is called the prior and is simply the probability of seeing class $c$. But what is $P(f \vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the likelihood and comes from the data. One of the major assumptions of the Naive Bayes model is that the features are conditionally independent given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear within that class. This is a very important distinction. Recall that if two events are independent, then
Step6: Picking Hyperparameters for Naive Bayes and Text Maintenance
We need to know what value to use for $\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise.
First, let's find an appropriate value for min_df for the CountVectorizer. min_df can be either an integer or a float/decimal. If it is an integer, min_df represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum percentage of documents a word must appear in to be included in the vocabulary. From the documentation
Step7: The parameter $\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function cv_score performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold.
Step8: We use the log-likelihood as the score here in scorefunc. The higher the log-likelihood, the better. Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV.
The custom scoring function scorefunc allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using roc_auc, precision, recall, or F1-score as the scoring function.
Step9: We'll cross-validate over the regularization parameter $\alpha$.
Let's set up the train and test masks first, and then we can run the cross-validation procedure.
Step10: <div class="span5 alert alert-info">
<h3>Exercise Set IV</h3>
<p><b>Exercise
Step11: <div class="span5 alert alert-info">
<h3>Exercise Set V
Step12: Interpretation
What are the strongly predictive features?
We use a neat trick to identify strongly predictive features (i.e. words).
first, create a data set such that each row has exactly one feature. This is represented by the identity matrix.
use the trained classifier to make predictions on this matrix
sort the rows by predicted probabilities, and pick the top and bottom $K$ rows
Step13: <div class="span5 alert alert-info">
<h3>Exercise Set VI</h3>
<p><b>Exercise
Step14: The above exercise is an example of feature selection. There are many other feature selection methods. A list of feature selection methods available in sklearn is here. The most common feature selection technique for text mining is the chi-squared $\left( \chi^2 \right)$ method.
Prediction Errors
We can see mis-predictions as well.
Step15: <div class="span5 alert alert-info">
<h3>Exercise Set VII
Step16: Aside
Step17: <div class="span5 alert alert-info">
<h3>Exercise Set VIII
Step18: 2. RandomForest and Logistic regression
Step19: 5. TF-IDF weighting | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from six.moves import range
import seaborn as sns
# Setup Pandas
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
# Setup Seaborn
sns.set_style("whitegrid")
sns.set_context("poster")
Explanation: Basic Text Classification with Naive Bayes
In the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on Lab 10 of Harvard's CS109 class. Please feel free to go to the original lab for additional exercises and solutions.
End of explanation
critics = pd.read_csv('./critics.csv')
#let's drop rows with missing quotes
critics = critics[~critics.quote.isnull()]
critics.head()
Explanation: Table of Contents
Rotten Tomatoes Dataset
Explore
The Vector Space Model and a Search Engine
In Code
Naive Bayes
Multinomial Naive Bayes and Other Likelihood Functions
Picking Hyperparameters for Naive Bayes and Text Maintenance
Interpretation
Rotten Tomatoes Dataset
End of explanation
n_reviews = len(critics)
n_movies = critics.rtid.unique().size
n_critics = critics.critic.unique().size
print("Number of reviews: {:d}".format(n_reviews))
print("Number of critics: {:d}".format(n_critics))
print("Number of movies: {:d}".format(n_movies))
df = critics.copy()
df['fresh'] = df.fresh == 'fresh'
grp = df.groupby('critic')
counts = grp.critic.count() # number of reviews by each critic
means = grp.fresh.mean() # average freshness for each critic
means[counts > 100].hist(bins=10, edgecolor='w', lw=1)
plt.xlabel("Average Rating per critic")
plt.ylabel("Number of Critics")
plt.yticks([0, 2, 4, 6, 8, 10]);
Explanation: Explore
End of explanation
# On average, most critics give the 'fresh' rating more often than 'rotten'.
# None of them give only 'fresh' or only 'rotten'.
# There is a remarkable dip at 55-60% and a peak at 60-65%.
# The distribution looks somewhat bimodal, as if it were a mixture of two separate distributions.
# The dip and peak are interesting, and there are no extremes:
# nobody reviews only films he/she likes or only films he/she hates.
# There are a few sour people who write more reviews about films they hate than films they like,
# but most critics either prefer to review films they like, or simply like more of the movies they watch than they hate.
# The dip/peak could reflect two groups: people who prefer to write about what they like
# and people who prefer to write about what they hate, with relatively few people in between (55-60%).
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set I</h3>
<br/>
<b>Exercise:</b> Look at the histogram above. Tell a story about the average ratings per critic. What shape does the distribution look like? What is interesting about the distribution? What might explain these interesting things?
</div>
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
text = ['Hop on pop', 'Hop off pop', 'Hop Hop hop']
print("Original text is\n{}".format('\n'.join(text)))
vectorizer = CountVectorizer(min_df=0)
# call `fit` to build the vocabulary
vectorizer.fit(text)
# call `transform` to convert text to a bag of words
x = vectorizer.transform(text)
# CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to
# convert back to a "normal" numpy array
x = x.toarray()
print("")
print("Transformed text vector is \n{}".format(x))
# `get_feature_names` tracks which word is associated with each column of the transformed x
print("")
print("Words for each feature:")
print(vectorizer.get_feature_names())
# Notice that the bag of words treatment doesn't preserve information about the *order* of words,
# just their frequency
def make_xy(critics, vectorizer=None):
#Your code here
if vectorizer is None:
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(critics.quote)
X = X.tocsc() # some versions of sklearn return COO format
y = (critics.fresh == 'fresh').values.astype(int)
return X, y
X, y = make_xy(critics)
Explanation: The Vector Space Model and a Search Engine
All the diagrams here are snipped from Introduction to Information Retrieval by Manning et. al. which is a great resource on text processing. For additional information on text mining and natural language processing, see Foundations of Statistical Natural Language Processing by Manning and Schutze.
Also check out Python packages nltk, spaCy, pattern, and their associated resources. Also see word2vec.
Let us define the vector derived from document $d$ by $\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it. Each vector has the same length and each entry "slot" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. Each vector has the same length because each document shared the same vocabulary across the full collection of documents -- this collection is called a corpus.
To define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So "hello" may be at index 5 and "world" at index 99.
Suppose we have the following corpus:
A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree. The grapes seemed ready to burst with juice, and the Fox's mouth watered as he gazed longingly at them.
Suppose we treat each sentence as a document $d$. The vocabulary (often called the lexicon) is the following:
$V = \left{\right.$ a, along, and, as, at, beautiful, branches, bunch, burst, day, fox, fox's, from, gazed, grapes, hanging, he, juice, longingly, mouth, of, one, ready, ripe, seemed, spied, the, them, to, trained, tree, vine, watered, with$\left.\right}$
Then the document
A Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree
may be represented as the following sparse vector of word counts:
$$\bar V(d) = \left( 4,1,0,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,2,1,0,1,0,0,1,0,0,0,1,1,0,0 \right)$$
or more succinctly as
[(0, 4), (1, 1), (5, 1), (6, 1), (7, 1), (9, 1), (10, 1), (12, 1), (14, 1), (15, 1), (20, 2), (21, 1), (23, 1),
(26, 1), (30, 1), (31, 1)]
along with a dictionary
{
0: a, 1: along, 5: beautiful, 6: branches, 7: bunch, 9: day, 10: fox, 12: from, 14: grapes,
15: hanging, 19: mouth, 20: of, 21: one, 23: ripe, 24: seemed, 25: spied, 26: the,
30: tree, 31: vine,
}
Then, a set of documents becomes, in the usual sklearn style, a sparse matrix with rows being sparse arrays representing documents and columns representing the features/words in the vocabulary.
Notice that this representation loses the relative ordering of the terms in the document. That is "cat ate rat" and "rat ate cat" are the same. Thus, this representation is also known as the Bag-Of-Words representation.
Here is another example, from the book quoted above, although the matrix is transposed here so that documents are columns:
Such a matrix is also called a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, jealous and jealousy after stemming are the same feature. One could also make use of other "Natural Language Processing" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. We could remove "stopwords" from our vocabulary, such as common words like "the". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all depends on our application.
From the book:
The standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\bar V(d_1)$ and $\bar V(d_2)$:
$$S_{12} = \frac{\bar V(d_1) \cdot \bar V(d_2)}{|\bar V(d_1)| \times |\bar V(d_2)|}$$
There is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below.
The key idea now: to assign to each document d a score equal to the dot product:
$$\bar V(q) \cdot \bar V(d)$$
Then we can use this simple Vector Model as a Search engine.
In Code
End of explanation
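As a small illustration of the cosine-similarity scoring described above, the toy 'Hop on pop' count vectors built earlier can be compared directly:
# Cosine similarity between the first two toy documents, 'Hop on pop' and 'Hop off pop'
v1, v2 = x[0].astype(float), x[1].astype(float)
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))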
#your turn
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
clf = MultinomialNB()
clf.fit(X_train, y_train)
print('train:', round(clf.score(X_train, y_train) * 100, 2), '%')
print('test:', round(clf.score(X_test, y_test) * 100, 2), '%')
# The classifier overfits the training set, so it does not generalize well.
# The accuracy score is a lot higher on the training set than on the test set.
Explanation: Naive Bayes
From Bayes' Theorem, we have that
$$P(c \vert f) = \frac{P(c \cap f)}{P(f)}$$
where $c$ represents a class or category, and $f$ represents a feature vector, such as $\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the features in the document. $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in context of Naive Bayes as
$$P(c \vert f) \propto P(f \vert c) P(c) $$
$P(c)$ is called the prior and is simply the probability of seeing class $c$. But what is $P(f \vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the likelihood and comes from the data. One of the major assumptions of the Naive Bayes model is that the features are conditionally independent given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear within that class. This is a very important distinction. Recall that if two events are independent, then:
$$P(A \cap B) = P(A) \cdot P(B)$$
Thus, conditional independence implies
$$P(f \vert c) = \prod_i P(f_i | c) $$
where $f_i$ is an individual feature (a word in this example).
To make a classification, we then choose the class $c$ such that $P(c \vert f)$ is maximal.
There is a small caveat when computing these probabilities. For floating point underflow we change the product into a sum by going into log space. This is called the LogSumExp trick. So:
$$\log P(f \vert c) = \sum_i \log P(f_i \vert c) $$
There is another caveat. What if we see a term that didn't exist in the training data? This means that $P(f_i \vert c) = 0$ for that term, and thus $P(f \vert c) = \prod_i P(f_i | c) = 0$, which doesn't help us at all. Instead of using zeros, we add a small negligible value called $\alpha$ to each count. This is called Laplace Smoothing.
$$P(f_i \vert c) = \frac{N_{ic}+\alpha}{N_c + \alpha N_i}$$
where $N_{ic}$ is the number of times feature $i$ was seen in class $c$, $N_c$ is the number of times class $c$ was seen and $N_i$ is the number of times feature $i$ was seen globally. $\alpha$ is sometimes called a regularization parameter.
Multinomial Naive Bayes and Other Likelihood Functions
Since we are modeling word counts, we are using a variation of Naive Bayes called Multinomial Naive Bayes. This is because the likelihood function actually takes the form of the multinomial distribution.
$$P(f \vert c) = \frac{\left( \sum_i f_i \right)!}{\prod_i f_i!} \prod_{i} P(f_i \vert c)^{f_i} \propto \prod_{i} P(f_i \vert c)^{f_i}$$
where the nasty term out front is absorbed as a normalization constant such that probabilities sum to 1.
There are many other variations of Naive Bayes, all of which depend on what type of value $f_i$ takes. If $f_i$ is continuous, we may be able to use Gaussian Naive Bayes. First compute the mean and variance for each class $c$. Then the likelihood, $P(f_i \vert c)$, is given as follows
$$P(f_i = v \vert c) = \frac{1}{\sqrt{2\pi \sigma^2_c}} e^{- \frac{\left( v - \mu_c \right)^2}{2 \sigma^2_c}}$$
<div class="span5 alert alert-info">
<h3>Exercise Set II</h3>
<p><b>Exercise:</b> Implement a simple Naive Bayes classifier:</p>
<ol>
<li> split the data set into a training and test set
<li> Use `scikit-learn`'s `MultinomialNB()` classifier with default parameters.
<li> train the classifier over the training set and test on the test set
<li> print the accuracy scores for both the training and the test sets
</ol>
What do you notice? Is this a good classifier? If not, why not?
</div>
End of explanation
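To make the smoothing formula above concrete, here is a tiny numerical sketch; the counts are made up purely for illustration:
# Toy Laplace smoothing: a word never seen in class c still gets a small nonzero probability
alpha_s = 1.0
N_ic = 0 # times word i was seen in class c (made-up)
N_c = 100 # times class c was seen (made-up)
N_i = 5 # times word i was seen globally (made-up)
p_smooth = (N_ic + alpha_s) / (N_c + alpha_s * N_i)
print(p_smooth, np.log(p_smooth)) # small but nonzero; the log is what gets summed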
# Your turn.
X_df = pd.DataFrame(X.toarray())
print(X_df.shape)
freq = X_df.sum()
print(max(freq)) # the largest count, used as the number of bins so each count value gets its own bin
print(sum(freq==1)/len(freq)) # sanity check: fraction of words that appear only once
plt.hist(freq, cumulative=True, density=True, bins=16805)
plt.hist(freq, cumulative=True, density=True, bins=16805)
plt.xlim([0,100]) # to see where the plateau starts
# I would put max_df at 20
plt.hist(freq, cumulative=True, density=True, bins=16805)
plt.xlim([0,20]) # to see the steep climb
# It starts to climb steeply immediately so I would choose 2
Explanation: Picking Hyperparameters for Naive Bayes and Text Maintenance
We need to know what value to use for $\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise.
First, let's find an appropriate value for min_df for the CountVectorizer. min_df can be either an integer or a float/decimal. If it is an integer, min_df represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum percentage of documents a word must appear in to be included in the vocabulary. From the documentation:
min_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.
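For instance (illustrative values only), both forms are valid:
```python
from sklearn.feature_extraction.text import CountVectorizer

cv_abs  = CountVectorizer(min_df=2)     # keep words that appear in at least 2 documents
cv_frac = CountVectorizer(min_df=0.01)  # keep words that appear in at least 1% of documents
```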
<div class="span5 alert alert-info">
<h3>Exercise Set III</h3>
<p><b>Exercise:</b> Construct the cumulative distribution of document frequencies (df). The $x$-axis is a document count $x_i$ and the $y$-axis is the percentage of words that appear less than $x_i$ times. For example, at $x=5$, plot a point representing the percentage or number of words that appear in 5 or fewer documents.</p>
<p><b>Exercise:</b> Look for the point at which the curve begins climbing steeply. This may be a good value for `min_df`. If we were interested in also picking `max_df`, we would likely pick the value where the curve starts to plateau. What value did you choose?</p>
</div>
End of explanation
from sklearn.model_selection import KFold
def cv_score(clf, X, y, scorefunc):
result = 0.
nfold = 5
for train, test in KFold(nfold).split(X): # split data into train/test groups, 5 times
        clf.fit(X[train], y[train]) # fit the classifier that was passed in as clf
result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data
return result / nfold # average
Explanation: The parameter $\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function cv_score performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold.
End of explanation
def log_likelihood(clf, x, y):
prob = clf.predict_log_proba(x)
rotten = y == 0
fresh = ~rotten
return prob[rotten, 0].sum() + prob[fresh, 1].sum()
Explanation: We use the log-likelihood as the score here in scorefunc. The higher the log-likelihood, the better. Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV.
The custom scoring function scorefunc allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using roc_auc, precision, recall, or F1-score as the scoring function.
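For comparison (not part of the original lab), the same kind of cross-validated scoring can be done with scikit-learn's built-in `cross_val_score` and a named scorer; here `X` and `y` are assumed to be the outputs of `make_xy` above.
```python
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

# 5-fold CV with a built-in metric instead of our custom log-likelihood
scores = cross_val_score(MultinomialNB(alpha=1.0), X, y, cv=5, scoring='roc_auc')
print(scores.mean())
```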
End of explanation
from sklearn.model_selection import train_test_split
_, itest = train_test_split(range(critics.shape[0]), train_size=0.7)
mask = np.zeros(critics.shape[0], dtype=bool)
mask[itest] = True   # rows flagged True are used as the training set below
Explanation: We'll cross-validate over the regularization parameter $\alpha$.
Let's set up the train and test masks first, and then we can run the cross-validation procedure.
End of explanation
# The log likelihood function sums the logged predicted probabilities of the correct class,
# adding the contributions of both classes.
# A higher log-likelihood means a better classifier: we are optimizing for the model to assign
# a high probability to the class each sample actually belongs to.
# If alpha is chosen too high, the data becomes less important and the smoothing dominates,
# so the classifier has a harder time learning from the data and its predictions become more uniform.
from sklearn.naive_bayes import MultinomialNB
#the grid of parameters to search over
alphas = [.1, 1, 5, 10, 50]
best_min_df = 2 # YOUR TURN: put your value of min_df here.
#Find the best value for alpha and min_df, and the best classifier
best_alpha = 1   # chosen by inspecting the printed CV scores below: the default alpha scored best
maxscore=-np.inf
for alpha in alphas:
vectorizer = CountVectorizer(min_df=best_min_df)
Xthis, ythis = make_xy(critics, vectorizer)
Xtrainthis = Xthis[mask]
ytrainthis = ythis[mask]
# your turn
clf = MultinomialNB(alpha=alpha)
print(alpha, cv_score(clf, Xtrainthis, ytrainthis, log_likelihood))
print("alpha: {}".format(best_alpha))
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set IV</h3>
<p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p>
<p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\alpha$ that is too high?</p>
<p><b>Exercise:</b> Using the skeleton code below, find the best values of the parameter `alpha`, and use the value of `min_df` you chose in the previous exercise set. Use the `cv_score` function above with the `log_likelihood` function for scoring.</p>
</div>
End of explanation
# Old accuracies: train: 92.17 %, test: 77.28 %
# So the training is very slightly better, but test is worse by 3%.
# And still hugely overfits. Even though we used CV for the alpha selection.
# The alpha we picked was the default so no difference there.
# The min_df change only seems to slightly improve the result on the test set.
# So the difference seems to be in the train/test split and not so much in the algorithm.
# Maybe we should have tried more alphas to get a better result.
# Picking the default will of course change nothing.
vectorizer = CountVectorizer(min_df=best_min_df)
X, y = make_xy(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)
#your turn. Print the accuracy on the test and training dataset
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(ytest, clf.predict(xtest)))
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set V: Working with the Best Parameters</h3>
<p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p>
</div>
End of explanation
words = np.array(vectorizer.get_feature_names())
x = np.eye(xtest.shape[1])
probs = clf.predict_log_proba(x)[:, 0]
ind = np.argsort(probs)
good_words = words[ind[:10]]
bad_words = words[ind[-10:]]
good_prob = probs[ind[:10]]
bad_prob = probs[ind[-10:]]
print("Good words\t P(fresh | word)")
for w, p in zip(good_words, good_prob):
print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p)))
print("Bad words\t P(fresh | word)")
for w, p in zip(bad_words, bad_prob):
print("{:>20}".format(w), "{:.2f}".format(1 - np.exp(p)))
Explanation: Interpretation
What are the strongly predictive features?
We use a neat trick to identify strongly predictive features (i.e. words).
first, create a data set such that each row has exactly one feature. This is represented by the identity matrix.
use the trained classifier to make predictions on this matrix
sort the rows by predicted probabilities, and pick the top and bottom $K$ rows
End of explanation
# Each row of the identity matrix represents a document that contains exactly one word,
# so for every word we get the probability that a document consisting of only that word is 'fresh'.
# It works because the classifier was already trained on the training set with all the words;
# here we simply query it one word at a time.
# Words with a high P(fresh | word) are predictive of the 'fresh' class;
# words with a low probability are more predictive of the other class.
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set VI</h3>
<p><b>Exercise:</b> Why does this method work? What does the probability for each row in the identity matrix represent</p>
</div>
End of explanation
x, y = make_xy(critics, vectorizer)
prob = clf.predict_proba(x)[:, 0]
predict = clf.predict(x)
bad_rotten = np.argsort(prob[y == 0])[:5]
bad_fresh = np.argsort(prob[y == 1])[-5:]
print("Mis-predicted Rotten quotes")
print('---------------------------')
for row in bad_rotten:
print(critics[y == 0].quote.iloc[row])
print("")
print("Mis-predicted Fresh quotes")
print('--------------------------')
for row in bad_fresh:
print(critics[y == 1].quote.iloc[row])
print("")
Explanation: The above exercise is an example of feature selection. There are many other feature selection methods. A list of feature selection methods available in sklearn is here. The most common feature selection technique for text mining is the chi-squared $\left( \chi^2 \right)$ method.
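As an illustration (not part of the lab), chi-squared feature selection in scikit-learn looks roughly like this; the choice of `k=1000` is arbitrary.
```python
from sklearn.feature_selection import SelectKBest, chi2

selector = SelectKBest(chi2, k=1000)   # keep the 1000 words most associated with the class label
X_reduced = selector.fit_transform(X, y)
```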
Prediction Errors
We can see mis-predictions as well.
End of explanation
#your turn
vectorizer = CountVectorizer(min_df=best_min_df)
X, y = make_xy(critics, vectorizer)
clf = MultinomialNB(alpha=best_alpha).fit(X, y)
line = 'This movie is not remarkable, touching, or superb in any way'
print(clf.predict_proba(vectorizer.transform([line]))[:, 1])
clf.predict(vectorizer.transform([line]))
# It predicts 'fresh' because almost all of the individual words are positive.
# The word 'not' cannot be given the weight it would need in this sentence:
# a bag-of-words model ignores word order, so 'not' cannot flip the contribution of the words it negates.
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set VII: Predicting the Freshness for a New Review</h3>
<br/>
<div>
<b>Exercise:</b>
<ul>
<li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'*
<li> Is the result what you'd expect? Why (not)?
</ul>
</div>
</div>
End of explanation
# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction
# http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref
from sklearn.feature_extraction.text import TfidfVectorizer
tfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')
Xtfidf=tfidfvectorizer.fit_transform(critics.quote)
Explanation: Aside: TF-IDF Weighting for Term Importance
TF-IDF stands for
Term-Frequency X Inverse Document Frequency.
In the standard CountVectorizer model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weight this term frequency by the inverse of its popularity in all documents. For example, if the word "movie" showed up in all the documents, it would not have much predictive value. It could actually be considered a stopword. By weighing its counts by 1 divided by its overall frequency, we downweight it. We can then use this TF-IDF weighted features as inputs to any classifier. TF-IDF is essentially a measure of term importance, and of how discriminative a word is in a corpus. There are a variety of nuances involved in computing TF-IDF, mainly involving where to add the smoothing term to avoid division by 0, or log of 0 errors. The formula for TF-IDF in scikit-learn differs from that of most textbooks:
$$\mbox{TF-IDF}(t, d) = \mbox{TF}(t, d)\times \mbox{IDF}(t) = n_{td} \log{\left( \frac{\vert D \vert}{\vert d : t \in d \vert} + 1 \right)}$$
where $n_{td}$ is the number of times term $t$ occurs in document $d$, $\vert D \vert$ is the number of documents, and $\vert d : t \in d \vert$ is the number of documents that contain $t$
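As a quick check (not in the original lab), the fitted vectorizer exposes the learned IDF weights, so you can look at the least discriminative terms; with very recent scikit-learn versions you may need `get_feature_names_out()` instead.
```python
import numpy as np

idf = tfidfvectorizer.idf_
terms = np.array(tfidfvectorizer.get_feature_names())
print(terms[np.argsort(idf)[:10]])   # words with the lowest IDF, i.e. the most common ones
```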
End of explanation
# If we fix the minimum n-gram length at 1 and try maximum lengths up to 6,
# a maximum of 1 is selected, which is what we already had, hence not useful.
# If we force the min and max n-gram lengths to be equal, we can select (6,6), but the accuracies on the test set are very bad.
from sklearn.naive_bayes import MultinomialNB
#the grid of parameters to search over
n_grams = [1, 2, 3, 4, 5, 6]
best_min_df = 2 # YOUR TURN: put your value of min_df here.
best_alpha = 1
maxscore=-np.inf
for n_gram in n_grams:
vectorizer = CountVectorizer(min_df=best_min_df, ngram_range=(1, n_gram))
Xthis, ythis = make_xy(critics, vectorizer)
Xtrainthis = Xthis[mask]
ytrainthis = ythis[mask]
clf = MultinomialNB(alpha=best_alpha)
print(n_gram, cv_score(clf, Xtrainthis, ytrainthis, log_likelihood))
print()
for n_gram in n_grams:
vectorizer = CountVectorizer(min_df=best_min_df, ngram_range=(n_gram, n_gram))
Xthis, ythis = make_xy(critics, vectorizer)
Xtrainthis = Xthis[mask]
ytrainthis = ythis[mask]
clf = MultinomialNB(alpha=best_alpha)
print(n_gram, cv_score(clf, Xtrainthis, ytrainthis, log_likelihood))
vectorizer = CountVectorizer(min_df=best_min_df, ngram_range=(6, 6))
X, y = make_xy(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
Explanation: <div class="span5 alert alert-info">
<h3>Exercise Set VIII: Enrichment</h3>
<p>
There are several additional things we could try. Try some of these as exercises:
<ol>
<li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram contains 2 words, a trigram contains 3 words, and 6-gram contains 6 words. This is useful because "not good" and "so good" mean very different things. On the other hand, as n increases, the model does not scale well since the feature set becomes more sparse.
<li> Try a model besides Naive Bayes, one that would allow for interactions between words -- for example, a Random Forest classifier.
<li> Try adding supplemental features -- information about genre, director, cast, etc.
<li> Use word2vec or [Latent Dirichlet Allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) to group words into topics and use those topics for prediction.
<li> Use TF-IDF weighting instead of word counts.
</ol>
</p>
<b>Exercise:</b> Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result.
</div>
1. n_grams
End of explanation
# RF overtrained even more dramatically. Logistic regression did better than RF, but not better than we had.
from sklearn.ensemble import RandomForestClassifier
vectorizer = CountVectorizer(min_df=best_min_df)
X, y = make_xy(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = RandomForestClassifier(n_estimators=100).fit(xtrain, ytrain)
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
from sklearn.linear_model import LogisticRegression
vectorizer = CountVectorizer(min_df=best_min_df)
X, y = make_xy(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = LogisticRegression(penalty='l1', solver='liblinear').fit(xtrain, ytrain)   # liblinear supports the l1 penalty
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
Explanation: 2. RandomForest and Logistic regression
End of explanation
# Also overtrained and worse than we had.
from sklearn.feature_extraction.text import TfidfVectorizer
tfidfvectorizer = TfidfVectorizer(min_df=2, stop_words='english')
X, y = make_xy(critics, tfidfvectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: {:2f}".format(training_accuracy))
print("Accuracy on test data: {:2f}".format(test_accuracy))
Explanation: 5. TF-IDF weighting
End of explanation |
3,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Pragmatic Introduction to Perceptual Vector Quantization (part 1)
Luc Trudeau
Context
This guide explains Perceptual Vector Quantization by presenting it in a practical context. By pratical, I mean that we will implement the Perceptual Vector Quantization in python right here in this notebook. The intended audience is mainly programmers, not necessarily python programmers, and the idea is that you can leverage your programming skills to better understand Perceptual Vector Quantization.
Quantization
What is quantization anyway?
Let's load up an image an give an example of quantization
Step1: It's Quantizing time!
We will reduce the number of colors by a factor of 2. Each odd valued color will be replaced by the value of the even number right before it. For example, 43 becomes 42, we call this scalar quantization (scalar is just a fancy word for "one number").
Step2: I guess if you look hard enough you can see the difference, but the truth is your eye is not really good a differentiating even and odd shades of gray. We can remove more shades of gray and see what quantization does
Step3: Now I get Quantization! So let's "perceptually vector quantize" that tiger image!
Hold on there, turns out it's a lot more complicated, so we will keep that for part 2 in this series. For now, let's "perceptually vector quantize" something a little simpler, like this vector v
Step4: Vector quantization is a fancy way of saying that we want to convert the whole vector into a single code. This code is called a codeword. The idea is that all the codewords are defined in a codebook. Before we present the codebook, let's start with the concepts of gain and shape, which will be needed to better understand how codewords work.
Gain and Shape
To "perceptually vector quantize" v, we must first compute the norm of v
Step5: As you noticed, we refer to the norm of v as the gain of v, which represents the amount of energy in the v (I know there's a norm function in numpy, but doing it this way shows you why the gain measures energy).
The next thing we need is the unit vector of v
Step6: We refer to the unit vector of v as the shape of v. As its name suggest, the values of this vector show the shape of the energy distribution of v. The shape vector is in the same direction as v but is squaled to unit length.
We can get back v from shape like so
Step7: Instead of "perceptually vector quantizing" v, we will "perceptually vector quantize" shape and "scalar quantize" gain.
Wait a second, this requires "perceptually vector quantizing" 4 values, plus "scalar quantizing" the gain. How is this better than just "perceptually vector quantizing" the 4 values of v?
Step8: By using the shape of the v instead of the v itself, all vectors with the same shape will have the same codeword. Vectors with the same shape, are vectors that point in the same direction. In other words all different scales of the same vector. There's more to it than that, but for now let's focus on building the codebook.
Building the codebook
You might have imagined the codebook as a big book of codes, each code cherry picked by an engineer for optimal performance. This is not that type of codebook, basically a simple equation is used to generate all possible values. The equation is
Step9: Geometrically, we notice that the codewords form a diamond shape, where the edges are at values of k in one axis. When we perform vector quantization, we chose a the code word closest to v as the codeword representing v. The distance between the codeword and v is the error.
Let's examine some other codeboooks
Step10: Notice that when k=6, v is actually a codeword, because the absolute value of the elements of v sum to 6. In this case, there is no quantization error. Also notice that when k is greater than 6, there's also error.
I displayed these codebooks because the integer values make it more intuitive, but remember that we are "perceptually vector quantize" shape not v, so we need to normalize our codebook, like so
Step11: Woah! normalizing changed the shape looks like a circle.
Yes, and it's no ordinary circle it's a unit circle (I admit that's just a fancy word for a circle of radius 1)
Let's look at what happens when we increase k
Step12: Notice now, that by normalizing when k is a factor of 3 v is a valid code. Remember that example we did with v2? The absolute sum of its element is 3, and 3 is the smallest absolute sum of the shape of v.
Spoiler Alert
Step13: We did it!
We "perceptually vector quantized" v! You can check the codeword is in the codebook.
The burning question now is
Step14: Yikes! That's far from v.
Increasing k
We already know what happens when we increase k. If k = 3 the codeword should be a perfect match, let's try it out
Step15: By adding more codes to our codebook, we decrease the error. Let's draw a plot to better understand what happens to the error when we increase k. | Python Code:
%matplotlib inline
import numpy as np
from imageio import imread   # scipy.ndimage.imread was removed in recent SciPy versions; imageio provides an equivalent imread
import matplotlib.pyplot as plt
def showImage(im):
plt.imshow(im, cmap = plt.get_cmap('gray'), vmin = 0, vmax = 255)
plt.title("This image has %d shades of gray." % len(np.unique(im)))
im = imread("images/tiger.png")
im = im[:,:,1]
showImage(im)
Explanation: A Pragmatic Introduction to Perceptual Vector Quantization (part 1)
Luc Trudeau
Context
This guide explains Perceptual Vector Quantization by presenting it in a practical context. By practical, I mean that we will implement Perceptual Vector Quantization in Python right here in this notebook. The intended audience is mainly programmers, not necessarily Python programmers, and the idea is that you can leverage your programming skills to better understand Perceptual Vector Quantization.
Quantization
What is quantization anyway?
Let's load up an image and give an example of quantization:
End of explanation
def quantize(im, step):
return np.floor(im / step) * step
showImage(quantize(im, 2))
Explanation: It's Quantizing time!
We will reduce the number of colors by a factor of 2. Each odd valued color will be replaced by the value of the even number right before it. For example, 43 becomes 42, we call this scalar quantization (scalar is just a fancy word for "one number").
End of explanation
plt.figure(figsize=(15,10))
plt.subplot(2,2,1)
showImage(quantize(im, 16))
plt.subplot(2,2,2)
showImage(quantize(im, 31))
plt.subplot(2,2,3)
showImage(quantize(im, 62))
plt.subplot(2,2,4)
showImage(quantize(im, 125))
Explanation: I guess if you look hard enough you can see the difference, but the truth is your eye is not really good at differentiating even and odd shades of gray. We can remove more shades of gray and see what quantization does
End of explanation
v = np.array([4,2])
Explanation: Now I get Quantization! So let's "perceptually vector quantize" that tiger image!
Hold on there, turns out it's a lot more complicated, so we will keep that for part 2 in this series. For now, let's "perceptually vector quantize" something a little simpler, like this vector v:
End of explanation
def gain(v):
return np.sqrt(np.dot(v,v))
print("Norm (aka gain) of v = %f" % (gain(v)))
Explanation: Vector quantization is a fancy way of saying that we want to convert the whole vector into a single code. This code is called a codeword. The idea is that all the codewords are defined in a codebook. Before we present the codebook, let's start with the concepts of gain and shape, which will be needed to better understand how codewords work.
Gain and Shape
To "perceptually vector quantize" v, we must first compute the norm of v:
End of explanation
def shape(v):
return np.true_divide(v, gain(v))
print("Unit vector (aka shape): %s" % (shape(v)))
Explanation: As you noticed, we refer to the norm of v as the gain of v, which represents the amount of energy in v (I know there's a norm function in numpy, but doing it this way shows you why the gain measures energy).
The next thing we need is the unit vector of v:
End of explanation
print("v = %s" % (shape(v) * gain(v)))
Explanation: We refer to the unit vector of v as the shape of v. As its name suggests, the values of this vector show the shape of the energy distribution of v. The shape vector is in the same direction as v but is scaled to unit length.
We can get back v from shape like so:
End of explanation
v2 = [2,1]
print("Gain of v2 = %s" % (gain(v2)))
print("Shape of v2 = %s" % (shape(v2)))
assert(shape(v).all() == shape(v2).all())
Explanation: Instead of "perceptually vector quantizing" v, we will "perceptually vector quantize" shape and "scalar quantize" gain.
Wait a second, this requires "perceptually vector quantizing" the 2 values of the shape, plus "scalar quantizing" the gain. How is this better than just "perceptually vector quantizing" the 2 values of v?
End of explanation
k = 2
def build2DCodeBook(k):
codebook = []
for x0 in range(-k,k+1):
for x1 in range(-k,k+1):
if abs(x0) + abs(x1) == k:
codebook.append([x0, x1])
return np.array(codebook)
def showCodeBook(codebook, k):
plt.scatter(codebook[:][:,0], codebook[:][:,1], c="blue", label="Codewords", alpha=0.5, s=50)
plt.scatter(v[0], v[1], c="red", label="v", alpha=0.5, s=50)
plt.title("Codebook for k=%d" % (k))
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
codebook = build2DCodeBook(k)
print("Codebook for k=%d: \n%s" %(k,codebook))
showCodeBook(codebook, 2)
Explanation: By using the shape of v instead of v itself, all vectors with the same shape will have the same codeword. Vectors with the same shape are vectors that point in the same direction, in other words, all different scales of the same vector. There's more to it than that, but for now let's focus on building the codebook.
Building the codebook
You might have imagined the codebook as a big book of codes, each code cherry picked by an engineer for optimal performance. This is not that type of codebook: a simple equation is used to generate all possible values. The rule is: the absolute values of the codeword's elements must sum to k.
Just for fun (because we won't need it later) let's build the codebook. This is probably not the fastest way to build the codebook, but it should be easy to understand. We have 2 nested loops because we have 2 elements in v. Since the absolute value operator is used, valid values range from -k to k.
End of explanation
plt.figure(figsize=(15,10))
plt.subplot(2,2,1)
showCodeBook(build2DCodeBook(3), 3)
plt.subplot(2,2,2)
showCodeBook(build2DCodeBook(6), 6)
plt.subplot(2,2,3)
showCodeBook(build2DCodeBook(9), 9)
plt.subplot(2,2,4)
showCodeBook(build2DCodeBook(12), 12)
Explanation: Geometrically, we notice that the codewords form a diamond shape, whose corners lie at value k on each axis. When we perform vector quantization, we choose the codeword closest to v as the codeword representing v. The distance between the codeword and v is the error.
Let's examine some other codebooks
End of explanation
def shapeCodeBook(codebook):
shapedCodeBook = []
for codeword in codebook:
shapedCodeBook.append(shape(codeword))
return np.array(shapedCodeBook)
def showShapedCodeBook(codebook, k):
plt.scatter(codebook[:][:,0], codebook[:][:,1], c="blue", label="Codewords", alpha=0.5, s=50)
sh = shape(v)
plt.scatter(sh[0], sh[1], c="red", label="v", alpha=0.5, s=50)
plt.title("Codebook for k=%d" % (k))
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
shapedCodeBook = shapeCodeBook(codebook)
print("Normalized Codebook for k=%d: \n%s" %(k,shapedCodeBook))
showShapedCodeBook(shapedCodeBook, 2)
Explanation: Notice that when k=6, v is actually a codeword, because the absolute values of the elements of v sum to 6. In this case, there is no quantization error. Also notice that when k is greater than 6, the error comes back.
I displayed these codebooks because the integer values make them more intuitive, but remember that we are "perceptually vector quantizing" shape, not v, so we need to normalize our codebook, like so:
End of explanation
plt.figure(figsize=(15,15))
plt.subplot(3,2,1)
showShapedCodeBook(shapeCodeBook(build2DCodeBook(3)), 3)
plt.subplot(3,2,2)
showShapedCodeBook(shapeCodeBook(build2DCodeBook(6)), 6)
plt.subplot(3,2,3)
showShapedCodeBook(shapeCodeBook(build2DCodeBook(7)), 7)
plt.subplot(3,2,4)
showShapedCodeBook(shapeCodeBook(build2DCodeBook(12)), 12)
plt.subplot(3,2,5)
showShapedCodeBook(shapeCodeBook(build2DCodeBook(15)), 15)
plt.subplot(3,2,6)
showShapedCodeBook(shapeCodeBook(build2DCodeBook(20)), 20)
Explanation: Woah! Normalizing changed the shape: now it looks like a circle.
Yes, and it's no ordinary circle, it's a unit circle (I admit that's just a fancy word for a circle of radius 1).
Let's look at what happens when we increase k:
End of explanation
import sys
def findCode(v, k):
_shape = shape(v)
    # Here we spread k over the shape. Without rounding it sums to k, but the elements of the codeword must be integers
codeword = np.round(_shape/sum(np.abs(_shape))*k)
sa = sum(np.abs(codeword))
if sa != k:
step = np.sign(k - sa)
while sa != k:
minsse = sys.maxsize
for i in range(0,2): # Iteratively apply step to every element and keep the best.
codeword[i] = codeword[i] + step
sse = sum((_shape - shape(codeword))**2)
if sse < minsse:
bestI = i
minsse = sse
codeword[i] = codeword[i] - step #Undo the step
codeword[bestI] = codeword[bestI] + step # Perform best step
sa = sa + step
return codeword
print("Perceptual Vector Quantization of v = %s " % (shape(findCode(v,2))))
Explanation: Notice now that, after normalizing, v is a valid codeword whenever k is a multiple of 3. Remember that example we did with v2? The absolute sum of its elements is 3, and 3 is the smallest absolute sum that the shape of v can be scaled to with integer elements.
Spoiler Alert: Also notice how the dots are distributed over the circle. They concentrate near the axes. This indicates that we have less precision when coding uniform vectors. The reason I mention this is that in part 2, we will use the DCT to decorrelate the vectors so they will not be uniform.
Finding the code
Aren't we supposed to "vector quantize" something at some point?
Almost there, we just need to specify a value for k. The "vector quantized" value of v depends on k. For this example, let's continue with k = 2.
A nice feature of our codebook is that we don't need it (best feature ever). We know the rule: the sum of the absolute values of the elements of the codeword must be equal to k. So, instead of searching the codebook, all we need to do is find the smallest change to v such that the sum of the absolute values of its elements is k.
One way of doing this is to spread k over the distribution and round it out. If the sum of the absolute values is not k, we step closer to k with every iteration.
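To make the stepping concrete, here is an illustrative trace (not from the original article) for v = [4, 2] and k = 2: the initial rounding already lands on a valid codeword, so no correction steps are needed.
```python
# Illustrative trace for v = [4, 2], k = 2
_shape = shape(v)                                    # approx. [0.894, 0.447]
print(np.round(_shape / sum(np.abs(_shape)) * 2))    # -> [1. 1.], absolute values already sum to k
```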
End of explanation
def computeError(codeword, v):
recon = shape(codeword) * gain(v)
print("Reconstructed v = %s" % (recon))
print("Sum of Absolute Difference: %f" % (sum(abs(recon - v))))
print("Sum of Squared Error: %f" % (sum((recon - v)**2)))
computeError(findCode(v,2), v)
Explanation: We did it!
We "perceptually vector quantized" v! You can check the codeword is in the codebook.
The burning question now is: "how good is our quantization of v?". Let's find out:
End of explanation
computeError(findCode(v,3), v)
Explanation: Yikes! That's far from v.
Increasing k
We already know what happens when we increase k. If k = 3 the codeword should be a perfect match, let's try it out:
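As a quick sanity check (illustrative), you can also print the selected codeword itself:
```python
print(findCode(v, 3))   # -> [2. 1.], the same direction as v, so the reconstruction is exact
```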
End of explanation
plt.figure(figsize=(15,5))
sad = []
sse = []
numKs = 20
for k in range(2,numKs):
recon = shape(findCode(v,k)) * gain(v)
print("k = %d Reconstructed Codeword = %s" % (k, recon))
sad.append(sum(abs(recon - v)))
sse.append(sum((recon - v)**2))
plt.plot(range(2,numKs), sad, label='SAD', c="orange")
plt.plot(range(2,numKs), sse, label='SSE', c="red")
plt.xlabel("k")
legend = plt.legend(loc='upper center', shadow=True)
Explanation: By adding more codes to our codebook, we decrease the error. Let's draw a plot to better understand what happens to the error when we increase k.
End of explanation |
3,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Astronomical python packages
In this lecture we will introduce the astropy library and the
affiliated package astroquery.
The official documents of these packages are available at
Step1: To have some information about the file we just open, we can use the fits.info method
Step2: Then, we can decide to open an exstension, check its shape and eventually plot it with
matplolib imshow.
Step3: Reading a FITS file - the general way
In a more general way, it is better to define a HDU list (Header Data Unit list) which
contains the list of all the goodies stored in a FITS file.
Then, get only what we need. First, let's open the list
Step4: Next, we can extract header and data from the first extension.
We can do this in two ways
Step5: FITS files are read in such a way that the first axis (often the RA for astronomical images) is read in as the last axis in the numpy array. Be sure to double check that you have the axis you need.
We can, at this point, close the file.
Step6: Now, let's explore the header.
To print in a human readable way, it's useful to use the repr function which adds line breaks in between the keywords
Step7: We can access the list of keywords, values, a specific keyword or comment
Step8: To extract the astrometry, we will use wcs package inside astropy.
This will allow us to display the image with astronomical coordinates
Step9: In the following, a plot with astrometry and some label customization.
For more details, have a look at this page
Step10: Coordinate transformations
Astropy
provides a way of dealing with coordinates, and automatically deal with conversions
Step11: From pixel to coordinates and vice versa
The wcs object contains functions that conversion from pixel to world coordinates and vice versa.
```python
From pixel => world
Step12: Cutouts
It is not infrequent that we need only a part of an image. So, we would like to extract this part and save another FITS file with correct astrometry.
We can do this using the class Cutout2D.
This class allows one to create a cutout object from a 2D array. If a WCS object is input, then the returned object will also contain a copy of the original WCS, but updated for the cutout array.
Step13: Save the new FITS file
To save the new fits we have to create the header, then the extensions,
finally pack all the extensions in a list and write the list to a file.
```python
Making a Primary HDU (required)
Step14: Tables in astropy
While you can use the FITS interface to open tables,
Astropy makes it very easy and convienientwith the astropy.table interface.
For an extensive help on Tables, have a look to the documentation page
Step15: Once imported, a table can be shown with a fancy notebook interface
Step16: Or more simply printed
Step17: The format can be fixed
Step18: A table is both a
dictionary-like and numpy array-like data type that can either be accessed by key (for columns) or index (for rows)
Step19: Making a table
To make a table manually is easy with Numpy arrays
Step20: To show the table in a browser in a nicely formatted manner, you can do
Step21: Astronomical units
Astropy
provides a way to manipulate quantities, automatically taking care of unit conversions automatically.
```python
from astropy import units as u
Defining Quantities with units
Step22: Astronomical constants
Astropy
also provides constants (with units).
```python
from astropy import constants as c
Some constants
c.k_B, c.c, c.M_sun, c.L_sun
Can use with units
energy = c.h* 30 * u.Ghz
Can convert units
mass = (3.2E13 * u.kg).to(c.M_sun)
```
The list of available constant is on
Step23: Most constant can be converted in cgs units simply using the "cgs" method
Step24: Astronomical query
There are lots of possible databases to query with astroquery. Let's see an example with
the SDSS query.
To access the SDSS, there is a package called astroquery.sdss.
We will import this and also the coordinate package from astropy.
Let's look for a particular object and explore its FITS files for imaging and spectroscopy.
We require the object to be selected only if the spectrum is available in SDSS.
Step25: Now, we can get the spectra and images for this list of objects using the following commands. We will obtain a list with as many objects a the list from xid.
In this case, only one object.
Step26: We can also access the SDSS template library. For instance, we will get qso template with the command
Step27: Let's go back to our image.
In this case the HDU list is the first element of the list.
We can explore what is inside using the .info method
Step28: Now, let's get the data.
Step29: In the case we want to display the histogram of intensity values
Step30: We can be interested in displaying the image with astrometry.
Let's consider a cutout around the target galaxy and overlap the contours.
Step31: Overplot two images with different astrometry
Step32: Versions | Python Code:
from astropy.utils.data import download_file
from astropy.io import fits
image_file = download_file('http://data.astropy.org/tutorials/FITS-images/HorseHead.fits',
cache=True)
Explanation: Astronomical python packages
In this lecture we will introduce the astropy library and the
affiliated package astroquery.
The official documents of these packages are available at:
http://docs.astropy.org/en/stable/index.html#
https://astroquery.readthedocs.io/en/latest/
To install these packages with conda:
bash
conda install astropy
conda install -c astropy astroquery
There are many more packages affiliated to astropy which can be of interest
to you. You can find the list at:
http://www.astropy.org/affiliated/
FITS files
FITS files are by far the favorite format to store and distribute astronomical data.
They come in two flavors:
images
tables
To read these two types of files we will need to import from astropy two different
sub-libraries:
python
from astropy.io import fits
from astropy.table import Table
FITS files can store multi-dimensional data (commonly 2 or 3 dimensions).
Any given FITS file can contain multiple images (or tables) called extensions.
Every FITS extension contains a header and data.
FITS headers can contain World Coordinate System (wcs) information that indicates where a given pixel is on the sky.
Unlike Python, the FITS convention is indexing starting at 1.
Generally, astropy takes this into account.
Reading a FITS file
Convenience functions make reading FITS images easy.
python
from astropy.io import fits
img1 = fits.getdata(filename) # Getting the image
head1 = fits.getheader(filename) # and the Header
This opens the image as a Numpy array, and the header as a
“dictionary-like” object (i.e., you can access the individual header keywords through “head1[‘key’]” ).
To open other extensions in the fits file:
python
img1 = fits.getdata(filename, 0) # Primary Ext
img2 = fits.getdata(filename, 1) # Second Ext
img2 = fits.getdata(filename, ext=1) # Equivalent
It is possible to import a FITS file also using an URL.
This is done with the download_file function.
End of explanation
fits.info(image_file)
Explanation: To have some information about the file we just open, we can use the fits.info method:
End of explanation
image_data = fits.getdata(image_file, ext=0)
image_data.shape
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.imshow(image_data, cmap='gist_heat',origin='lower')
plt.colorbar();
Explanation: Then, we can decide to open an exstension, check its shape and eventually plot it with
matplolib imshow.
End of explanation
hdulist = fits.open(image_file)
hdulist.info()
Explanation: Reading a FITS file - the general way
In a more general way, it is better to define a HDU list (Header Data Unit list) which
contains the list of all the goodies stored in a FITS file.
Then, get only what we need. First, let's open the list:
End of explanation
header = hdulist['PRIMARY'].header
data = hdulist['PRIMARY'].data
Explanation: Next, we can extract header and data from the first extension.
We can do this in two ways:
by specifying the extension number
by specifying the extension name, if defined
End of explanation
hdulist.close()
Explanation: FITS files are read in such a way that the first axis (often the RA for astronomical images) is read in as the last axis in the numpy array. Be sure to double check that you have the axis you need.
We can, at this point, close the file.
End of explanation
print(repr(header[:10])) # Beginning of the header
Explanation: Now, let's explore the header.
To print in a human readable way, it's useful to use the repr function which adds line breaks in between the keywords:
End of explanation
print (header[:10].keys())
print (header[:10].values())
print (header['ORIGIN'])
print (header.comments['ORIGIN'])
Explanation: We can access the list of keywords, values, a specific keyword or comment:
End of explanation
from astropy.wcs import WCS
wcs = WCS(header)
print(wcs)
Explanation: To extract the astrometry, we will use wcs package inside astropy.
This will allow us to display the image with astronomical coordinates
End of explanation
fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8], projection=wcs)
#ax = plt.subplot(projection=wcs)
ax.set_xlabel('RA')
ax.set_ylabel('Dec')
ax.imshow(data, cmap='gist_heat',origin='lower')
ra = ax.coords[0]
ra.set_major_formatter('hh:mm:ss')
dec = ax.coords[1]
dec.set_major_formatter('dd:mm:ss');
Explanation: In the following, a plot with astrometry and some label customization.
For more details, have a look at this page:
https://github.com/astropy/astropy-api/blob/master/wcs_axes/wcs_api.md
End of explanation
from astropy.coordinates import SkyCoord
c0 = SkyCoord('5h41m00s','-2d27m00s',frame='icrs')
print(c0)
Explanation: Coordinate transformations
Astropy
provides a way of dealing with coordinates and automatically deals with conversions:
```python
from astropy.coordinates import SkyCoord
# Making Coordinates:
c1 = SkyCoord(ra, dec, frame='icrs', unit='deg')
c2 = SkyCoord(l, b, frame='galactic', unit='deg')
c3 = SkyCoord('00h12m30s', '+42d12m00s')
# Printing and Conversions:
c1.ra, c1.dec, c1.ra.hour, c2.ra.hms, c3.dec.dms
c2.fk5, c1.galactic  # Converting Coordinates
c2.to_string('decimal'), c1.to_string('hmsdms')
```
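One more thing you can do with SkyCoord objects (not shown in the original notes) is compute angular separations; the two positions below are made up.
```python
from astropy.coordinates import SkyCoord

c1 = SkyCoord('5h41m00s', '-2d27m00s', frame='icrs')
c2 = SkyCoord('5h40m00s', '-2d30m00s', frame='icrs')
print(c1.separation(c2).arcmin)   # angular separation in arcminutes
```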
For instance, let's compute the coordinates of the center of the horse head:
End of explanation
center = wcs.all_world2pix(c0.ra,c0.dec,0)
print (center)
Explanation: From pixel to coordinates and vice versa
The wcs object contains functions that convert from pixel to world coordinates and vice versa.
```python
# From pixel => world:
ra, dec = w.all_pix2world(xpx, ypx, 0)   # Can be lists
# The third parameter indicates if you're starting
# from 0 (Python-standard) or 1 (FITS-standard)
# From world => pixel:
xpx, ypx = w.all_world2pix(ra, dec, 0)
```
End of explanation
from astropy.nddata import Cutout2D
size=400
cutout = Cutout2D(data, center, size, wcs=wcs)
print(cutout.bbox_original)
fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8],
projection=cutout.wcs)
#ax = plt.subplot(projection=wcs)
ax.set_xlabel('RA')
ax.set_ylabel('Dec')
ax.imshow(cutout.data, cmap='gist_heat',origin='lower')
ra = ax.coords[0]
ra.set_major_formatter('hh:mm:ss')
dec = ax.coords[1]
dec.set_major_formatter('dd:mm:ss');
Explanation: Cutouts
It is not infrequent that we need only a part of an image. So, we would like to extract this part and save another FITS file with correct astrometry.
We can do this using the class Cutout2D.
This class allows one to create a cutout object from a 2D array. If a WCS object is input, then the returned object will also contain a copy of the original WCS, but updated for the cutout array.
End of explanation
cheader = cutout.wcs.to_header()
primaryhdu = fits.PrimaryHDU(cutout.data, cheader)
hdulist = fits.HDUList([primaryhdu])
hdulist.writeto('horse.fits', overwrite=True)
Explanation: Save the new FITS file
To save the new fits we have to create the header, then the extensions,
finally pack all the extensions in a list and write the list to a file.
```python
# Making a Primary HDU (required):
primaryhdu = fits.PrimaryHDU(arr1)   # Makes a primary HDU from the array
# or if you have a header that you've created:
primaryhdu = fits.PrimaryHDU(arr1, header=head1)
# If you have additional extensions:
secondhdu = fits.ImageHDU(arr2)
# Making a new HDU List:
hdulist1 = fits.HDUList([primaryhdu, secondhdu])
# Writing the file:
hdulist1.writeto(filename, overwrite=True)
```
The overwrite=True instruction is given to allow rewriting over an existing file (older astropy versions called this argument clobber). Otherwise, astropy refuses to overwrite an existing file.
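As a quick check (illustrative), you can confirm what ended up in the new file:
```python
fits.info('horse.fits')
```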
End of explanation
hdulist = fits.open(image_file)
hdulist.info()
Explanation: Tables in astropy
While you can use the FITS interface to open tables,
Astropy makes it very easy and convenient with the astropy.table interface.
For an extensive help on Tables, have a look to the documentation page:
http://docs.astropy.org/en/stable/table/
```python
from astropy.table import Table
# Getting the first table
t1 = Table.read('filename.fits')
# Getting the second table
t2 = Table.read('filename.fits', hdu=2)
```
This provides a
really flexible Table object that is a pleasure to deal with. It is easy to access different types of data, and read in and output to a wide variety of formats (not just FITS). Let's open the table in the extension 1 of the previous file:
End of explanation
from astropy.table import Table
t = Table.read(image_file, hdu=1)
t[:10].show_in_notebook()
Explanation: Once imported, a table can be shown with a fancy notebook interface:
End of explanation
print(t[:10])
Explanation: Or more simply printed:
End of explanation
t['ETA'].format = '4.1f'
print(t[:10])
Explanation: The format can be fixed:
End of explanation
import numpy as np
print(t[np.where(t['ETA_CORR'] > 0.8)])
Explanation: A table is both a
dictionary-like and numpy array-like data type that can either be accessed by key (for columns) or index (for rows):
```python
# Getting column names, number of rows:
t1.colnames, len(t1)
# Getting specific columns:
t1['name1'], t1[['name1', 'name2']]
# Getting specific rows (all normal indexing works):
t1[0], t1[:3], t1[::-1]
# Where searching also works:
inds = np.where(t1['name1'] > 5)
subtable = t1[inds]   # Gets all columns
```
For instance:
End of explanation
import numpy as np
from astropy.table import Table
%matplotlib inline
import matplotlib.pyplot as plt
a = np.arange(0,10,0.1)
b = a**2
t1 = Table([a, b], names=('a', 'b'))
plt.plot(t1['a'],t1['b']);
Explanation: Making a table
To make a table manually is easy with Numpy arrays:
```python
Given two columns (1D) arr1 and arr2:
t1 = Table([arr1, arr2], names=('a', 'b'))
The columns are named “a” and “b”.
Adding an additional column:
col1 = Table.Column(name='c', data=arr3)
t1.add_column(col1)
Adding an additional row:
row = np.array([1, 2, 3])
t1.add_row(row)
```
End of explanation
t1.write('table.txt',format='ascii.tab',overwrite=True)
Explanation: To show the table in a browser in a nicely formatted manner, you can do:
python
t1.show_in_browser()
Saving a table
Writing out a table is also quite simple:
```python
Writing out FITS table:
t1.write(filename.fits)
Writing out specific text type:
t1.write(filename.txt,format=‘ascii.tab’)
Can even write out to LaTeX:
t1.write(filename.tex, format=‘ascii.latex’)
```
End of explanation
from astropy import units as u
val = 30.0 * u.cm
print(val.to(u.km))
# convert
val1 = 10 * u.km
val2 = 100. * u.m
# simplify
print((val1/val2).decompose())
Explanation: Astronomical units
Astropy
provides a way to manipulate quantities, automatically taking care of unit conversions.
```python
from astropy import units as u
# Defining Quantities with units:
val1, val2 = 30.2 * u.cm, 2.2E4 * u.s
val3 = val1/val2   # Will be in units cm / s
# Converting Units
val3km = val3.to(u.km/u.s)
# Simplifying Units
val4 = (10.3 * u.s / (3 * u.Hz)).decompose()
```
End of explanation
from astropy import constants as c
print('solar mass:', c.M_sun.value, c.M_sun.unit, '\n')
print (c.c)
Explanation: Astronomical constants
Astropy
also provides constants (with units).
```python
from astropy import constants as c
# Some constants
c.k_B, c.c, c.M_sun, c.L_sun
# Can use with units
energy = c.h * 30 * u.GHz
# Can convert units
mass = (3.2E13 * u.kg).to(u.M_sun)
```
The list of available constants is at: http://docs.astropy.org/en/stable/constants/
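As a small illustration (not in the original notes), constants and units can be combined and converted directly:
```python
from astropy import constants as c
from astropy import units as u

# Energy of a 30 GHz photon, expressed in electron volts
print((c.h * 30 * u.GHz).to(u.eV))
```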
End of explanation
print(c.c.cgs)
Explanation: Most constant can be converted in cgs units simply using the "cgs" method:
End of explanation
from astroquery.sdss import SDSS
from astropy import coordinates as coords
pos = coords.SkyCoord('13h10m27.46s +18d26m17.4s',
frame='icrs')
xid = SDSS.query_region(pos, spectro=True)
xid
Explanation: Astronomical query
There are lots of possible databases to query with astroquery. Let's see an example with
the SDSS query.
To access the SDSS, there is a package called astroquery.sdss.
We will import this and also the coordinate package from astropy.
Let's look for a particular object and explore its FITS files for imaging and spectroscopy.
We require the object to be selected only if the spectrum is available in SDSS.
End of explanation
sp = SDSS.get_spectra(matches=xid)
im = SDSS.get_images(matches=xid, band='r')
print(len(sp), len(im))
Explanation: Now, we can get the spectra and images for this list of objects using the following commands. We will obtain a list with as many objects a the list from xid.
In this case, only one object.
End of explanation
template = SDSS.get_spectral_template('qso')
print(len(template))
Explanation: We can also access the SDSS template library. For instance, we will get qso template with the command:
End of explanation
hdulist = im[0]
hdulist.info()
Explanation: Let's go back to our image.
In this case the HDU list is the first element of the list.
We can explore what is inside using the .info method:
End of explanation
header = hdulist[0].header
data = hdulist[0].data # image in 1st extension
print (data.shape, data.dtype.name)
#data = hdulist['PRIMARY'].data
#print (data.shape, data.dtype.name)
import numpy as np
plt.imshow(np.sqrt(data+1.),origin='lower',
cmap='gist_heat',vmax=1.1,vmin=0.9)
plt.colorbar();
Explanation: Now, let's get the data.
End of explanation
# How to display an histogram of the intensity values
fig,ax = plt.subplots()
ax.set_yscale('log')
ax.hist(data.ravel(),200)
ax.set_xlim([0,100]);
Explanation: In the case we want to display the histogram of intensity values:
End of explanation
c0 = SkyCoord('13h10m27.46s','18d26m17.4s',frame='icrs')
wcs = WCS(header)
center = wcs.all_world2pix(c0.ra,c0.dec,0)
size=400
cutout = Cutout2D(data, center, size, wcs=wcs)
ax = plt.subplot(projection=cutout.wcs)
ra = ax.coords[0]
ra.set_major_formatter('hh:mm:ss')
dec = ax.coords[1]
dec.set_major_formatter('dd:mm:ss')
ax.set_xlabel('RA')
ax.set_ylabel('Dec')
ax.imshow(np.sqrt(cutout.data+1.), cmap='gist_heat',
origin='lower',vmax=1.1,vmin=0.9,aspect='auto');
a = np.sqrt(cutout.data+1.)
mina=np.min(a)
maxa=np.max(a)
levels = np.arange(mina,maxa,(maxa-mina)/20.)
labels = [item.get_text() for item in
ax.get_xticklabels()]
ax.contour(a, levels, colors='cyan');
from astroquery.ukidss import Ukidss
import astropy.units as u
import astropy.coordinates as coord
image_urls = Ukidss.get_image_list(c0, frame_type='interleave', radius=5 * u.arcmin, waveband='K', programme_id='LAS')
Explanation: We can be interested in displaying the image with astrometry.
Let's consider a cutout around the target galaxy and overlay the contours.
End of explanation
from astroquery.skyview import SkyView
survey = 'WISE 12'
sv = SkyView()
paths = sv.get_images(position='M 82',
survey=['WISE 12','GALEX Near UV'])
from astropy.wcs import WCS
wcs1 = WCS(paths[0][0].header)
wcs2 = WCS(paths[1][0].header)
fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8], projection=wcs1)
ax.imshow(paths[0][0].data, origin='lower',
cmap='gist_heat_r')
ima2 = paths[1][0].data
levels = np.arange(np.nanmin(ima2),np.nanmax(ima2), 1.)
levels = np.nanmin(ima2)+[0.02,0.09,0.2]
ax.contour(ima2,levels, transform=ax.get_transform(wcs2),
colors='r')
plt.xlabel('RA')
plt.ylabel('Dec')
plt.show()
Explanation: Overplot two images with different astrometry
End of explanation
%reload_ext version_information
%version_information numpy, astropy
Explanation: Versions
End of explanation |
3,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bidirectional A$^*$ Search
Step1: The function search takes three arguments to solve a search problem
Step2: Given a state and a parent dictionary Parent, the function path_to returns a path leading to the given state.
Step3: The function combinePath takes three parameters
Step4: Let's draw the start state and animate the solution that has been found.
Step5: Let's try the real thing. | Python Code:
import sys
sys.path.append('..')
from Set import Set
Explanation: Bidirectional A$^*$ Search
End of explanation
def search(start, goal, next_states, heuristic):
estimate = heuristic(start, goal)
ParentA = { start: start }
ParentB = { goal : goal }
DistanceA = { start: 0 }
DistanceB = { goal : 0 }
EstimateA = { start: estimate }
EstimateB = { goal : estimate }
FrontierA = Set()
FrontierB = Set()
FrontierA.insert( (estimate, start) )
FrontierB.insert( (estimate, goal ) )
while FrontierA and FrontierB:
guessA, stateA = FrontierA.pop()
guessB, stateB = FrontierB.pop()
stateADist = DistanceA[stateA]
stateBDist = DistanceB[stateB]
if guessA <= guessB:
FrontierB.insert( (guessB, stateB) )
for ns in next_states(stateA):
oldEstimate = EstimateA.get(ns, None)
newEstimate = stateADist + 1 + heuristic(ns, goal)
if oldEstimate is None or newEstimate < oldEstimate:
ParentA [ns] = stateA
DistanceA[ns] = stateADist + 1
EstimateA[ns] = newEstimate
FrontierA.insert( (newEstimate, ns) )
if oldEstimate is not None:
FrontierA.delete( (oldEstimate, ns) )
if DistanceB.get(ns, None) is not None:
stateNum = len(DistanceA) + len(DistanceB)
print('number of states:', stateNum)
return combinePaths(ns, ParentA, ParentB)
else:
FrontierA.insert( (guessA, stateA) )
for ns in next_states(stateB):
oldEstimate = EstimateB.get(ns, None)
newEstimate = stateBDist + 1 + heuristic(start, ns)
if oldEstimate is None or newEstimate < oldEstimate:
ParentB [ns] = stateB
DistanceB[ns] = stateBDist + 1
EstimateB[ns] = newEstimate
FrontierB.insert( (newEstimate, ns) )
if oldEstimate is not None:
FrontierB.delete( (oldEstimate, ns) )
if DistanceA.get(ns, None) is not None:
stateNum = len(DistanceA) + len(DistanceB)
print('number of states:', stateNum)
return combinePaths(ns, ParentA, ParentB)
Explanation: The function search takes four arguments to solve a search problem:
- start is the start state of the search problem,
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.
- heuristic is a function that takes two states as arguments. It returns an estimate of the
length of the shortest path between these states.
If successful, search returns a path from start to goal that is a solution of the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle. $$
The function search implements bidirectional A$^*$ search.
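The Set class imported at the top is assumed to behave like an ordered set of (estimate, state) pairs that pops its smallest element and supports deletion. If that module is not available, a rough heap-based stand-in (my sketch, not the original implementation) could look like this:
```python
import heapq

class PrioritySet:
    """Rough stand-in for Set: pops the smallest item, supports insert and delete."""
    def __init__(self):
        self.heap    = []      # min-heap of items
        self.deleted = set()   # items removed lazily
    def insert(self, item):
        self.deleted.discard(item)
        heapq.heappush(self.heap, item)
    def delete(self, item):
        self.deleted.add(item)
    def pop(self):
        while True:
            item = heapq.heappop(self.heap)
            if item in self.deleted:
                self.deleted.discard(item)
            else:
                return item
    def __bool__(self):
        return any(item not in self.deleted for item in self.heap)
```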
End of explanation
def path_to(state, Parent):
p = Parent[state]
if p == state:
return [state]
return path_to(p, Parent) + [state]
Explanation: Given a state and a parent dictionary Parent, the function path_to returns a path leading to the given state.
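A tiny illustration with a made-up parent dictionary:
```python
Parent = {'a': 'a', 'b': 'a', 'c': 'b'}
print(path_to('c', Parent))   # -> ['a', 'b', 'c']
```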
End of explanation
def combinePaths(state, ParentA, ParentB):
Path1 = path_to(state, ParentA)
Path2 = path_to(state, ParentB)
return Path1[:-1] + Path2[::-1] # Path2 is reversed
Explanation: The function combinePaths takes three parameters:
- state is a state that has been reached in the bidirectional search from both start and goal.
- ParentA is the parent dictionary that has been built when searching from start.
If $\texttt{ParentA}[s_1] = s_2$ holds, then either $s_1 = s_2 = \texttt{start}$ or
$s_1 \in \texttt{next_states}(s_2)$.
- ParentB is the parent dictionary that has been built when searching from goal.
If $\texttt{ParentB}[s_1] = s_2$ holds, then either $s_1 = s_2 = \texttt{goal}$ or
$s_1 \in \texttt{next_states}(s_2)$.
The function returns a path from start to goal.
End of explanation
%run Sliding-Puzzle.ipynb
%load_ext memory_profiler
%%time
%memit Path = search(start, goal, next_states, manhattan)
print(len(Path)-1)
animation(Path)
Explanation: Let's draw the start state and animate the solution that has been found.
End of explanation
%%time
Path = search(start2, goal2, next_states, manhattan)
print(len(Path)-1)
animation(Path)
Explanation: Let's try the real thing.
End of explanation |
3,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
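For orientation, every property cell below follows the same two-step pattern; the comment block sketches it with hypothetical placeholders rather than actual NORESM2-MM values:
# Fill-in pattern used by all property cells in this notebook (placeholders only):
#   DOC.set_id('cmip6.atmos.<property path>')   # selects the property (already provided in each cell)
#   DOC.set_value("<your answer>")              # records the answer, as shown in each cell's "Set as follows" comment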
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
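For a boolean property such as this one, the value is set without quotes; the value shown is illustrative only.
DOC.set_value(True)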
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
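For a float property such as this one, a completed cell might read as below; 1361.0 W m-2 is a typical modern total solar irradiance value and is given purely as an illustration.
DOC.set_value(1361.0)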
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
3,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Splitting dataset and writing TF Records
This notebook shows you how to split a dataset into training, validation, and testing sets, and write the corresponding images into TensorFlow Record files.
Step1: Writing TF Records using Apache Beam
For speed, we'll illustrate writing just 5 records
Step2: Running on Dataflow
Apache Beam code can be executed in a serverless way using Cloud Dataflow.
The key thing is to
Step3: <img src="dataflow_pipeline.png" width="75%"/> | Python Code:
import pandas as pd
df = pd.read_csv('gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/all_data.csv', names=['image','label'])
df.head()
import numpy as np
np.random.seed(10)
rnd = np.random.rand(len(df))
train = df[ rnd < 0.8 ]
valid = df[ (rnd >= 0.8) & (rnd < 0.9) ]
test = df[ rnd >= 0.9 ]
print(len(df), len(train), len(valid), len(test))
%%bash
rm -rf output
mkdir output
train.to_csv('output/train.csv', header=False, index=False)
valid.to_csv('output/valid.csv', header=False, index=False)
test.to_csv('output/test.csv', header=False, index=False)
!head output/test.csv
Explanation: Splitting dataset and writing TF Records
This notebook shows you how to split a dataset into training, validation, and testing sets, and write the corresponding images into TensorFlow Record files.
End of explanation
outdf = test.head()
len(outdf)
outdf.values
!gsutil cat gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dict.txt
import tensorflow as tf
with tf.io.gfile.GFile('gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dict.txt', 'r') as f:
LABELS = [line.rstrip() for line in f]
print('Read in {} labels, from {} to {}'.format(
len(LABELS), LABELS[0], LABELS[-1]))
import apache_beam as beam
import tensorflow as tf
def _string_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def _float_feature(value):
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def read_and_decode(filename):
IMG_CHANNELS = 3
img = tf.io.read_file(filename)
img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)
img = tf.image.convert_image_dtype(img, tf.float32)
return img
def create_tfrecord(filename, label, label_int):
print(filename)
img = read_and_decode(filename)
dims = img.shape
img = tf.reshape(img, [-1]) # flatten to 1D array
return tf.train.Example(features=tf.train.Features(feature={
'image': _float_feature(img),
'shape': _int64_feature([dims[0], dims[1], dims[2]]),
'label': _string_feature(label),
'label_int': _int64_feature([label_int])
})).SerializeToString()
with beam.Pipeline() as p:
(p
| 'input_df' >> beam.Create(outdf.values)
| 'create_tfrecord' >> beam.Map(lambda x: create_tfrecord(x[0], x[1], LABELS.index(x[1])))
| 'write' >> beam.io.tfrecordio.WriteToTFRecord('output/train')
)
!ls -l output/train*
## splitting in Apache Beam
def hardcoded(x, desired_split):
split, rec = x
print('hardcoded: ', split, rec, desired_split, split == desired_split)
if split == desired_split:
yield rec
with beam.Pipeline() as p:
splits = (p
| 'input_df' >> beam.Create([
('train', 'a'),
('train', 'b'),
('valid', 'c'),
('valid', 'd')
]))
split = 'train'
_ = (splits
| 'h_only_{}'.format(split) >> beam.FlatMap(
lambda x: hardcoded(x, 'train'))
)
split = 'valid'
_ = (splits
| 'h_only_{}'.format(split) >> beam.FlatMap(
lambda x: hardcoded(x, 'valid'))
)
Explanation: Writing TF Records using Apache Beam
For speed, we'll illustrate writing just 5 records
End of explanation
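To verify what was written, the records can be read back with a feature spec that mirrors create_tfrecord. This is only a sketch; the file pattern assumes the local 'output/train' prefix used above.
import tensorflow as tf
def parse_tfrecord(serialized):
    # Feature spec mirrors the Example written by create_tfrecord
    feature_spec = {
        'image': tf.io.VarLenFeature(tf.float32),
        'shape': tf.io.FixedLenFeature([3], tf.int64),
        'label': tf.io.FixedLenFeature([], tf.string),
        'label_int': tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    # Restore the flattened pixel values to their original height x width x channels
    img = tf.reshape(tf.sparse.to_dense(parsed['image']), parsed['shape'])
    return img, parsed['label_int']
ds = tf.data.TFRecordDataset(tf.io.gfile.glob('output/train-*')).map(parse_tfrecord)
for img, label_int in ds.take(1):
    print(img.shape, label_int.numpy())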
%%bash
PROJECT=$(gcloud config get-value project)
BUCKET=${PROJECT}
python3 -m jpeg_to_tfrecord \
--all_data gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/all_data.csv \
--labels_file gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dict.txt \
--project_id $PROJECT \
--output_dir gs://${BUCKET}/data/flower_tfrecords
Explanation: Running on Dataflow
Apache Beam code can be executed in a serverless way using Cloud Dataflow.
The key thing is to:
<br/>
Replace beam.Pipeline() by:
<pre>
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': JOBNAME,
'project': PROJECT,
'teardown_policy': 'TEARDOWN_ALWAYS',
'save_main_session': True
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
with beam.Pipeline(RUNNER, options=opts) as p:
</pre>
End of explanation
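Putting those pieces together, a minimal sketch of running the same pipeline on Dataflow might look like this; the project id, bucket, job name and region below are placeholders rather than values from the book.
import os
import apache_beam as beam
PROJECT = 'my-project-id'   # placeholder project id
BUCKET = PROJECT            # mirrors the bash cell above
OUTPUT_DIR = 'gs://{}/data/flower_tfrecords'.format(BUCKET)
options = {
    'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
    'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
    'job_name': 'jpeg-to-tfrecord',   # placeholder job name
    'project': PROJECT,
    'region': 'us-central1',          # placeholder region
    'teardown_policy': 'TEARDOWN_ALWAYS',
    'save_main_session': True
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
with beam.Pipeline('DataflowRunner', options=opts) as p:
    pass  # add the same transforms as in the local pipeline above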
!gsutil ls -l gs://ai-analytics-solutions/data/flower_tfrecords/*-00001-*
Explanation: <img src="dataflow_pipeline.png" width="75%"/>
End of explanation |
3,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recurrent neural networks
Import various modules that we need for this notebook (now using Keras 1.0.0)
Step1: With the required modules imported, we can now load and prepare the data.
I. Load IMDB
Load the full IMDB dataset from raw text. The first step is to load each text snippet into a Python list.
Step2: Next, we construct a tokenizer object, initialized with the total number of terms (the vocabulary size) we want. I then use the training data to find the most frequently used words.
Step3: The tokenizer makes getting the words themselves out oddly difficult, but this will do it for us
Step4: We can now use the tokenizer to construct data matrices that look like the ones pre-supplied by Keras.
Step5: To reconstruct the text, which will have any words not in our vocabulary removed, we can use this function
Step6: Notice that much of the original context is gone given our aggressive filtering, but the main tone (in this case, at least) remains. We would probably want to filter out things like br (line breaks) if we were doing this more carefully.
I. Basic RNN example
Using this new dataset, let's build a plain, vanilla RNN.
Step7: I think it is incredibly important to make sure the shapes of the weights and biases make sense to you. If they do, you probably understand a large part of what is going on.
Step8: Fitting the model works exactly the same as with CNNs or dense neural networks.
Step9: II. LSTM
We can replicate this RNN, but substitute out the SimpleRNN with LSTM. In Keras, this is made (almost too) easy; we just plug in a different layer type.
Step10: The weights in the LSTM layer are quite a bit more complex, with four triples of W, U, and b, one for each gate. Each triple has the same dimensions as the others, however.
Step11: We'll train the model the same as with the SimpleRNN, but the computational time will be significantly higher. The algorithm needs to backpropagate the complex mechanism inside of the LSTM unit through the entire time series, so this does not seem too surprising.
Step12: III. GRU
And, similarly, here is a GRU layer. Again, from the perspective of using it in Keras, it only requires a minor change to the code.
Step13: GRUs have one fewer set of weights (W, U, b).
Step14: IV. Evaluating a sequence of inputs
Now, RNNs can be made to actually output results after every cycle. When the upper levels are trained on just the final one, these can be turned on to track the output of the model through the sequence of text.
Step15: Now that we've trained on the final output, we want to take the same weights as before, but to make SimpleRNN return the entire sequence. The output layer will then return a result of size 100, rather than size 1; this is the result of the algorithm after seeing just the first k terms. The last value will be the same as using model.
To do this, as far as I can tell, one needs to create a new model from scratch and then load the weights from the old model. We have to wrap any layers with learnable weights above the SimpleRNN in the wrapper TimeDistributed, so that it knows to apply the weights separately to the time-components from the prior level.
Step16: Notice that the dimensions of the weights are exactly the same; the input sizes are larger, but with weight sharing we can use the same weight matrices. This is akin to the OverFeat paper for CNNs where a convolution is applied to a larger image; the output's dimensions just increase.
Let's now predict the sequence of values for the entire training set.
Step17: Here is a nice visualization of the progress of the algorithm for various input texts. | Python Code:
%pylab inline
import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys
import os
import xml.etree.ElementTree as ET
from keras.datasets import imdb, reuters
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils
from keras.layers.convolutional import Convolution1D, MaxPooling1D, ZeroPadding1D, AveragePooling1D
from keras.callbacks import EarlyStopping
from keras.layers.normalization import BatchNormalization
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import SimpleRNN, LSTM, GRU
from keras.layers.wrappers import TimeDistributed
from keras.preprocessing.text import Tokenizer
Explanation: Recurrent neural networks
Import various modules that we need for this notebook (now using Keras 1.0.0)
End of explanation
path = "../../../class_data/aclImdb/"
ff = [path + "train/pos/" + x for x in os.listdir(path + "train/pos")] + \
[path + "train/neg/" + x for x in os.listdir(path + "train/neg")] + \
[path + "test/pos/" + x for x in os.listdir(path + "test/pos")] + \
[path + "test/neg/" + x for x in os.listdir(path + "test/neg")]
def remove_tags(text):
return ''.join(ET.fromstring(text).itertext())
input_label = ([1] * 12500 + [0] * 12500) * 2
input_text = []
for f in ff:
with open(f) as fin:
input_text += [remove_tags(" ".join(fin.readlines()))]
Explanation: With the required modules imported, we can now load and prepare the data.
I. Load IMDB
Load the full IMDB dataset from raw text. The first step is to load each text snippet into a Python list.
End of explanation
tok = Tokenizer(500)
tok.fit_on_texts(input_text[:25000])
Explanation: Next, we construct a tokenizer object, initialized with the total number of terms (the vocabulary size) we want. I then use the training data to find the most frequently used words.
End of explanation
words = []
for iter in range(500):
words += [key for key,value in tok.word_index.items() if value==iter+1]
words[:10]
Explanation: The tokenizer makes getting the words themselves out oddly difficult, but this will do it for us:
End of explanation
X_train = tok.texts_to_sequences(input_text[:25000])
X_test = tok.texts_to_sequences(input_text[25000:])
y_train = input_label[:25000]
y_test = input_label[25000:]
X_train = sequence.pad_sequences(X_train, maxlen=100)
X_test = sequence.pad_sequences(X_test, maxlen=100)
Explanation: We can now use the tokenizer to construct data matrices that look like the ones pre-supplied by Keras.
End of explanation
def reconstruct_text(index, words):
text = []
for ind in index:
if ind != 0:
text += [words[ind-1]]
else:
text += [""]
return text
print(input_text[100])
print(reconstruct_text(X_train[100][:40], words))
Explanation: To reconstruct the text, which will have any words not in our vocabulary removed, we can use this function:
End of explanation
model = Sequential()
model.add(Embedding(500, 32, input_length=100))
model.add(Dropout(0.25))
model.add(SimpleRNN(16, return_sequences=False))
model.add(AveragePooling1D(16))
model.add(Flatten())
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Explanation: Notice that much of the original context is gone given our aggressive filtering, but the main tone (in this case, at least) remains. We would probably want to filter out things like br (line breaks) if we were doing this more carefully.
I. Basic RNN example
Using this new dataset, let's build a plain, vanilla RNN.
End of explanation
print(model.layers[2].get_weights()[0].shape) # W - input weights
print(model.layers[2].get_weights()[1].shape) # U - recurrent weights
print(model.layers[2].get_weights()[2].shape) # b - bias
Explanation: I think it is incredibly important to make sure the shapes of the weights and biases make sense to you. If they do, you probably understand a large part of what is going on.
End of explanation
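As a quick check on where those shapes come from (a sketch using the architecture above): the Embedding layer emits 32 features per time step and the SimpleRNN has 16 units, so for the update h_t = tanh(x_t W + h_{t-1} U + b) we expect W to be 32x16, U to be 16x16, and b to have length 16.
W, U, b = model.layers[2].get_weights()
print(W.shape == (32, 16), U.shape == (16, 16), b.shape == (16,))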
model.fit(X_train, y_train, batch_size=32, nb_epoch=10, verbose=1,
validation_data=(X_test, y_test))
Explanation: Fitting the model works exactly the same as with CNNs or dense neural networks.
End of explanation
model = Sequential()
model.add(Embedding(500, 50))
model.add(Dropout(0.25))
model.add(LSTM(32))
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Explanation: II. LSTM
We can replicate this RNN, but substitute out the SimpleRNN with LSTM. In Keras, this is made (almost too) easy; we just plug in a different layer type.
End of explanation
print(model.layers[2].get_weights()[0].shape) # W_i input gate weights
print(model.layers[2].get_weights()[1].shape) # U_i
print(model.layers[2].get_weights()[2].shape) # b_i
print(model.layers[2].get_weights()[3].shape) # W_f forget weights
print(model.layers[2].get_weights()[4].shape) # U_f
print(model.layers[2].get_weights()[5].shape) # b_f
print(model.layers[2].get_weights()[6].shape) # W_c cell weights
print(model.layers[2].get_weights()[7].shape) # U_c
print(model.layers[2].get_weights()[8].shape) # b_c
print(model.layers[2].get_weights()[9].shape) # W_o output weights
print(model.layers[2].get_weights()[10].shape) # U_o
print(model.layers[2].get_weights()[11].shape) # b_o
Explanation: The weights in the LSTM layer are quite a bit more complex, with four triples of W, U, and b, one for each of the input, forget, cell, and output gates. Each triple has the same dimensions as the others, however.
End of explanation
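For reference, the four triples map onto the standard LSTM gate equations, written here in the same W/U/b notation as the printed weights (a sketch of the usual formulation):
$$
\begin{aligned}
i_t &= \sigma(x_t W_i + h_{t-1} U_i + b_i), &
f_t &= \sigma(x_t W_f + h_{t-1} U_f + b_f), \\
\tilde{c}_t &= \tanh(x_t W_c + h_{t-1} U_c + b_c), &
o_t &= \sigma(x_t W_o + h_{t-1} U_o + b_o), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, &
h_t &= o_t \odot \tanh(c_t).
\end{aligned}
$$
With Embedding(500, 50) feeding LSTM(32), each W is 50x32, each U is 32x32, and each b has length 32, matching the shapes printed above.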
model.fit(X_train, y_train, batch_size=1, nb_epoch=10, verbose=1,
validation_data=(X_test, y_test))
Explanation: We'll train the model the same as with the SimpleRNN, but the computational time will be significantly higher. The algorithm needs to backpropagate the complex mechanism inside of the LSTM unit through the entire time series, so this does not seem too surprising.
End of explanation
model = Sequential()
model.add(Embedding(500, 32, input_length=100))
model.add(Dropout(0.25))
model.add(GRU(32,activation='relu'))
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Explanation: III. GRU
And, similarly, here is a GRU layer. Again, from the perspective of using it in Keras, it only requires a minor change to the code.
End of explanation
print(model.layers[2].get_weights()[0].shape) # W_z update weights
print(model.layers[2].get_weights()[1].shape) # U_z
print(model.layers[2].get_weights()[2].shape) # b_z
print(model.layers[2].get_weights()[3].shape) # W_r reset weights
print(model.layers[2].get_weights()[4].shape) # U_r
print(model.layers[2].get_weights()[5].shape) # b_r
print(model.layers[2].get_weights()[6].shape) # W_h output weights
print(model.layers[2].get_weights()[7].shape) # U_h
print(model.layers[2].get_weights()[8].shape) # b_h
model.fit(X_train, y_train, batch_size=32, nb_epoch=20, verbose=1,
validation_data=(X_test, y_test))
Explanation: GRUs have one fewer set of weights (W, U, b) than LSTMs.
End of explanation
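The three triples correspond to the update gate, reset gate and candidate state in the standard GRU equations (a sketch; the sign convention for the update gate varies between implementations):
$$
\begin{aligned}
z_t &= \sigma(x_t W_z + h_{t-1} U_z + b_z), \\
r_t &= \sigma(x_t W_r + h_{t-1} U_r + b_r), \\
\tilde{h}_t &= \tanh(x_t W_h + (r_t \odot h_{t-1}) U_h + b_h), \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t.
\end{aligned}
$$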
model = Sequential()
model.add(Embedding(500, 32, input_length=100))
model.add(Dropout(0.25))
model.add(SimpleRNN(16, return_sequences=False))
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=5, verbose=1,
validation_data=(X_test, y_test))
Explanation: IV. Evaluating a sequence of inputs
Now, RNNs can be made to actually output results after every cycle. When the upper levels are trained on just the final one, these can be turned on to track the output of the model through the sequence of text.
End of explanation
model2 = Sequential()
model2.add(Embedding(500, 32, input_length=100))
model2.add(Dropout(0.25))
model2.add(SimpleRNN(16, return_sequences=True))
model2.add(TimeDistributed(Dense(256)))
model2.add(Dropout(0.25))
model2.add(Activation('relu'))
model2.add(TimeDistributed(Dense(1)))
model2.add(Activation('sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model2.set_weights(model.get_weights())
Explanation: Now that we've trained on the final output, we want to take the same weights as before, but to make SimpleRNN return the entire sequence. The output layer will then return a result of size 100, rather than size 1; this is the result of the algorithm after seeing just the first k terms. The last value will be the same as using model.
To do this, as far as I can tell, one needs to create a new model from scratch and then load the weights from the old model. We have to wrap any layers with learnable weights above the SimpleRNN in the wrapper TimeDistributed, so that it knows to apply the weights separately to the time-components from the prior level.
End of explanation
y_hat2 = model2.predict(X_train)
y_hat2.shape
Explanation: Notice that the dimensions of the weights are exactly the same; the input sizes are larger, but with weight sharing we can use the same weight matrices. This is akin to the OverFeat paper for CNNs where a convolution is applied to a larger image; the output's dimensions just increase.
Let's now predict the sequence of values for the entire training set.
End of explanation
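As a quick sanity check (a sketch using the objects defined above), the final time step of model2's sequence output should agree with model's single prediction for the same reviews:
print(np.allclose(model.predict(X_train[:5]), y_hat2[:5, -1, :], atol=1e-5))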
ind = 100
tokens = reconstruct_text(X_train[ind], words)
print(input_text[ind])
plt.figure(figsize=(16, 10))
plt.plot(y_hat2[ind], alpha=0.5)  # predictions for the selected review
for i in range(len(tokens)):
plt.text(i,0.5,tokens[i],rotation=90)
ind = 22000
tokens = reconstruct_text(X_train[ind], words)
print(input_text[ind])
plt.figure(figsize=(16, 10))
plt.plot(y_hat2[ind],alpha=0.5)
for i in range(len(tokens)):
plt.text(i,0.5,tokens[i],rotation=90)
ind = 10000
tokens = reconstruct_text(X_train[ind], words)
print(input_text[ind])
plt.figure(figsize=(16, 10))
plt.plot(y_hat2[ind],alpha=0.5)
for i in range(len(tokens)):
plt.text(i,0.5,tokens[i],rotation=90)
Explanation: Here is a nice visualization of the progress of the algorithm for various input texts.
End of explanation |
3,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
02 - Reverse Time Migration
This notebook is the second in a series of tutorials highlighting various aspects of seismic inversion based on Devito operators. In this second example we aim to highlight the core ideas behind seismic inversion, where we create an image of the subsurface from field recorded data. This tutorial follows on from the modelling tutorial and will reuse the modelling operator and velocity model.
Imaging requirement
Seismic imaging relies on two known parameters
Step1: Computational considerations
Seismic inversion algorithms are generally very computationally demanding and require a large amount of memory to store the forward wavefield. In order to keep this tutorial as lightweight as possible we are using a very simple
velocity model that requires low temporal and spatial resolution. For a more realistic model, a second set of preset parameters for a reduced version of the 2D Marmousi data set [1] is provided below in comments. This can be run to create some more realistic subsurface images. However, this second preset is more computationally demanding and requires a slightly more powerful workstation.
Step2: True and smooth velocity models
First, we create the model data for the "true" model from a given demonstration preset. This model represents the subsurface topology for the purposes of this example and we will later use it to generate our synthetic data readings. We also generate a second model and apply a smoothing filter to it, which represents our initial model for the imaging algorithm. The perturbation between these two models can be thought of as the image we are trying to recover.
Step3: Acquisition geometry
Next we define the positioning and the wave signal of our source, as well as the location of our receivers. To generate the wavelet for our source we require the discretized values of time that we are going to use to model a single "shot",
which again depends on the grid spacing used in our model. For consistency this initial setup will look exactly as in the previous modelling tutorial, although we will vary the position of our source later on during the actual imaging algorithm.
Step4: True and smooth data
We can now generate the shot record (receiver readings) corresponding to our true and initial models. The difference between these two records will be the basis of the imaging procedure.
For this purpose we will use the same forward modelling operator that was introduced in the previous tutorial, provided by the AcousticWaveSolver utility class. This object instantiates a set of pre-defined operators according to an initial definition of the acquisition geometry, consisting of source and receiver symbols. The solver object caches the individual operators and provides a slightly more high-level API that allows us to invoke the modelling operators from the initial tutorial in a single line. In the following cells we use this to generate shot data by only specifying the respective model symbol m to use, and the solver will create and return a new Receiver object that represents the readings at the previously defined receiver coordinates.
Step5: Imaging with back-propagation
As explained in the introduction of this tutorial, this method is based on back-propagation.
Adjoint wave equation
If we go back to the modelling part, we can rewrite the simulation as a linear system solve
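In the usual notation this takes the form below; this is a sketch of the standard formulation, and the full tutorial spells it out in more detail.
$$
\mathbf{A}(\mathbf{m})\, \mathbf{u} = \mathbf{q}_s, \qquad \mathbf{A}(\mathbf{m})^T \mathbf{v} = \delta \mathbf{d},
$$
where $\mathbf{m}$ is the squared slowness, $\mathbf{u}$ the forward wavefield driven by the source $\mathbf{q}_s$, and $\mathbf{v}$ the adjoint wavefield driven (in reversed time) by the data residual $\delta \mathbf{d}$. The image is then formed by the zero-lag cross-correlation of the two wavefields, $\text{Image} = -\sum_t \mathbf{u}[t]\, \mathbf{v}[t]$, up to sign and scaling conventions.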
Step6: Implementation of the imaging loop
As just explained, the forward wave-equation is solved forward in time while the adjoint wave-equation is solved in a reversed time order. Therefore, the correlation of these two fields over time requires to store one of the two fields. The computational procedure for imaging follows | Python Code:
import numpy as np
%matplotlib inline
from devito import configuration
configuration['log-level'] = 'WARNING'
Explanation: 02 - Reverse Time Migration
This notebook is the second in a series of tutorials highlighting various aspects of seismic inversion based on Devito operators. In this second example we aim to highlight the core ideas behind seismic inversion, where we create an image of the subsurface from field recorded data. This tutorial follows on from the modelling tutorial and will reuse the modelling operator and velocity model.
Imaging requirement
Seismic imaging relies on two known parameters:
Field data - also called recorded data. This is a shot record corresponding to the true velocity model. In practice this data is acquired as described in the first tutorial. In order to simplify this tutorial we will generate synthetic field data by modelling it with the true velocity model.
Background velocity model. This is a velocity model that has been obtained by processing and inverting the field data. We will look at these methods in the following tutorial, as they rely on the method we are describing here. This velocity model is usually a smooth version of the true velocity model.
Imaging computational setup
In this tutorial, we will introduce the back-propagation operator. This operator simulates the adjoint wave-equation, that is a wave-equation solved in a reversed time order. This time reversal led to the naming of the method we present here, called Reverse Time Migration. The notion of adjoint in exploration geophysics is fundamental as most of the wave-equation based imaging and inversion methods rely on adjoint based optimization methods.
Notes on the operators
As we have already described the creation of a forward modelling operator, we will use a thin wrapper function instead. This wrapper is provided by a utility class called AcousticWaveSolver, which provides all the necessary operators for seismic modeling, imaging and inversion. The AcousticWaveSolver provides a more concise API for common wave propagation operators and caches the Devito Operator objects to avoid unnecessary recompilation. Operators introduced for the first time in this tutorial will be properly described.
As before we initialize printing and import some utilities. We also raise the Devito log level to avoid excessive logging for repeated operator invocations.
End of explanation
# Configure model presets
from examples.seismic import demo_model
# Enable model presets here:
preset = 'layers-isotropic' # A simple but cheap model (recommended)
# preset = 'marmousi2d-isotropic' # A larger more realistic model
# Standard preset with a simple two-layer model
if preset == 'layers-isotropic':
def create_model(grid=None):
return demo_model('layers-isotropic', origin=(0., 0.), shape=(101, 101),
spacing=(10., 10.), nbl=20, grid=grid, nlayers=2)
filter_sigma = (1, 1)
nshots = 21
nreceivers = 101
t0 = 0.
tn = 1000. # Simulation last 1 second (1000 ms)
f0 = 0.010 # Source peak frequency is 10Hz (0.010 kHz)
# A more computationally demanding preset based on the 2D Marmousi model
if preset == 'marmousi2d-isotropic':
def create_model(grid=None):
return demo_model('marmousi2d-isotropic', data_path='../../../../data/',
grid=grid, nbl=20)
filter_sigma = (6, 6)
    nshots = 301 # Need good coverage in shots, one every two grid points
    nreceivers = 601 # One receiver every grid point
t0 = 0.
tn = 3500. # Simulation last 3.5 second (3500 ms)
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
Explanation: Computational considerations
Seismic inversion algorithms are generally very computationally demanding and require a large amount of memory to store the forward wavefield. In order to keep this tutorial as lightweight as possible we are using a very simple
velocity model that requires low temporal and spatial resolution. For a more realistic model, a second set of preset parameters for a reduced version of the 2D Marmousi data set [1] is provided below in comments. This can be run to create some more realistic subsurface images. However, this second preset is more computationally demanding and requires a slightly more powerful workstation.
End of explanation
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_velocity, plot_perturbation
from devito import gaussian_smooth
# Create true model from a preset
model = create_model()
# Create initial model and smooth the boundaries
model0 = create_model(grid=model.grid)
gaussian_smooth(model0.vp, sigma=filter_sigma)
# Plot the true and initial model and the perturbation between them
plot_velocity(model)
plot_velocity(model0)
plot_perturbation(model0, model)
Explanation: True and smooth velocity models
First, we create the model data for the "true" model from a given demonstration preset. This model represents the subsurface topology for the purposes of this example and we will later use it to generate our synthetic data readings. We also generate a second model and apply a smoothing filter to it, which represents our initial model for the imaging algorithm. The perturbation between these two models can be thought of as the image we are trying to recover.
End of explanation
#NBVAL_IGNORE_OUTPUT
# Define acquisition geometry: source
from examples.seismic import AcquisitionGeometry
# First, position source centrally in all dimensions, then set depth
src_coordinates = np.empty((1, 2))
src_coordinates[0, :] = np.array(model.domain_size) * .5
src_coordinates[0, -1] = 20. # Depth is 20m
# Define acquisition geometry: receivers
# Initialize receivers for synthetic and imaging data
rec_coordinates = np.empty((nreceivers, 2))
rec_coordinates[:, 0] = np.linspace(0, model.domain_size[0], num=nreceivers)
rec_coordinates[:, 1] = 30.
# Geometry
geometry = AcquisitionGeometry(model, rec_coordinates, src_coordinates, t0, tn, f0=.010, src_type='Ricker')
# We can plot the time signature to see the wavelet
geometry.src.show()
Explanation: Acquisition geometry
Next we define the positioning and the wave signal of our source, as well as the location of our receivers. To generate the wavelet for our source we require the discretized values of time that we are going to use to model a single "shot",
which again depends on the grid spacing used in our model. For consistency this initial setup will look exactly as in the previous modelling tutorial, although we will vary the position of our source later on during the actual imaging algorithm.
End of explanation
# Compute synthetic data with forward operator
from examples.seismic.acoustic import AcousticWaveSolver
solver = AcousticWaveSolver(model, geometry, space_order=4)
true_d , _, _ = solver.forward(vp=model.vp)
# Compute initial data with forward operator
smooth_d, _, _ = solver.forward(vp=model0.vp)
#NBVAL_IGNORE_OUTPUT
# Plot shot record for true and smooth velocity model and the difference
from examples.seismic import plot_shotrecord
plot_shotrecord(true_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data - true_d.data, model, t0, tn)
Explanation: True and smooth data
We can now generate the shot record (receiver readings) corresponding to our true and initial models. The difference between these two records will be the basis of the imaging procedure.
For this purpose we will use the same forward modelling operator that was introduced in the previous tutorial, provided by the AcousticWaveSolver utility class. This object instantiates a set of pre-defined operators according to an initial definition of the acquisition geometry, consisting of source and receiver symbols. The solver object caches the individual operators and provides a slightly higher-level API that allows us to invoke the modelling operators from the initial tutorial in a single line. In the following cells we use this to generate shot data by only specifying the respective model symbol m to use, and the solver will create and return a new Receiver object that represents the readings at the previously defined receiver coordinates.
End of explanation
# Define gradient operator for imaging
from devito import TimeFunction, Operator, Eq, solve
from examples.seismic import PointSource
def ImagingOperator(model, image):
# Define the wavefield with the size of the model and the time dimension
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
u = TimeFunction(name='u', grid=model.grid, time_order=2, space_order=4,
save=geometry.nt)
# Define the wave equation, but with a negated damping term
eqn = model.m * v.dt2 - v.laplace + model.damp * v.dt.T
# Use `solve` to rearrange the equation into a stencil expression
stencil = Eq(v.backward, solve(eqn, v.backward))
# Define residual injection at the location of the forward receivers
dt = model.critical_dt
residual = PointSource(name='residual', grid=model.grid,
time_range=geometry.time_axis,
coordinates=geometry.rec_positions)
res_term = residual.inject(field=v.backward, expr=residual * dt**2 / model.m)
# Correlate u and v for the current time step and add it to the image
image_update = Eq(image, image - u * v)
return Operator([stencil] + res_term + [image_update],
subs=model.spacing_map)
Explanation: Imaging with back-propagation
As explained in the introduction of this tutorial, this method is based on back-propagation.
Adjoint wave equation
If we go back to the modelling part, we can rewrite the simulation as a linear system solve:
\begin{equation}
\mathbf{A}(\mathbf{m}) \mathbf{u} = \mathbf{q}
\end{equation}
where $\mathbf{m}$ is the discretized square slowness, $\mathbf{q}$ is the discretized source and $\mathbf{A}(\mathbf{m})$ is the discretized wave-equation. The matrix representation of the discretized wave-equation is a lower triangular matrix that can be solved with forward substitution. The pointwise writing of the forward substitution leads to the time-stepping stencil.
On a small problem one could form the matrix explicitly and transpose it to obtain the adjoint discrete wave-equation:
\begin{equation}
\mathbf{A}(\mathbf{m})^T \mathbf{v} = \delta \mathbf{d}
\end{equation}
where $\mathbf{v}$ is the discrete adjoint wavefield and $\delta \mathbf{d}$ is the data residual defined as the difference between the field/observed data and the synthetic data $\mathbf{d}_s = \mathbf{P}_r \mathbf{u}$. In our case we derive the discrete adjoint wave-equation from the discrete forward wave-equation to get its stencil.
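As a toy illustration of this time reversal (the arrays below are random stand-ins, not actual discretized wave-equation matrices), the forward substitution for a lower triangular system marches forward in time, while the substitution for the transposed system runs backwards:
import numpy as np
nt = 5
A = np.tril(np.random.rand(nt, nt)) + np.eye(nt)   # stand-in for A(m), lower triangular
q = np.random.rand(nt)                             # stand-in for the source
d_res = np.random.rand(nt)                         # stand-in for the data residual
u = np.zeros(nt)
for t in range(nt):                                # forward in time
    u[t] = (q[t] - A[t, :t] @ u[:t]) / A[t, t]
v = np.zeros(nt)
for t in reversed(range(nt)):                      # reversed time order
    v[t] = (d_res[t] - A[t+1:, t] @ v[t+1:]) / A[t, t]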
Imaging
Wave-equation based imaging relies on one simple concept:
If the background velocity model is kinematically correct, the forward wavefield $\mathbf{u}$ and the adjoint wavefield $\mathbf{v}$ meet at the reflectors' positions at zero time offset.
The sum over time of the zero time-offset correlation of these two fields then creates an image of the subsurface. Mathematically this leads to the simple imaging condition:
\begin{equation}
\text{Image} = \sum_{t=1}^{n_t} \mathbf{u}[t] \mathbf{v}[t]
\end{equation}
In the following tutorials we will describe a more advanced imaging condition that produces sharper and more accurate results.
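Schematically, if the forward and adjoint wavefields were stored as NumPy arrays u and v of shape (nt, nx, nz) (hypothetical names), the imaging condition would simply read:
image = np.zeros((nx, nz))
for t in range(nt):
    image += u[t] * v[t]
The Devito imaging operator defined in this tutorial accumulates -u*v instead, so the image it produces only differs by a sign convention.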
Operator
We will now define the imaging operator that computes the adjoint wavefield $\mathbf{v}$ and correlates it with the forward wavefield $\mathbf{u}$. This operator essentially consists of three components:
* Stencil update of the adjoint wavefield v
* Injection of the data residual at the adjoint source (forward receiver) location
* Correlation of u and v to compute the image contribution at each timestep
End of explanation
#NBVAL_IGNORE_OUTPUT
# Prepare the varying source locations
source_locations = np.empty((nshots, 2), dtype=np.float32)
source_locations[:, 0] = np.linspace(0., 1000, num=nshots)
source_locations[:, 1] = 30.
plot_velocity(model, source=source_locations)
# Run imaging loop over shots
from devito import Function
# Create image symbol and instantiate the previously defined imaging operator
image = Function(name='image', grid=model.grid)
op_imaging = ImagingOperator(model, image)
for i in range(nshots):
print('Imaging source %d out of %d' % (i+1, nshots))
# Update source location
geometry.src_positions[0, :] = source_locations[i, :]
# Generate synthetic data from true model
true_d, _, _ = solver.forward(vp=model.vp)
# Compute smooth data and full forward wavefield u0
smooth_d, u0, _ = solver.forward(vp=model0.vp, save=True)
# Compute gradient from the data residual
v = TimeFunction(name='v', grid=model.grid, time_order=2, space_order=4)
residual = smooth_d.data - true_d.data
op_imaging(u=u0, v=v, vp=model0.vp, dt=model0.critical_dt,
residual=residual)
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_image
# Plot the inverted image
plot_image(np.diff(image.data, axis=1))
from devito import norm
assert np.isclose(norm(image), 1e7, rtol=1e1)
Explanation: Implementation of the imaging loop
As just explained, the forward wave-equation is solved forward in time while the adjoint wave-equation is solved in a reversed time order. Therefore, the correlation of these two fields over time requires storing one of the two fields. The computational procedure for imaging follows:
Simulate the forward wave-equation with the background velocity model to get the synthetic data and save the full wavefield $\mathbf{u}$
Compute the data residual
Back-propagate the data residual and compute on the fly the image contribution at each time step.
This procedure is applied to multiple source positions (shots) and summed to obtain the full image of the subsurface. We can first visualize the varying locations of the sources that we will use.
End of explanation |
3,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Set up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step11: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard
Step12: Tutorial
Now you are ready to start creating your own AutoML tabular classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step13: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do differently? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data, as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config"
Step14: Quick peek at your data
You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
Step15: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step16: Now save the unique dataset identifier for the Dataset resource instance you created.
Step17: Train the model
Now train an AutoML tabular classification model using your Vertex Dataset resource. To train the model, do the following steps
Step18: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step19: Now save the unique identifier of the training pipeline you created.
Step20: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter
Step21: Deployment
Training the above model may take upwards of 30 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step22: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step23: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step24: Now get the unique identifier for the Endpoint resource you created.
Step25: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step26: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the model to the endpoint you created for serving predictions, with the following parameters
Step27: Make an online prediction request with explainability
Now do an online prediction with explainability to your deployed model. In this method, the predicted response will include an explanation of how the features contributed to the prediction.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
Step28: Make a prediction with explanation
Ok, now you have a test item. Use this helper function explain_item, which takes the following parameters
Step29: Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
Step30: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
Step31: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
Step32: Sanity check
In the function below you perform a sanity check on the explanations.
Step33: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters
Step34: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML tabular classification model for online prediction with explanation
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular classification models and do online prediction with explanation using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
Objective
In this tutorial, you create an AutoML tabular classification model and deploy it for online prediction with explainability from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction with explainability.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
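For example (the path below is only a placeholder for your own key file):
%env GOOGLE_APPLICATION_CREDENTIALS /path/to/your/service-account-key.json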
End of explanation
import time
import google.cloud.aiplatform_v1beta1 as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Set up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
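For example, a CPU-only configuration would be:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)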
End of explanation
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
IMPORT_FILE = "gs://cloud-samples-data/tables/iris_1000.csv"
Explanation: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do differently? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data, as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}
The format for a Cloud Storage path is:
gs://[bucket_name]/[folder(s)]/[file]
BigQuery
metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}
The format for a BigQuery path is:
bq://[collection].[dataset].[table]
Note that the uri field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files.
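For example, if your data were split across multiple CSV shards (the bucket and file names here are hypothetical):
metadata = {"input_config": {"gcs_source": {"uri": ["gs://my-bucket/data-1.csv", "gs://my-bucket/data-2.csv"]}}}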
Data preparation
The Vertex Dataset resource for tabular has a couple of requirements for your tabular data.
Must be in a CSV file or a BigQuery query.
CSV
For tabular classification, the CSV file has a few requirements:
The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.
All but one column are features.
One column is the label, which you will specify when you subsequently create the training pipeline.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
Explanation: Quick peek at your data
You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("iris-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates an Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
metadata: The Cloud Storage or BigQuery location of the tabular data.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
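For example, instead of blocking on result(), you could poll the operation yourself (illustrative only; the helper in this tutorial simply blocks on result() with a timeout):
while not operation.done():
    time.sleep(10)
result = operation.result()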
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML tabular classification model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and ran as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
TRANSFORMATIONS = [
{"auto": {"column_name": "sepal_width"}},
{"auto": {"column_name": "sepal_length"}},
{"auto": {"column_name": "petal_length"}},
{"auto": {"column_name": "petal_width"}},
]
PIPE_NAME = "iris_pipe-" + TIMESTAMP
MODEL_NAME = "iris_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
prediction_type: Whether we are doing "classification" or "regression".
target_column: The CSV heading column name for the column we want to predict (i.e., the label).
train_budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
transformations: Specifies the feature engineering for each feature column.
For transformations, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to "auto" to tell AutoML to automatically determine it.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 30 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (logLoss and auPrc) you will print the result.
End of explanation
ENDPOINT_NAME = "iris_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
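For example, the three choices map onto settings like the following (illustrative values only; the code cell in this tutorial uses a single instance):
# single instance:  MIN_NODES, MAX_NODES = 1, 1
# manual scaling:   MIN_NODES, MAX_NODES = 3, 3
# auto scaling:     MIN_NODES, MAX_NODES = 1, 5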
End of explanation
DEPLOYED_NAME = "iris_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"enable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the model to the endpoint you created for serving predictions, with the following parameters:
model: The Vertex fully qualified identifier of the Model resource to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements for deploying the model.
traffic_split: Percent of traffic at endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model deployed to the endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified identifier of the (upload) Model resource to deploy.
display_name: A human readable name for the deployed model.
dedicated_resources: This refers to how many compute instances (replicas) are scaled for serving prediction requests.
machine_spec: The compute instance to provision. If the variable DEPLOY_GPU you set earlier is not None, a GPU is attached; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
enable_container_logging: This enables logging of container events, such as execution failures (by default, container logging is disabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
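For example, a cautious rollout of v2 next to an already-deployed v1 could look like this (the deployed model id "1234567890" is only a placeholder):
traffic_split = {"0": 10, "1234567890": 90}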
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
INSTANCE = {
"petal_length": "1.4",
"petal_width": "1.3",
"sepal_length": "5.1",
"sepal_width": "2.8",
}
Explanation: Make an online prediction request with explainability
Now do an online prediction with explainability to your deployed model. In this method, the predicted response will include an explanation of how the features contributed to the prediction.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
def explain_item(
data_items, endpoint, parameters_dict, deployed_model_id, silent=False
):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [json_format.ParseDict(s, Value()) for s in data_items]
response = clients["prediction"].explain(
endpoint=endpoint,
instances=instances,
parameters=parameters,
deployed_model_id=deployed_model_id,
)
if silent:
return response
print("response")
print(" deployed_model_id:", response.deployed_model_id)
try:
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
except:
pass
explanations = response.explanations
print("explanations")
for explanation in explanations:
print(explanation)
return response
response = explain_item([INSTANCE], endpoint_id, None, None)
Explanation: Make a prediction with explanation
Ok, now you have a test item. Use this helper function explain_item, which takes the following parameters:
data_items: The test tabular data items.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results -- in your case you will pass None.
This function uses the prediction client service and calls the explain method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, image segmentation models do not support additional parameters.
deployed_model_id: The Vertex fully qualified identifier for the deployed model, when more than one model is deployed at the endpoint. Otherwise, if only one model deployed, can be set to None.
Request
The format of each instance is a dictionary of feature name/value pairs, matching the test item defined above:
{ 'petal_length': value, 'petal_width': value, 'sepal_length': value, 'sepal_width': value }
Since the explain() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the explain() method.
Response
The response object returns a list, where each element in the list corresponds to the corresponding instance in the request. You will see in the output for each prediction -- in this case there is just one:
deployed_model_id -- The Vertex fully qualified identifier for the Model resource that did the prediction/explanation.
predictions -- The predicted class and confidence level between 0 and 1.
confidences: Confidence level in the prediction.
displayNames: The predicted label.
explanations -- How each feature contributed to the prediction.
End of explanation
import numpy as np
try:
predictions = response.predictions
label = np.argmax(predictions[0]["scores"])
cls = predictions[0]["classes"][label]
print("Predicted Value:", cls, predictions[0]["scores"][label])
except:
pass
Explanation: Understanding the explanations response
First, you will look what your model predicted and compare it to the actual value.
End of explanation
from tabulate import tabulate
feature_names = ["petal_length", "petal_width", "sepal_length", "sepal_width"]
attributions = response.explanations[0].attributions[0].feature_attributions
rows = []
for i, val in enumerate(feature_names):
rows.append([val, INSTANCE[val], attributions[val]])
print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"]))
Explanation: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
End of explanation
import random
# Prepare 10 test examples to your model for prediction using a random distribution to generate
# test instances
instances = []
for i in range(10):
pl = str(random.uniform(1.0, 2.0))
pw = str(random.uniform(1.0, 2.0))
sl = str(random.uniform(4.0, 6.0))
sw = str(random.uniform(2.0, 4.0))
instances.append(
{"petal_length": pl, "petal_width": pw, "sepal_length": sl, "sepal_width": sw}
)
response = explain_item(instances, endpoint_id, None, None, silent=True)
Explanation: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
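As a quick illustration of that property (a sketch, not part of the original notebook), you could sum the attribution values of one explanation and add the baseline; the result should land near the predicted score. This reuses the feature_names list and the response object used elsewhere in this tutorial.
# Rough check: baseline + sum(attributions) should be close to the predicted score
expl = response.explanations[0].attributions[0]
attr_total = sum(expl.feature_attributions[name] for name in feature_names)
print("baseline + attributions:", expl.baseline_output_value + attr_total)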
Get explanations
End of explanation
def sanity_check_explanations(
explanation, prediction, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
baseline_score = explanation.attributions[0].baseline_output_value
print("baseline:", baseline_score)
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(prediction - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
print("Sanity Check 1: Passed")
print(passed_test, " out of ", total_test, " sanity checks passed.")
i = 0
for explanation in response.explanations:
try:
prediction = np.max(response.predictions[i]["scores"])
except TypeError:
prediction = np.max(response.predictions[i])
sanity_check_explanations(explanation, prediction)
i += 1
Explanation: Sanity check
In the function below you perform a sanity check on the explanations.
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
3,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas
Step1: get_dummies converts a categorical variable into indicator variables, i.e. 1 or 0.
Step2: Join this new data frame with the original, row for row
Step3: We can accomplish something similar with concat
Step4: Let's combine multiple data sources.
Step5: Pivot Tables
Wikipedia
Step6: cut allows us to turn a column with continuous data into categoricals by specifying bins to place them in.
Step7: Get the mean residual sugar for each quality category/fixed acidity pair using a pivot_table. mean is the default aggregation function.
Step8: Change the aggregation function to max
Step9: Change the aggregation function to min | Python Code:
df = pandas.read_csv('data/red_wine.csv', delimiter=';', parse_dates='time')
df.head()
df['quality'].unique()
Explanation: Pandas: Combining Datasets
Pandas documentation: Merging
Pandas allows us to combine two sets of data using merge, join, and concat; a quick generic sketch of the difference follows.
End of explanation
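A minimal sketch on toy frames (not the wine data) of how the three differ:
left = pandas.DataFrame({'key': [1, 2], 'a': ['x', 'y']})
right = pandas.DataFrame({'key': [1, 2], 'b': ['u', 'v']})
pandas.merge(left, right, on='key')              # SQL-style join on a shared column
left.join(right, lsuffix='_l', rsuffix='_r')     # join aligns on the index
pandas.concat([left, right], axis=1)             # concat simply stacks side by side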
quality_dummies = pandas.get_dummies(df['quality'], prefix='quality')
quality_dummies.head()
Explanation: get_dummies converts a categorical variable into indicator variables, i.e. 1 or 0.
End of explanation
joined_df = df.join(quality_dummies)
joined_df.head()
Explanation: Join this new data frame with the original, row for row:
End of explanation
joined_df2 = pandas.concat([quality_dummies, df], axis=1)
joined_df2.head()
Explanation: We can accomplish something similar with concat:
End of explanation
red_wines_df = pandas.read_csv('data/red_wine.csv', delimiter=';')
white_wines_df = pandas.read_csv('data/white_wine.csv', delimiter=';')
red_wines_quality_df = red_wines_df.groupby('quality').mean()['fixed acidity'].reset_index()
red_wines_quality_df
white_wines_quality_df = white_wines_df.groupby('quality').mean()['fixed acidity'].reset_index()
white_wines_quality_df
pandas.merge(red_wines_quality_df, white_wines_quality_df, on=['quality'], suffixes=[' red', ' white'])
Explanation: Let's combine multiple data sources.
End of explanation
red_wines_df['fixed acidity'].plot.hist()
Explanation: Pivot Tables
Wikipedia: Pivot Table
Pandas documentation: Reshaping and Pivot Tables
Let's take another look at the fixed acidity column.
End of explanation
fixed_acidity_class = pandas.cut(red_wines_df['fixed acidity'], bins=range(4, 17), labels=range(4, 16))
fixed_acidity_class.head(20)
fixed_acidity_class.name = 'fa_class'
red_wines_df = pandas.concat([red_wines_df, fixed_acidity_class], axis=1)
red_wines_df.head()
Explanation: cut allows us to turn a column with continuous data into categoricals by specifying bins to place them in.
End of explanation
pandas.pivot_table(red_wines_df, values='residual sugar', index='quality', columns='fa_class')
Explanation: Get the mean residual sugar for each quality category/fixed acidity pair using a pivot_table. mean is the default aggregation function.
End of explanation
pandas.pivot_table(red_wines_df, values='residual sugar', index='quality',
columns='fa_class', aggfunc=max)
Explanation: Change the aggregation function to max:
End of explanation
pandas.pivot_table(red_wines_df, values='residual sugar', index='quality',
columns='fa_class', aggfunc=min)
Explanation: Change the aggregation function to min:
End of explanation |
3,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations
Step1: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with zero and proceeding to normalize the data that will be fed into our model. We now incorporate the Imputation from Paolo Bestagini via LA_Team's Submission 5.
Step2: Split data into training data and blind data, and output as Numpy arrays
Step3: Data Augmentation
We expand the input data to be acted on by the convolutional layer.
Step4: Convolutional Neural Network
We build a CNN with the following layers (no longer using Sequential() model)
Step5: We train the CNN and evaluate it on precision/recall.
Step6: We display the learned 1D convolution kernels
Step7: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
Step8: Prediction
To predict the STUART and CRAWFORD blind wells we do the following
Step9: Run the model on the blind data
Output a CSV
Plot the wells in the notebook | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install keras
pip install sklearn
from __future__ import print_function
import time
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from keras.preprocessing import sequence
from keras.models import Model, Sequential
from keras.constraints import maxnorm, nonneg
from keras.optimizers import SGD, Adam, Adamax, Nadam
from keras.regularizers import l2, activity_l2
from keras.layers import Input, Dense, Dropout, Activation, Convolution1D, Cropping1D, Cropping2D, Permute, Flatten, MaxPooling1D, merge
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
Explanation: Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations:
- Inserting a convolutional layer as the first layer in the Neural Network
- Initializing the weights of this layer to detect gradients and extrema
- Adding Dropout regularization to prevent overfitting
Since our submission #2 we have:
- Added the distance to the next NM_M transition as a feature (thanks to geoLEARN where we spotted this)
- Removed Recruit F9 from training
Problem Modeling
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Setup
Check we have all the libraries we need, and import the modules we require. Note that we have used the Theano backend for Keras, and to achieve a reasonable training time we have used an NVidia K20 GPU.
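If you need to pin the backend yourself, one common approach (a generic sketch, not part of the original setup) is to set the KERAS_BACKEND environment variable before Keras is imported:
import os
os.environ['KERAS_BACKEND'] = 'theano'  # must be set before `import keras` for it to take effect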
End of explanation
data = pd.read_csv('train_test_data.csv')
# Set 'Well Name' and 'Formation' fields as categories
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Parameters
feature_names = ['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
well_names_test = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'Recruit F9', 'NEWBY', 'CHURCHMAN BIBLE']
well_names_validate = ['STUART', 'CRAWFORD']
data_vectors = data[feature_names].values
correct_facies_labels = data['Facies'].values
nm_m = data['NM_M'].values
nm_m_dist = np.zeros((nm_m.shape[0],1), dtype=int)
for i in range(nm_m.shape[0]):
count=1
while (i+count<nm_m.shape[0]-1 and nm_m[i+count] == nm_m[i]):
count = count+1
nm_m_dist[i] = count
nm_m_dist.reshape(nm_m_dist.shape[0],1)
well_labels = data[['Well Name', 'Facies']].values
depth = data['Depth'].values
# Fill missing values and normalize for 'PE' field
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(data_vectors)
data_vectors = imp.transform(data_vectors)
data_vectors = np.hstack([data_vectors, nm_m_dist])
scaler = preprocessing.StandardScaler().fit(data_vectors)
scaled_features = scaler.transform(data_vectors)
data_out = np.hstack([well_labels, scaled_features])
Explanation: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with zero and proceeding to normalize the data that will be fed into our model. We now incorporate the Imputation from Paolo Bestagini via LA_Team's Submission 5.
End of explanation
def preprocess(data_out):
data = data_out
X = data[0:4149,0:11]
y = np.concatenate((data[0:4149,0].reshape(4149,1), np_utils.to_categorical(correct_facies_labels[0:4149]-1)), axis=1)
X_test = data[4149:,0:11]
return X, y, X_test
X_train_in, y_train, X_test_in = preprocess(data_out)
Explanation: Split data into training data and blind data, and output as Numpy arrays
End of explanation
conv_domain = 11
# Reproducibility
np.random.seed(7)
# Load data
def expand_dims(input):
r = int((conv_domain-1)/2)
l = input.shape[0]
n_input_vars = input.shape[1]
output = np.zeros((l, conv_domain, n_input_vars))
for i in range(l):
for j in range(conv_domain):
for k in range(n_input_vars):
output[i,j,k] = input[min(i+j-r,l-1),k]
return output
X_train = np.empty((0,conv_domain,9), dtype=float)
X_test = np.empty((0,conv_domain,9), dtype=float)
y_select = np.empty((0,9), dtype=int)
well_names_train = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'NEWBY', 'CHURCHMAN BIBLE']
for wellId in well_names_train:
X_train_subset = X_train_in[X_train_in[:, 0] == wellId][:,2:11]
X_train_subset = expand_dims(X_train_subset)
X_train = np.concatenate((X_train,X_train_subset),axis=0)
y_select = np.concatenate((y_select, y_train[y_train[:, 0] == wellId][:,1:10]), axis=0)
for wellId in well_names_validate:
X_test_subset = X_test_in[X_test_in[:, 0] == wellId][:,2:11]
X_test_subset = expand_dims(X_test_subset)
X_test = np.concatenate((X_test,X_test_subset),axis=0)
y_train = y_select
print(X_train.shape)
print(X_test.shape)
print(y_select.shape)
Explanation: Data Augmentation
We expand the input data to be acted on by the convolutional layer.
End of explanation
# Set parameters
input_dim = 9
output_dim = 9
n_per_batch = 128
epochs = 100
crop_factor = int(conv_domain/2)
filters_per_log = 11
n_convolutions = input_dim*filters_per_log
starting_weights = [np.zeros((conv_domain, 1, input_dim, n_convolutions)), np.ones((n_convolutions))]
norm_factor=float(conv_domain)*2.0
for i in range(input_dim):
for j in range(conv_domain):
starting_weights[0][j, 0, i, i*filters_per_log+0] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+1] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+2] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+3] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+4] = (2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+5] = (conv_domain-2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+6] = 0.25
starting_weights[0][j, 0, i, i*filters_per_log+7] = 0.5 if (j%2 == 0) else 0.25
starting_weights[0][j, 0, i, i*filters_per_log+8] = 0.25 if (j%2 == 0) else 0.5
starting_weights[0][j, 0, i, i*filters_per_log+9] = 0.5 if (j%4 == 0) else 0.25
starting_weights[0][j, 0, i, i*filters_per_log+10] = 0.25 if (j%4 == 0) else 0.5
def dnn_model(init_dropout_rate=0.375, main_dropout_rate=0.5,
hidden_dim_1=20, hidden_dim_2=32,
max_norm=10, nb_conv=n_convolutions):
# Define the model
inputs = Input(shape=(conv_domain,input_dim,))
inputs_dropout = Dropout(init_dropout_rate)(inputs)
x1 = Convolution1D(nb_conv, conv_domain, border_mode='valid', weights=starting_weights, activation='tanh', input_shape=(conv_domain,input_dim), input_length=input_dim, W_constraint=nonneg())(inputs_dropout)
x1 = Flatten()(x1)
xn = Cropping1D(cropping=(crop_factor,crop_factor))(inputs_dropout)
xn = Flatten()(xn)
xA = merge([x1, xn], mode='concat')
xA = Dropout(main_dropout_rate)(xA)
xA = Dense(hidden_dim_1, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(xA)
x = merge([xA, xn], mode='concat')
x = Dropout(main_dropout_rate)(x)
x = Dense(hidden_dim_2, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(x)
predictions = Dense(output_dim, init='uniform', activation='softmax')(x)
model = Model(input=inputs, output=predictions)
optimizerNadam = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)
model.compile(loss='categorical_crossentropy', optimizer=optimizerNadam, metrics=['accuracy'])
return model
# Load the model
t0 = time.time()
model_dnn = dnn_model()
model_dnn.summary()
t1 = time.time()
print("Load time = %d" % (t1-t0) )
def plot_weights():
layerID=2
print(model_dnn.layers[layerID].get_weights()[0].shape)
print(model_dnn.layers[layerID].get_weights()[1].shape)
fig, ax = plt.subplots(figsize=(12,10))
for i in range(9):
plt.subplot(911+i)
plt.imshow(model_dnn.layers[layerID].get_weights()[0][:,0,i,:], interpolation='none')
plt.show()
plot_weights()
Explanation: Convolutional Neural Network
We build a CNN with the following layers (no longer using Sequential() model):
Dropout layer on input
One 1D convolutional layer (11 samples wide, set by conv_domain)
One 1D cropping layer (just take actual log-value of interest)
Series of Merge layers re-adding result of cropping layer plus Dropout & Fully-Connected layers
Instead of running CNN with gradient features added, we initialize the Convolutional layer weights to achieve this
This allows the CNN to reject them, adjust them or turn them into something else if required
End of explanation
#Train model
t0 = time.time()
model_dnn.fit(X_train, y_train, batch_size=n_per_batch, nb_epoch=epochs, verbose=0)
t1 = time.time()
print("Train time = %d seconds" % (t1-t0) )
# Predict Values on Training set
t0 = time.time()
y_predicted = model_dnn.predict( X_train , batch_size=n_per_batch, verbose=2)
t1 = time.time()
print("Test time = %d seconds" % (t1-t0) )
# Print Report
# Format output [0 - 8 ]
y_ = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_[i] = np.argmax(y_train[i])
y_predicted_ = np.zeros((len(y_predicted), 1))
for i in range(len(y_predicted)):
y_predicted_[i] = np.argmax( y_predicted[i] )
# Confusion Matrix
conf = confusion_matrix(y_, y_predicted_)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
# Print Results
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("\nConfusion Matrix")
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
Explanation: We train the CNN and evaluate it on precision/recall.
End of explanation
plot_weights()
Explanation: We display the learned 1D convolution kernels
End of explanation
# Cross Validation
def cross_validate():
t0 = time.time()
estimator = KerasClassifier(build_fn=dnn_model, nb_epoch=epochs, batch_size=n_per_batch, verbose=0)
skf = StratifiedKFold(n_splits=5, shuffle=True)
results_dnn = cross_val_score(estimator, X_train, y_train, cv= skf.get_n_splits(X_train, y_train))
t1 = time.time()
print("Cross Validation time = %d" % (t1-t0) )
print(' Cross Validation Results')
print( results_dnn )
print(np.mean(results_dnn))
cross_validate()
Explanation: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
End of explanation
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: Prediction
To predict the STUART and CRAWFORD blind wells we do the following:
Set up a plotting function to display the logs & facies.
End of explanation
# DNN model Prediction
y_test = model_dnn.predict( X_test , batch_size=n_per_batch, verbose=0)
predictions_dnn = np.zeros((len(y_test),1))
for i in range(len(y_test)):
predictions_dnn[i] = np.argmax(y_test[i]) + 1
predictions_dnn = predictions_dnn.astype(int)
# Store results
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data['Facies'] = predictions_dnn
test_data.to_csv('Prediction_StoDIG_3.csv')
for wellId in well_names_validate:
make_facies_log_plot( test_data[test_data['Well Name'] == wellId], facies_colors=facies_colors)
Explanation: Run the model on the blind data
Output a CSV
Plot the wells in the notebook
End of explanation |
3,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 3 - Training process and learning rate
In this chapter we will clean up our code and create a logistic classifier class that works much like many modern deep learning libraries do. We will also have a closer look at our first hyper parameter, the learning rate alpha.
Step1: The regressor class
Let's jump straight into the code. In this chapter, we will create a python class for our logistic regressor. If you are unfamiliar with classes in python, check out Jeff Knup's blogpost for a nice overview. Read the code below carefully; we will deconstruct the different functions afterwards
Step2: Using the regressor
To use the regressor, we define an instance of the class and can then train it. Here we will use the same data as in chapter 2.
Step3: Revisiting the training process
As you can see, our classifier still works! We have improved modularity and created an easier to debug classifier. Let's have a look at its overall structure. As you can see, we make use of three dictionaries
Step4: Looking at the data we see that it is possible to separate the two clouds quite well, but there is a lot of noise so we can not hope to achieve zero loss. But we can get close to it. Let's set up a regressor. Here we will use a learning rate of 10, which is quite high.
Step5: You will probably even get an error message mentioning an overflow and it doesn't look like the regressor converged smoothly. This was a bumpy ride.
Step6: As you can see, the loss first went up quite significantly before then coming down. At multiple instances it moves up again. This is a clear sign that the learning rate is too large, let's try a lower one
Step7: This looks a bit smoother already, and you can see that the error is nearly ten times lower in the end. Let's try an even lower learning rate to see where we can take this.
Step8: This is a very smooth gradient descent but also a very slow one. The error is more than twice as high as before in the end. If we would let this run for a few more epochs we probably could achieve a very good model but at a very large computing expense.
How to find a good value for the learning rate
A good learning rate converges fast and leads to low loss. But there is no silver bullet perfect learning rate that always works. It usually depends on your project. It is as much art as it is science to tune the learning rate and only repeated experimentation can lead you to a good result. Experience shows however, that a good learning rate is usually around 0.1, even though it can well be different for other projects.
To practice tuning the learning rate, play around with the example below and see whether you can find an appropriate one that converges fast and at a low loss.
Step9: Visualizing our regressor
In the last part of this chapter, I would like to give a closer look at what our regressor actually does. To do so, we will plot the decision boundary, that is the boundary the regressor assigns between the two classes.
Step10: To plot the boundary, we train a new regressor first.
Step11: And then we plot the boundary. Again, do not worry if you do not understand exactly what is going on here, as it is not part of the class.
Step12: As you can see, our logistic regressor separates the two clouds with a simple line. This is appropriate for this case but might fail when the boundary is a more complex function. Let's try out a more complex function. | Python Code:
# Numpy handles matrix multiplication, see http://www.numpy.org/
import numpy as np
# PyPlot is a matlab like plotting framework, see https://matplotlib.org/api/pyplot_api.html
import matplotlib.pyplot as plt
# This line makes it easier to plot PyPlot graphs in Jupyter Notebooks
%matplotlib inline
import sklearn
import sklearn.datasets
import matplotlib
# Slightly larger plot rendering
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
Explanation: Chapter 3 - Training process and learning rate
In this chapter we will clean up our code and create a logistic classifier class that works much like many modern deep learning libraries do. We will also have a closer look at our first hyper parameter, the learning rate alpha.
End of explanation
class LogisticRegressor:
# Here we are just setting up some placeholder variables
# This is the dimensionality of our input, that is how many features our input has
input_dim = 0
# This is the learning rate alpha
learning_rate = 0.1
# We will store the parameters of our model in a dictionary
model = {}
# The values calculated in the forward propagation will be stored in this dictionary
cache = {}
# The gradients that we calculate during back propagation will be stored in a dictionary
grads = {}
# Init function of the class
def __init__(self,input_dim, learning_rate):
'''
Assigns the given hyper parameters and initializes the initial parameters.
'''
# Assign input dimensionality
self.input_dim = input_dim
# Assign learning rate
self.learning_rate = learning_rate
# Trigger parameter setup
self.init_parameters()
# Parameter setup function
def init_parameters(self):
'''
Initializes weights with random number between -1 and 1
Initializes bias with 0
Assigns weights and parameters to model
'''
# Randomly init weights
W1 = 2*np.random.random((self.input_dim,1)) - 1
# Set bias to 0
b1 = 0
# Assign to model
self.model = {'W1':W1,'b1':b1}
return
# Sigmoid function
def sigmoid(self,x):
'''
Calculates the sigmoid activation of a given input x
See: https://en.wikipedia.org/wiki/Sigmoid_function
'''
return 1/(1+np.exp(-x))
#Log Loss function
def log_loss(self,y,y_hat):
'''
Calculates the logistic loss between a prediction y_hat and the labels y
See: http://wiki.fast.ai/index.php/Log_Loss
We need to clip values that get too close to zero to avoid zeroing out.
Zeroing out is when a number gets so small that the computer replaces it with 0.
Therefore, we clip numbers to a minimum value.
'''
minval = 0.000000000001
m = y.shape[0]
l = -1/m * np.sum(y * np.log(y_hat.clip(min=minval)) + (1-y) * np.log((1-y_hat).clip(min=minval)))
return l
# Derivative of log loss function
def log_loss_derivative(self,y,y_hat):
'''
Calculates the gradient (derivative) of the log loss between point y and y_hat
See: https://stats.stackexchange.com/questions/219241/gradient-for-logistic-loss-function
'''
return (y_hat-y)
# Forward prop (forward pass) function
def forward_propagation(self,A0):
'''
Forward propagates through the model, stores results in cache.
See: https://stats.stackexchange.com/questions/147954/neural-network-forward-propagation
A0 is the activation at layer zero, it is the same as X
'''
# Load parameters from model
W1, b1 = self.model['W1'],self.model['b1']
# Do the linear step
z1 = A0.dot(W1) + b1
#Pass the linear step through the activation function
A1 = self.sigmoid(z1)
# Store results in cache
self.cache = {'A0':A0,'z1':z1,'A1':A1}
return
# Backprop function
def backward_propagation(self,y):
'''
Backward propagates through the model to calculate gradients.
Stores gradients in grads dictionary.
See: https://en.wikipedia.org/wiki/Backpropagation
'''
# Load results from forward pass
A0, z1, A1 = self.cache['A0'],self.cache['z1'], self.cache['A1']
# Load model parameters
W1, b1 = self.model['W1'], self.model['b1']
# Read m, the number of examples
m = A0.shape[0]
# Calculate the gradient of the loss function
dz1 = self.log_loss_derivative(y=y,y_hat=A1)
# Calculate the derivative of the loss with respect to the weights W1
dW1 = 1/m*(A0.T).dot(dz1)
# Calculate the derivative of the loss with respect to the bias b1
db1 = 1/m*np.sum(dz1, axis=0, keepdims=True)
#Make sure the weight derivative has the same shape as the weights
assert(dW1.shape == W1.shape)
# Store gradients in gradient dictionary
self.grads = {'dW1':dW1,'db1':db1}
return
# Parameter update
def update_parameters(self):
'''
Updates parameters accoarding to gradient descent algorithm
See: https://en.wikipedia.org/wiki/Gradient_descent
'''
# Load model parameters
W1, b1 = self.model['W1'],self.model['b1']
# Load gradients
dW1, db1 = self.grads['dW1'], self.grads['db1']
# Update weights
W1 -= self.learning_rate * dW1
# Update bias
b1 -= self.learning_rate * db1
# Store new parameters in model dictionary
self.model = {'W1':W1,'b1':b1}
return
# Prediction function
def predict(self,X):
'''
Predicts y_hat as 1 or 0 for a given input X
'''
# Do forward pass
self.forward_propagation(X)
# Get output of regressor
regressor_output = self.cache['A1']
# Turn values to either 1 or 0
regressor_output[regressor_output > 0.5] = 1
regressor_output[regressor_output < 0.5] = 0
# Return output
return regressor_output
# Train function
def train(self,X,y, epochs):
'''
Trains the regressor on a given training set X, y for the specified number of epochs.
'''
# Set up array to store losses
losses = []
# Loop through epochs
for i in range(epochs):
# Forward pass
self.forward_propagation(X)
# Calculate loss
loss = self.log_loss(y,self.cache['A1'])
# Store loss
losses.append(loss)
# Print loss every 10th iteration
if (i%10 == 0):
print('Epoch:',i,' Loss:', loss)
# Do the backward propagation
self.backward_propagation(y)
# Update parameters
self.update_parameters()
# Return losses for analysis
return losses
Explanation: The regressor class
Let's jump straight into the code. In this chapter, we will create a python class for our logistic regressor. If you are unfamiliar with classes in python, check out Jeff Knup's blogpost for a nice overview. Read the code below carefully; we will deconstruct the different functions afterwards
End of explanation
#Seed the random function to ensure that we always get the same result
np.random.seed(1)
#Variable definition
#define X
X = np.array([[0,1,0],
[1,0,0],
[1,1,1],
[0,1,1]])
#define y
y = np.array([[0,1,1,0]]).T
# Define instance of class
regressor = LogisticRegressor(input_dim=3,learning_rate=1)
# Train classifier
losses = regressor.train(X,y,epochs=100)
# Plot the losses for analyis
plt.plot(losses)
Explanation: Using the regressor
To use the regressor, we define an instance of the class and can then train it. Here we will use the same data as in chapter 2.
End of explanation
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_blobs(n_samples=200,centers=2)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
Explanation: Revisiting the training process
As you can see, our classifier still works! We have improved modularity and created an easier to debug classifier. Let's have a look at its overall structure. As you can see, we make use of three dictionaries:
- model: Stores the model parameters, weights and bias
- cache: Stores all intermediate results from the forward pass. These are needed for the backward propagation
- grads: Stores the gradients from the backward propagation
These dictionaries store all information required to run the training process:
We run this process many times over. One full cycle done with the full training set is called an epoch. How often we have to go through this process can vary, depending on the complexity of the problem we want to solve and the learning rate $\alpha$. You see alpha being used in the code above already so let's give it a closer look.
What is the learning rate anyway?
The learning rate is a lot like the throttle setting of our learning algorithm. It is the multiplier applied to the update that a parameter experiences.
$$a := a - \alpha * \frac{dL(a)}{da}$$
A high learning rate means that the parameters get updated by larger amounts. This can lead to faster training, but it can also mean that we might jump over a minimum.
As you can see with a bigger learning rate we are approaching the minimum much faster. But as we get close, our steps are too big and we are skipping over it. This can even lead to our loss going up over time.
Choosing the right learning rate is therefore crucial. Too small and our learning algorithm might be too slow. Too high and it might fail to converge at a minimum. So in the next step, we will have a look at how to tune this hyper parameter.
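In code, this is exactly the step performed by update_parameters; stripped to its essence (a schematic fragment using the class's own variable names) it is:
# Schematic form of the gradient descent step in update_parameters()
W1 = W1 - learning_rate * dW1   # a larger learning_rate means larger parameter jumps
b1 = b1 - learning_rate * db1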
A slightly harder problem
So far we have worked with a really simple dataset in which one input feature is perfectly correlated with the labels $y$. Now we will look at a slightly harder problem.
We generate a dataset of two point clouds and we want to train our regressor on separating them. The data generation is done with sklearn's dataset generator.
End of explanation
# Define instance of class
# Learning rate = 10, deliberately set very high
regressor = LogisticRegressor(input_dim=2,learning_rate=10)
# Train classifier
losses = regressor.train(X,y,epochs=100)
Explanation: Looking at the data we see that it is possible to separate the two clouds quite well, but there is a lot of noise so we can not hope to achieve zero loss. But we can get close to it. Let's set up a regressor. Here we will use a learning rate of 10, which is quite high.
End of explanation
plt.plot(losses)
Explanation: You will probably even get an error message mentioning an overflow and it doesn't look like the regressor converged smoothly. This was a bumpy ride.
End of explanation
# Define instance of class
# Learning rate = 0.05
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
Explanation: As you can see, the loss first went up quite significantly before then coming down. At multiple instances it moves up again. This is a clear sign that the learning rate is too large, let's try a lower one
End of explanation
# Define instance of class
# Learning rate = 0.0005
regressor = LogisticRegressor(input_dim=2,learning_rate=0.0005)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
Explanation: This looks a bit smoother already, and you can see that the error is nearly ten times lower in the end. Let's try an even lower learning rate to see where we can take this.
End of explanation
# Define instance of class
# Tweak learning rate here
regressor = LogisticRegressor(input_dim=2,learning_rate=1)
# Train classifier
losses = regressor.train(X,y,epochs=100)
plt.plot(losses)
Explanation: This is a very smooth gradient descent but also a very slow one. The error is more than twice as high as before in the end. If we would let this run for a few more epochs we probably could achieve a very good model but at a very large computing expense.
How to find a good value for the learning rate
A good learning rate converges fast and leads to low loss. But there is no silver bullet perfect learning rate that always works. It usually depends on your project. It is as much art as it is science to tune the learning rate and only repeated experimentation can lead you to a good result. Experience shows however, that a good learning rate is usually around 0.1, even though it can well be different for other projects.
To practice tuning the learning rate, play around with the example below and see whether you can find an appropriate one that converges fast and at a low loss.
End of explanation
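One simple way to search for a reasonable rate (a hypothetical sketch, not part of the original chapter) is to try a few values spaced on a log scale and compare the final losses:
# Sketch: compare final losses for a handful of candidate learning rates
for lr in [1, 0.3, 0.1, 0.03, 0.01]:
    reg = LogisticRegressor(input_dim=2, learning_rate=lr)
    final_loss = reg.train(X, y, epochs=100)[-1]
    print('learning rate', lr, '-> final loss', final_loss)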
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates the boundary plot.
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole gid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y.flatten(), cmap=plt.cm.Spectral)
Explanation: Visualizing our regressor
In the last part of this chapter, I would like to give a closer look at what our regressor actually does. To do so, we will plot the decision boundary, that is the boundary the regressor assigns between the two classes.
End of explanation
# Define instance of class
# Learning rate = 0.05
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
Explanation: To plot the boundary, we train a new regressor first.
End of explanation
# Plot the decision boundary
plot_decision_boundary(lambda x: regressor.predict(x))
plt.title("Decision Boundary for logistic regressor")
Explanation: And then we plot the boundary. Again, do not worry if you do not understand exactly what is going on here, as it is not part of the class.
End of explanation
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.1)
y = y.reshape(200,1)
plt.scatter(X[:,0], X[:,1], s=40, c=y.flatten(), cmap=plt.cm.Spectral)
# Define instance of class
# Learning rate = 0.05
y = y.reshape(200,1)
regressor = LogisticRegressor(input_dim=2,learning_rate=0.05)
# Train classifier
losses = regressor.train(X,y,epochs=100)
# Plot the decision boundary
plot_decision_boundary(lambda x: regressor.predict(x))
plt.title("Decision Boundary for hidden layer size 3")
Explanation: As you can see, our logistic regressor separates the two clouds with a simple line. This is appropriate for this case but might fail when the boundary is a more complex function. Let's try out a more complex function.
End of explanation |
3,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example with real audio recordings
The iterations are dropped in contrast to the offline version. To use past observations the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\alpha$ is the decay factor.
Setup
Step1: Audio data
Step2: Online buffer
For simplicity the STFT is performed before providing the frames.
Shape
Step3: Non-iterative frame online approach
A frame online example requires that certain state variables are kept from frame to frame: the inverse correlation matrix $\text{R}_{t, f}^{-1}$, which is stored in Q and initialized with an identity matrix, as well as the filter coefficient matrix, which is stored in G and initialized with zeros.
Again for simplicity the ISTFT is applied in Numpy afterwards.
Step4: Power spectrum
Before and after applying WPE. | Python Code:
channels = 8
sampling_rate = 16000
delay = 3
alpha=0.99
taps = 10
frequency_bins = stft_options['size'] // 2 + 1
Explanation: Example with real audio recordings
The iterations are dropped in contrast to the offline version. To use past observations the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\alpha$ is the decay factor.
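The decaying window boils down to an exponential recursive average; the following is only a generic illustration of that idea, not the exact update implemented inside online_wpe_step.
# Illustration only: exponentially-weighted recursive correlation estimate with decay factor alpha
def decayed_correlation(R_prev, x, alpha):
    return alpha * R_prev + (1 - alpha) * np.outer(x, np.conj(x))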
Setup
End of explanation
file_template = 'AMI_WSJ20-Array1-{}_T10c0201.wav'
signal_list = [
sf.read(str(project_root / 'data' / file_template.format(d + 1)))[0]
for d in range(channels)
]
y = np.stack(signal_list, axis=0)
IPython.display.Audio(y[0], rate=sampling_rate)
Explanation: Audio data
End of explanation
Y = stft(y, **stft_options).transpose(1, 2, 0)
T, _, _ = Y.shape
def aquire_framebuffer():
buffer = list(Y[:taps+delay, :, :])
for t in range(taps+delay+1, T):
buffer.append(Y[t, :, :])
yield np.array(buffer)
buffer.pop(0)
Explanation: Online buffer
For simplicity the STFT is performed before providing the frames.
Shape: (frames, frequency bins, channels)
frames: K+delay+1
End of explanation
Z_list = []
Q = np.stack([np.identity(channels * taps) for a in range(frequency_bins)])
G = np.zeros((frequency_bins, channels * taps, channels))
with tf.Session() as session:
Y_tf = tf.placeholder(tf.complex128, shape=(taps + delay + 1, frequency_bins, channels))
Q_tf = tf.placeholder(tf.complex128, shape=(frequency_bins, channels * taps, channels * taps))
G_tf = tf.placeholder(tf.complex128, shape=(frequency_bins, channels * taps, channels))
results = online_wpe_step(Y_tf, get_power_online(tf.transpose(Y_tf, (1, 0, 2))), Q_tf, G_tf, alpha=alpha, taps=taps, delay=delay)
for Y_step in tqdm(aquire_framebuffer()):
feed_dict = {Y_tf: Y_step, Q_tf: Q, G_tf: G}
Z, Q, G = session.run(results, feed_dict)
Z_list.append(Z)
Z_stacked = np.stack(Z_list)
z = istft(np.asarray(Z_stacked).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift'])
IPython.display.Audio(z[0], rate=sampling_rate)
Explanation: Non-iterative frame online approach
A frame online example requires that certain state variables are kept from frame to frame: the inverse correlation matrix $\text{R}_{t, f}^{-1}$, which is stored in Q and initialized with an identity matrix, as well as the filter coefficient matrix, which is stored in G and initialized with zeros.
Again for simplicity the ISTFT is applied in Numpy afterwards.
End of explanation
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8))
im1 = ax1.imshow(20 * np.log10(np.abs(Y[200:400, :, 0])).T, origin='lower')
ax1.set_xlabel('')
_ = ax1.set_title('reverberated')
im2 = ax2.imshow(20 * np.log10(np.abs(Z_stacked[200:400, :, 0])).T, origin='lower')
_ = ax2.set_title('dereverberated')
cb = fig.colorbar(im1)
Explanation: Power spectrum
Before and after applying WPE.
End of explanation |
3,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vector Space
Geometric meaning of a vector
A vector $a$ of length $K$ can be viewed as an arrow in $K$-dimensional space connecting the origin to the point whose coordinates are given by the values of $a$.
$$ a = \begin{bmatrix}1 \ 2 \end{bmatrix} $$
Step1: The length of a vector
The length of a vector $a$ is called its norm $\| a \|$ and can be computed as follows.
$$ \| a \| = \sqrt{a^T a } = \sqrt{a_1^2 + \cdots + a_K^2} $$
The norm command in numpy's linalg subpackage computes the length of a vector.
Step2: Unit vectors
A vector of length 1 is called a unit vector. For example, the following vectors are all unit vectors.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} ,\;\;
c = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix}
$$
For any vector $x$, the following vector is a unit vector.
$$
\dfrac{x}{\| x \|}
$$
Step3: Vector addition
The sum of two vectors is again a vector.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 1\end{bmatrix} \;\;\; \rightarrow \;\;\;
c = a + b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
$$
Step4: A set of vectors is called a vector space if any linear combination (sum of scalar multiples) of two of its elements is also an element of the set.
$$ a, b \in \mathbf{R} \;\; \text{ and } \;\; \alpha_1a + \alpha_2b \in \mathbf{R} $$
Vector decomposition
When the sum of two vectors $a$ and $b$ equals another vector $c$, we can say that $c$ is decomposed into the two vector components $a$ and $b$.
Inner product of two vectors
The inner product of two vectors can also be computed from the vector lengths $\|a\|$, $\|b\|$ and the angle $\theta$ between them, as follows.
$$ a^Tb = \|a\|\|b\| \cos\theta $$
(Proof)
For two-dimensional vectors, the formula above can be proved as follows.
<img src="https
Step5: Projection
A vector $a$ can be decomposed into a component $a_1$ orthogonal to another vector $b$ and the remaining component $a_2 = a - a_1$. Here $a_2$ is parallel to $b$, and its length is called the projection of $a$ onto $b$.
The projection can be obtained using the inner product as follows.
$$ a = a_1 + a_2 $$
$$ a_1 \perp b \;\; \text{ and } \;\; a_2 = a - a_1 $$
then
$$ \| a_2 \| = a^T\dfrac{b}{\|b\|} = \dfrac{a^Tb}{\|b\|} = \dfrac{b^Ta}{\|b\|} $$
holds.
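A small numpy sketch of that formula (the two vectors here are arbitrary illustrations, not taken from the text):
# Length of the projection of a onto b, following the formula above
a = np.array([1, 2])
b = np.array([2, 0])
proj_length = a.dot(b) / np.linalg.norm(b)   # = a^T b / ||b||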
Step6: Lines
In a vector space, a line can be expressed by the following function.
$$
f(x) = w^T(x - w) = w^Tx - w^Tw = w^Tx - \| w \|^2 = w^Tx - w_0 = 0
$$
$x$ is a vector representing a point on the line, and $w$ is the vector representing the perpendicular from the origin to the line.
Saying that the vector $x-w$ is perpendicular to the vector $w$ means that the line segment connecting the point indicated by $x$ and the point indicated by $w$ is perpendicular to $w$.
For example, when
$$
w = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
w_0 = 5
$$
the equation
$$
\begin{bmatrix}1 & 2\end{bmatrix} \begin{bmatrix}x_1 \ x_2 \end{bmatrix} - 5 = x_1 + 2x_2 - 5 = 0
$$
describes the line that passes through the point (1, 2) indicated by the vector $w$ and is perpendicular to $w$.
Step7: Distance between a line and a point
The distance between the line $ w^Tx - w_0 = 0 $ and a point $x'$ not on the line is the absolute value of the projection of $x'$ onto the unit vector $\dfrac{w}{\|w\|}$ minus $\|w\|$. It can therefore be written as follows.
$$
\left| \dfrac{w^Tx'}{\|w\|} - \|w\| \right| = \dfrac{\left|w^Tx' - \|w\|^2 \right|}{\|w\|}= \dfrac{\left|w^Tx' - w_0 \right|}{\|w\|}
$$
Linear dependence and linear independence of vectors
If there exist scalar values, not all zero, for which a linear combination of the vectors equals zero, the vectors are said to be linearly dependent.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
c = \begin{bmatrix}10 \ 14\end{bmatrix} \;\;
$$
$$
2a + b - \frac{1}{2}c = 0
$$
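A one-line numpy check of that relation, using the three vectors above:
# Verify the linear dependence 2a + b - 0.5c = 0
a = np.array([1, 2]); b = np.array([3, 3]); c = np.array([10, 14])
2 * a + b - 0.5 * c   # -> array([0., 0.])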
Step8: If no scalar values other than all zeros make a linear combination of the vectors equal to zero, the vectors are said to be linearly independent.
$$ \alpha_1 a_1 + \cdots + \alpha_K a_K = 0 \;\;\;\; \leftrightarrow \;\;\;\; \alpha_1 = \cdots = \alpha_K = 0 $$
Basis vectors
If a set of vectors in a vector space is linearly independent and every other vector in the space can be expressed as a linear combination of that set, the set is called the basis vectors of the vector space.
For example, the following two vectors are basis vectors of a two-dimensional vector space.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
or
$$
a = \begin{bmatrix}1 \ 1\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 3\end{bmatrix} \;\;
$$
The following two vectors cannot be basis vectors of a two-dimensional vector space.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 4\end{bmatrix} \;\;
$$
Column space
A matrix can be viewed as a set of column vectors. The vector space generated by combinations of these column vectors is called the column space.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 7 & 1 & 8 \end{bmatrix}
\;\;\;\; \rightarrow \;\;\;\;
\alpha_1 \begin{bmatrix} 1 \ 2 \ 7 \end{bmatrix} +
\alpha_2 \begin{bmatrix} 5 \ 6 \ 1 \end{bmatrix} +
\alpha_3 \begin{bmatrix} 6 \ 8 \ 8 \end{bmatrix}
\; \in \; \text{column space}
$$
열 랭크
행렬의 열 벡터 중 서로 독립인 열 벡터의 최대 갯수를 열 랭크(column rank) 혹은 랭크(rank)라고 한다.
예를 들어 다음 행렬의 랭크는 2이다.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 3 & 11 & 14 \end{bmatrix}
$$
numpy의 linalg 서브 패키지의 matrix_rank 명령으로 랭크를 계산할 수 있다.
Step9: 좌표
벡터의 성분, 즉 좌표(coordinate)는 표준 기저 벡터들에 대한 해당 벡터의 투영(projection)으로 볼 수 있다.
Step10: 좌표 변환
새로운 기저 벡터를에 대해 벡터 투영을 계산하는 것을 좌표 변환(coordinate transform)이라고 한다.
좌표 변환은 새로운 기저 벡터로 이루어진 변환 행렬(transform matrix) $A$ 와의 내적으로 계산한다.
$$ Aa' = a $$
$$ a' = A^{-1}a $$
예를 들어, 기존의 기저 벡터가
$$
e_1 = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
e_2 = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
이면 벡터 $a$는 사실
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} = 2 \begin{bmatrix}1 \ 0\end{bmatrix} + 2 \begin{bmatrix}0 \ 1 \end{bmatrix} = 2 e_1 + 2 e_2
$$
새로운 기저 벡터가
$$
g_1 = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
g_2 = \begin{bmatrix} -\dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
$$
이면 벡터 $a$의 좌표는 다음과 같이 바뀐다.
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} \;\;\;\; \rightarrow \;\;\;\;
a' = A^{-1}a =
\begin{bmatrix}
e'_1 & e'_2
\end{bmatrix}
a
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & -\dfrac{1}{\sqrt{2}} \
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}^{-1}
\begin{bmatrix}2 \ 2\end{bmatrix}
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} \
-\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}
\begin{bmatrix}2 \ 2\end{bmatrix}
= \begin{bmatrix}2\sqrt{2}\0\end{bmatrix}
$$ | Python Code:
a = [1, 2]
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='black'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.text(0.35, 1.15, "$a$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-2.4, 3.4)
plt.ylim(-1.2, 3.2)
plt.show()
Explanation: 벡터 공간
벡터의 기하학적 의미
길이가 $K$인 벡터(vector) $a$는 $K$차원의 공간에서 원점과 벡터 $a$의 값으로 표시되는 점을 연결한 화살표(arrow)로 간주할 수 있다.
$$ a = \begin{bmatrix}1 \ 2 \end{bmatrix} $$
End of explanation
a = np.array([1, 1])
np.linalg.norm(a)
Explanation: 벡터의 길이
벡터 $a$ 의 길이를 놈(norm) $\| a \|$ 이라고 하며 다음과 같이 계산할 수 있다.
$$ \| a \| = \sqrt{a^T a } = \sqrt{a_1^2 + \cdots + a_K^2} $$
numpy의 linalg 서브 패키지의 norm 명령으로 벡터의 길이를 계산할 수 있다.
End of explanation
a = np.array([1, 0])
b = np.array([0, 1])
c = np.array([1/np.sqrt(2), 1/np.sqrt(2)])
np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
Explanation: 단위 벡터
길이가 1인 벡터를 단위 벡터(unit vector)라고 한다. 예를 들어 다음과 같은 벡터들은 모두 단위 벡터이다.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} ,\;\;
c = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix}
$$
임의의 벡터 $x$에 대해 다음은 벡터는 단위 벡터가 된다.
$$
\dfrac{x}{\| x \|}
$$
End of explanation
a = np.array([1, 2])
b = np.array([2, 1])
c = a + b
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=b, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=c, xytext=(0,0), arrowprops=dict(facecolor='black'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.plot(b[0], b[1], 'ro', ms=10)
plt.plot(c[0], c[1], 'ro', ms=10)
plt.plot([a[0], c[0]], [a[1], c[1]], 'k--')
plt.plot([b[0], c[0]], [b[1], c[1]], 'k--')
plt.text(0.35, 1.15, "$a$", fontdict={"size": 18})
plt.text(1.15, 0.25, "$b$", fontdict={"size": 18})
plt.text(1.25, 1.45, "$c$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.4, 4.4)
plt.ylim(-0.6, 3.8)
plt.show()
Explanation: 벡터의 합
벡터와 벡터의 합은 벡터가 된다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 1\end{bmatrix} \;\;\; \rightarrow \;\;\;
c = a + b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
$$
End of explanation
a = np.array([1, 1])
b = np.array([-1, 1])
np.dot(a, b)
Explanation: 벡터의 집합 중에서 집합의 원소인 두 벡터의 선형 조합(스칼라 곱의 합)이 그 집합의 원소이면 벡터 공간이라고 한다.
$$ a, b \in \mathbf{R} \;\; \text{ and } \;\; \alpha_1a + \alpha_2b \in \mathbf{R} $$
벡터의 분해
어떤 두 벡터 $a$, $b$의 합이 다른 벡터 $c$가 될 때 $c$가 두 벡터 성분(vector component) $a$, $b$으로 분해(decomposition)된다고 말할 수 있다.
두 벡터의 내적
두 벡터의 내적은 다음과 같이 벡터의 길이 $\|a\|$, $\|b\|$ 와 두 벡터 사이의 각도 $\theta$로 계산할 수도 있다.
$$ a^Tb = \|a\|\|b\| \cos\theta $$
(증명)
위 식은 2차원 벡터의 경우 다음과 같이 증명할 수 있다.
<img src="https://datascienceschool.net/upfiles/2e57d9e9358241e5862fe734dfd245b2.png">
위 그림과 같은 삼각형에서 세 변은 다음과 같은 공식을 만족한다. (코사인 법칙)
$$
\|a−b\|^2=\|a\|^2+\|b\|^2−2\|a\|\|b\|\cos\theta
$$
$$
\begin{eqnarray}
\|a−b\|^2
&=& (a−b)^T(a−b) \
&=& a^Ta − 2 ( a^Tb ) + b^T b \
&=& \|a\|^2+\|b\|^2 − 2 a^T b
\end{eqnarray}
$$
두 식이 같으므로
$$ a^Tb = \|a\|\|b\| \cos\theta $$
벡터의 직교
두 벡터 $a$와 $b$가 이루는 각이 90도이면 서로 직교(orthogonal)라고 하며 $ a \perp b $로 표시한다.
$\cos 90^{\circ} = 0$이므로 서로 직교인 두 벡터의 벡터 내적(inner product, dot product)는 0이된다.
$$ a^T b = b^T a = 0 \;\;\;\; \leftrightarrow \;\;\;\; a \perp b $$
예를 들어 다음 두 벡터는 서로 직교한다.
$$
a = \begin{bmatrix}1 \ 1\end{bmatrix} ,\;\;
b = \begin{bmatrix}-1 \ 1\end{bmatrix} \;\;\;\; \rightarrow \;\;\;\;
a^T b = \begin{bmatrix}1 & 1\end{bmatrix} \begin{bmatrix}-1 \ 1\end{bmatrix} = -1 + 1 = 0
$$
End of explanation
a = np.array([1, 2])
b = np.array([2, 0])
a2 = np.dot(a, b)/np.linalg.norm(b) * np.array([1, 0])
a1 = a - a2
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=b, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=a2, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=a1, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.plot(b[0], b[1], 'ro', ms=10)
plt.text(0.35, 1.15, "$a$", fontdict={"size": 18})
plt.text(1.55, 0.15, "$b$", fontdict={"size": 18})
plt.text(-0.2, 1.05, "$a_1$", fontdict={"size": 18})
plt.text(0.50, 0.15, "$a_2$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.5, 3.5)
plt.ylim(-0.5, 3)
plt.show()
Explanation: 투영
벡터 $a$를 다른 벡터 $b$에 직교하는 성분 $a_1$ 와 나머지 성분 $a_2 = a - a_1$로 분해할 수 있다. 이 때 $a_2$는 $b$와 평행하며 이 길이를 벡터 $a$의 벡터 $b$에 대한 투영(projection)이라고 한다.
벡터의 투영은 다음과 같이 내적을 사용하여 구할 수 있다.
$$ a = a_1 + a_2 $$
$$ a_1 \perp b \;\; \text{ and } \;\; a_2 = a - a_1 $$
이면
$$ \| a_2 \| = a^T\dfrac{b}{\|b\|} = \dfrac{a^Tb}{\|b\|} = \dfrac{b^Ta}{\|b\|} $$
이다.
End of explanation
w = np.array([1, 2])
x1 = np.array([3, 1])
x2 = np.array([-1, 3])
w0 = 5
plt.annotate('', xy=w, xytext=(0,0), arrowprops=dict(facecolor='red'))
plt.annotate('', xy=x1, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=x2, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(w[0], w[1], 'ro', ms=10)
plt.plot(x1[0], x1[1], 'ro', ms=10)
plt.plot(x2[0], x2[1], 'ro', ms=10)
plt.plot([-3, 5], [4, 0], 'r-', lw=5)
plt.text(0.35, 1.15, "$w$", fontdict={"size": 18})
plt.text(1.55, 0.25, "$x_1$", fontdict={"size": 18})
plt.text(-0.9, 1.40, "$x_2$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.7, 4.2)
plt.ylim(-0.5, 3.5)
plt.show()
Explanation: 직선
벡터 공간에서 직선은 다음과 같은 함수로 표현할 수 있다.
$$
f(x) = w^T(x - w) = w^Tx - w^Tw = w^Tx - \| w \|^2 = w^Tx - w_0 = 0
$$
$x$는 직선 상의 점을 나타내는 벡터이고 $w$는 원점으로부터 직선까지 이어지는 수직선을 나타내는 벡터이다.
$x-w$ 벡터가 $w$ 벡터와 수직이라는 것은 $x$가 가리키는 점과 $w$가 가리키는 점을 이은 선이 $w$와 수직이라는 뜻이다.
예를 들어
$$
w = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
w_0 = 5
$$
일 때
$$
\begin{bmatrix}1 & 2\end{bmatrix} \begin{bmatrix}x_1 \ x_2 \end{bmatrix} - 5 = x_1 + 2x_2 - 5 = 0
$$
이면 벡터 $w$가 가리키는 점 (1, 2)를 지나면서 벡터 $w$에 수직인 선을 뜻한다.
End of explanation
a = np.array([1, 2])
b = np.array([3, 3])
c = np.array([10, 14])
2*a + b - 0.5*c
Explanation: 직선과 점의 거리
직선 $ w^Tx - w_0 = 0 $ 과 이 직선 위에 있지 않은 점 $x'$의 거리는 단위 벡터 $\dfrac{w}{\|w\|}$에 대한 $x'$의 투영에서 $\|w\|$를 뺀 값의 절대값이다. 따라서 다음과 같이 정리할 수 있다.
$$
\left| \dfrac{w^Tx'}{\|w\|} - \|w\| \right| = \dfrac{\left|w^Tx' - \|w\|^2 \right|}{\|w\|}= \dfrac{\left|w^Tx' - w_0 \right|}{\|w\|}
$$
벡터의 선형 종속과 선형 독립
벡터들의 선형 조합이 0이 되는 모두 0이 아닌 스칼라값들이 존재하면 그 벡터들은 선형 종속(linearly dependent)이라고 한다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
c = \begin{bmatrix}10 \ 14\end{bmatrix} \;\;
$$
$$
2a + b - \frac{1}{2}c = 0
$$
End of explanation
A = np.array([[1, 5, 6], [2, 6, 8], [3, 11, 14]])
np.linalg.matrix_rank(A)
Explanation: 벡터들의 선형 조합이 0이 되는 모두 0이 아닌 스칼라값들이 존재하지 않으면 그 벡터들은 선형 독립(linearly independent)이라고 한다.
$$ \alpha_1 a_1 + \cdots + \alpha_K a_K = 0 \;\;\;\; \leftrightarrow \;\;\;\; \alpha_1 = \cdots = \alpha_K = 0 $$
기저 벡터
벡터 공간에 속하는 벡터의 집합이 선형 독립이고 다른 모든 벡터 공간의 벡터들이 그 벡터 집합의 선형 조합으로 나타나면 그 벡터 집합을 벡터 공간의 기저 벡터(basis vector)라고 한다.
예를 들어 다음과 같은 두 벡터는 2차원 벡터 공간의 기저 벡터이다.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
또는
$$
a = \begin{bmatrix}1 \ 1\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 3\end{bmatrix} \;\;
$$
다음과 같은 두 벡터는 2차원 벡터 공간의 기저 벡터가 될 수 없다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 4\end{bmatrix} \;\;
$$
열 공간
행렬은 열 벡터의 집합으로 볼 수 있다. 이 때 열 벡터들의 조합으로 생성되는 벡터 공간을 열 공간(column space)이라고 한다.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 7 & 1 & 8 \end{bmatrix}
\;\;\;\; \rightarrow \;\;\;\;
\alpha_1 \begin{bmatrix} 1 \ 2 \ 7 \end{bmatrix} +
\alpha_2 \begin{bmatrix} 5 \ 6 \ 1 \end{bmatrix} +
\alpha_3 \begin{bmatrix} 6 \ 8 \ 8 \end{bmatrix}
\; \in \; \text{column space}
$$
열 랭크
행렬의 열 벡터 중 서로 독립인 열 벡터의 최대 갯수를 열 랭크(column rank) 혹은 랭크(rank)라고 한다.
예를 들어 다음 행렬의 랭크는 2이다.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 3 & 11 & 14 \end{bmatrix}
$$
numpy의 linalg 서브 패키지의 matrix_rank 명령으로 랭크를 계산할 수 있다.
End of explanation
e1 = np.array([1, 0])
e2 = np.array([0, 1])
a = np.array([2, 2])
plt.annotate('', xy=e1, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=e2, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.text(1.05, 1.35, "$a$", fontdict={"size": 18})
plt.text(-0.2, 0.5, "$e_1$", fontdict={"size": 18})
plt.text(0.5, -0.2, "$e_2$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.5, 3.5)
plt.ylim(-0.5, 3)
plt.show()
Explanation: 좌표
벡터의 성분, 즉 좌표(coordinate)는 표준 기저 벡터들에 대한 해당 벡터의 투영(projection)으로 볼 수 있다.
End of explanation
e1 = np.array([1, 0])
e2 = np.array([0, 1])
a = np.array([2, 2])
g1 = np.array([1, 1])/np.sqrt(2)
g2 = np.array([-1, 1])/np.sqrt(2)
plt.annotate('', xy=e1, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=e2, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=g1, xytext=(0,0), arrowprops=dict(facecolor='red'))
plt.annotate('', xy=g2, xytext=(0,0), arrowprops=dict(facecolor='red'))
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray', alpha=0.5))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.text(1.05, 1.35, "$a$", fontdict={"size": 18})
plt.text(-0.2, 0.5, "$e_1$", fontdict={"size": 18})
plt.text(0.5, -0.2, "$e_2$", fontdict={"size": 18})
plt.text(0.2, 0.5, "$g_1$", fontdict={"size": 18})
plt.text(-0.6, 0.2, "$g_2$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.5, 3.5)
plt.ylim(-0.5, 3)
plt.show()
A = np.vstack([g1, g2]).T
A
Ainv = np.linalg.inv(A)
Ainv
Ainv.dot(a)
Explanation: 좌표 변환
새로운 기저 벡터를에 대해 벡터 투영을 계산하는 것을 좌표 변환(coordinate transform)이라고 한다.
좌표 변환은 새로운 기저 벡터로 이루어진 변환 행렬(transform matrix) $A$ 와의 내적으로 계산한다.
$$ Aa' = a $$
$$ a' = A^{-1}a $$
예를 들어, 기존의 기저 벡터가
$$
e_1 = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
e_2 = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
이면 벡터 $a$는 사실
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} = 2 \begin{bmatrix}1 \ 0\end{bmatrix} + 2 \begin{bmatrix}0 \ 1 \end{bmatrix} = 2 e_1 + 2 e_2
$$
새로운 기저 벡터가
$$
g_1 = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
g_2 = \begin{bmatrix} -\dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
$$
이면 벡터 $a$의 좌표는 다음과 같이 바뀐다.
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} \;\;\;\; \rightarrow \;\;\;\;
a' = A^{-1}a =
\begin{bmatrix}
e'_1 & e'_2
\end{bmatrix}
a
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & -\dfrac{1}{\sqrt{2}} \
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}^{-1}
\begin{bmatrix}2 \ 2\end{bmatrix}
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} \
-\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}
\begin{bmatrix}2 \ 2\end{bmatrix}
= \begin{bmatrix}2\sqrt{2}\0\end{bmatrix}
$$
End of explanation |
3,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step1: To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
Step4: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step6: <img src="image/mean_variance.png" style="height
Step7: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step8: <img src="image/weight_biases.png" style="height
Step9: <img src="image/learn_rate_tune.png" style="height
Step10: Test
Set the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 3. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
!which python
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in differents font.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
End of explanation
import hashlib
import os
import pickle
from urllib.request import urlretrieve
#from urllib2 import urlopen
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
image_data_min = np.min(image_data)
image_data_max = np.max(image_data)
a,b = 0.1,0.9
image_data_prime = a + (image_data-image_data_min)*(b-a)/(image_data_max-image_data_min)
return image_data_prime
# TODO: Implement Min-Max scaling for grayscale image data
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/mean_variance.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
features_count = 784
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32,[None,features_count])
labels = tf.placeholder(tf.float32,[None,labels_count])
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal([features_count,labels_count]))
biases = tf.Variable(tf.zeros([labels_count]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: <img src="image/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%">
Problem 2
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# TODO: Find the best parameters for each configuration
epochs = 5
batch_size = 100
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/learn_rate_tune.png" style="height: 60%;width: 60%">
Problem 3
Below are 3 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Batch Size:
* 2000
* 1000
* 500
* 300
* 50
* Learning Rate: 0.01
Configuration 2
* Epochs: 1
* Batch Size: 100
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 3
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Batch Size: 100
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
Configuration 1
Epoch 1, Batch size 50, learning rate 0.01
Configuration 2
Epoch 1 , Batch size 100, learning rate 0.1
Configuration 3
End of explanation
# TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3
epochs = 5
batch_size = 100
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
Set the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 3. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
3,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
iPython Cookbook - Monte Carlo Pricing II - Call (Lognormal)
Pricing a call option with Monte Carlo (Normal model)
Step1: Those are our option and market parameters
Step2: We now define our payoff function using a closure
Step3: We also define an analytic function for calculation the price of a call using the Black Scholes formula, allowing us to benchmark our results
Step4: We now generate a set of Standard Gaussian variables $z$ as a basis for our simulation...
Step5: ...and transform it in a lognormal variable with the right mean and log standard deviation, ie a variable that is distributed according to $LN(F,\sigma\sqrt{T})$. Specifically, to transform a Standard Gaussian $Z$ into a lognormal $X$ with the above parameters we use the following formula
$$
X = F \times \exp ( -0.5 \sigma^2 T + \sigma \sqrt{T} Z )
$$
Step6: We first look at the histogram of the spot prices $x$ (the function trim_vals simply deals with with the fact that histogram returns the starting and the ending point of the bin, ie overall one point too many)
Step7: We now determine the payoff values from our draws of the final spot price. Note that we need to use the map command rather than simply writing po = payoff(x). The reason for this is that this latter form is not compatible with the if statement in our payoff function. We also already compute the forward value of the option, which is simply the average payoff over all simulations.
Step8: Now we produce the histogram of the payoffs
Step9: In the next step we compute our "Greeks", ie a number of derivatives of the forward value with respect to the underlying parameters. What is crucial here is that those derivative are calculated on the same draw random numbers $z$, otherwise the Monte Carlo sampling error will dwarf the signal. The sensitivities we compute are to increase / decrease the forward by one currency unit (for Delta and Gamma), to increase the volatility by one currency unit (for Vega), and to decrease the time to maturity by 0.1y (for Theta)
Step10: Licence and version
(c) Stefan Loesch / oditorium 2014; all rights reserved
(license) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: iPython Cookbook - Monte Carlo Pricing II - Call (Lognormal)
Pricing a call option with Monte Carlo (Normal model)
End of explanation
strike = 100
mat = 1
forward = 100
vol = 0.3
Explanation: Those are our option and market parameters: the exercise price of the option strike, the forward price of the underlying security forward and its volatility vol (as the model is lognormal, the volatility is a percentage number; eg 0.20 = 20%)
End of explanation
def call(k=100):
def payoff(spot):
if spot > k:
return spot - k
else:
return 0
return payoff
payoff = call(k=strike)
Explanation: We now define our payoff function using a closure: the variable payoff represents a function with one parameter spot with the strike k being frozen at whatever value it had when the outer function call was called to set payoff
End of explanation
from scipy.stats import norm
def bscall(fwd=100,strike=100,sig=0.1,mat=1):
lnfs = log(fwd/strike)
sig2t = sig*sig*mat
sigsqrt = sig*sqrt(mat)
d1 = (lnfs + 0.5 * sig2t)/ sigsqrt
d2 = (lnfs - 0.5 * sig2t)/ sigsqrt
fv = fwd * norm.cdf (d1) - strike * norm.cdf (d2)
#print "d1 = %f (N = %f)" % (d1, norm.cdf (d1))
#print "d2 = %f (N = %f)" % (d2, norm.cdf (d2))
return fv
#bscall(fwd=100, strike=100, sig=0.1, mat=1)
Explanation: We also define an analytic function for calculation the price of a call using the Black Scholes formula, allowing us to benchmark our results
End of explanation
N = 10000
z = np.random.standard_normal((N))
#z
Explanation: We now generate a set of Standard Gaussian variables $z$ as a basis for our simulation...
End of explanation
x = forward * exp(- 0.5 * vol * vol * mat + vol * sqrt(mat) * z)
min(x), max(x), mean(x)
Explanation: ...and transform it in a lognormal variable with the right mean and log standard deviation, ie a variable that is distributed according to $LN(F,\sigma\sqrt{T})$. Specifically, to transform a Standard Gaussian $Z$ into a lognormal $X$ with the above parameters we use the following formula
$$
X = F \times \exp ( -0.5 \sigma^2 T + \sigma \sqrt{T} Z )
$$
End of explanation
def trim_xvals(a):
a1 = np.zeros(len(a)-1)
for idx in range(0,len(a)-1):
#a1[idx] = 0.5*(a[idx]+a[idx+1])
a1[idx] = a[idx]
return a1
hg0=np.histogram(x, bins=50)
xvals0 = trim_xvals(hg0[1])
fwd1 = mean(x)
print ("forward = %f" % (fwd1))
plt.bar(xvals0,hg0[0], width=0.5*(xvals0[1]-xvals0[0]))
plt.title('forward distribution')
plt.xlabel('forward')
plt.ylabel('occurrences')
plt.show()
Explanation: We first look at the histogram of the spot prices $x$ (the function trim_vals simply deals with with the fact that histogram returns the starting and the ending point of the bin, ie overall one point too many)
End of explanation
po = list(map(payoff,x))
fv = mean(po)
#po
Explanation: We now determine the payoff values from our draws of the final spot price. Note that we need to use the map command rather than simply writing po = payoff(x). The reason for this is that this latter form is not compatible with the if statement in our payoff function. We also already compute the forward value of the option, which is simply the average payoff over all simulations.
End of explanation
hg = np.histogram(po,bins=50)
xvals = trim_xvals(hg[1])
plt.bar(xvals,hg[0], width=0.9*(xvals[1]-xvals[0]))
plt.title('payout distribution')
plt.xlabel('payout')
plt.ylabel('occurrences')
plt.show()
Explanation: Now we produce the histogram of the payoffs
End of explanation
x = (forward+1) * exp(- 0.5 * vol * vol * mat + vol * sqrt(mat) * z)
po = list(map(payoff,x))
fv_plus = mean(po)
x = (forward-1) * exp(- 0.5 * vol * vol * mat + vol * sqrt(mat) * z)
po = list(map(payoff,x))
fv_minus = mean(po)
x = forward * exp(- 0.5 * (vol+0.01) * (vol+0.01) * mat + (vol+0.01) * sqrt(mat) * z)
po = list(map(payoff,x))
fv_volp = mean(po)
x = forward * exp(- 0.5 * vol * vol * (mat-0.1) + vol * sqrt(mat-0.1) * z)
po = list(map(payoff,x))
fv_timep = mean(po)
print ("Strike = %f" % strike)
print ("Maturity = %f" % mat)
print ("Forward = %f" % forward)
print ("Volatility = %f" % vol)
print ("FV = %f" % fv)
print (" check = %f" % bscall(fwd=forward, strike=strike, sig=vol, mat=mat))
print ("Delta = %f" % ((fv_plus - fv_minus)/2))
print ("Gamma = %f" % ((fv_plus + fv_minus - 2 * fv)))
print ("Theta = %f" % ((fv_timep - fv)))
print ("Vega = %f" % ((fv_volp - fv)))
Explanation: In the next step we compute our "Greeks", ie a number of derivatives of the forward value with respect to the underlying parameters. What is crucial here is that those derivative are calculated on the same draw random numbers $z$, otherwise the Monte Carlo sampling error will dwarf the signal. The sensitivities we compute are to increase / decrease the forward by one currency unit (for Delta and Gamma), to increase the volatility by one currency unit (for Vega), and to decrease the time to maturity by 0.1y (for Theta)
End of explanation
import sys
print(sys.version)
Explanation: Licence and version
(c) Stefan Loesch / oditorium 2014; all rights reserved
(license)
End of explanation |
3,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wind and Sea Level Pressure Interpolation
Interpolate sea level pressure, as well as wind component data,
to make a consistent looking analysis, featuring contours of pressure and wind barbs.
Step1: Read in data
Step2: Project the lon/lat locations to our final projection
Step3: Remove all missing data from pressure
Step4: Interpolate pressure using Cressman interpolation
Step5: Get wind information and mask where either speed or direction is unavailable
Step6: Calculate u and v components of wind and then interpolate both.
Both will have the same underlying grid so throw away grid returned from v interpolation.
Step7: Get temperature information
Step8: Set up the map and plot the interpolated grids appropriately. | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from matplotlib.colors import BoundaryNorm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from metpy.calc import wind_components
from metpy.cbook import get_test_data
from metpy.interpolate import interpolate_to_grid, remove_nan_observations
from metpy.plots import add_metpy_logo
from metpy.units import units
to_proj = ccrs.AlbersEqualArea(central_longitude=-97., central_latitude=38.)
Explanation: Wind and Sea Level Pressure Interpolation
Interpolate sea level pressure, as well as wind component data,
to make a consistent looking analysis, featuring contours of pressure and wind barbs.
End of explanation
with get_test_data('station_data.txt') as f:
data = pd.read_csv(f, header=0, usecols=(2, 3, 4, 5, 18, 19),
names=['latitude', 'longitude', 'slp', 'temperature', 'wind_dir',
'wind_speed'],
na_values=-99999)
Explanation: Read in data
End of explanation
lon = data['longitude'].values
lat = data['latitude'].values
xp, yp, _ = to_proj.transform_points(ccrs.Geodetic(), lon, lat).T
Explanation: Project the lon/lat locations to our final projection
End of explanation
x_masked, y_masked, pres = remove_nan_observations(xp, yp, data['slp'].values)
Explanation: Remove all missing data from pressure
End of explanation
slpgridx, slpgridy, slp = interpolate_to_grid(x_masked, y_masked, pres, interp_type='cressman',
minimum_neighbors=1, search_radius=400000,
hres=100000)
Explanation: Interpolate pressure using Cressman interpolation
End of explanation
wind_speed = (data['wind_speed'].values * units('m/s')).to('knots')
wind_dir = data['wind_dir'].values * units.degree
good_indices = np.where((~np.isnan(wind_dir)) & (~np.isnan(wind_speed)))
x_masked = xp[good_indices]
y_masked = yp[good_indices]
wind_speed = wind_speed[good_indices]
wind_dir = wind_dir[good_indices]
Explanation: Get wind information and mask where either speed or direction is unavailable
End of explanation
u, v = wind_components(wind_speed, wind_dir)
windgridx, windgridy, uwind = interpolate_to_grid(x_masked, y_masked, np.array(u),
interp_type='cressman', search_radius=400000,
hres=100000)
_, _, vwind = interpolate_to_grid(x_masked, y_masked, np.array(v), interp_type='cressman',
search_radius=400000, hres=100000)
Explanation: Calculate u and v components of wind and then interpolate both.
Both will have the same underlying grid so throw away grid returned from v interpolation.
End of explanation
x_masked, y_masked, t = remove_nan_observations(xp, yp, data['temperature'].values)
tempx, tempy, temp = interpolate_to_grid(x_masked, y_masked, t, interp_type='cressman',
minimum_neighbors=3, search_radius=400000, hres=35000)
temp = np.ma.masked_where(np.isnan(temp), temp)
Explanation: Get temperature information
End of explanation
levels = list(range(-20, 20, 1))
cmap = plt.get_cmap('viridis')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 360, 120, size='large')
view = fig.add_subplot(1, 1, 1, projection=to_proj)
view._hold = True # Work-around for CartoPy 0.16/Matplotlib 3.0.0 incompatibility
view.set_extent([-120, -70, 20, 50])
view.add_feature(cfeature.STATES.with_scale('50m'))
view.add_feature(cfeature.OCEAN)
view.add_feature(cfeature.COASTLINE.with_scale('50m'))
view.add_feature(cfeature.BORDERS, linestyle=':')
cs = view.contour(slpgridx, slpgridy, slp, colors='k', levels=list(range(990, 1034, 4)))
view.clabel(cs, inline=1, fontsize=12, fmt='%i')
mmb = view.pcolormesh(tempx, tempy, temp, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0.02, boundaries=levels)
view.barbs(windgridx, windgridy, uwind, vwind, alpha=.4, length=5)
view.set_title('Surface Temperature (shaded), SLP, and Wind.')
plt.show()
Explanation: Set up the map and plot the interpolated grids appropriately.
End of explanation |
3,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Minimax Algorithm with Memoization
This notebook implements the minimax algorithm with memoization
and thereby implements a program that can play various deterministic, zero-sum, turn-taking, two-person games with perfect information. The implementation assumes that an external notebook defines a game and that this notebook provides the following variables and functions
Step1: The function minValue(State) takes one argument
Step2: The function best_move takes one argument
Step3: The next line is needed because we need the function IPython.display.clear_output to clear the output in a cell.
Step4: The function play_game plays a game on the given canvas. The game played is specified indirectly as follows
Step5: Below, the jupyter magic command %%capture silently discards the output that is produced by the notebook Tic-Tac-Toe.ipynb.
Step6: With the game tic-tac-toe represented as lists, computing the value of the start state takes about 4 seconds on my Windows PC (Processor
Step7: The start state has the value 0as neither player can force a win.
Step8: Let's draw the board and play a game.
Step9: Now it's time to play. In the input window that will pop up later, enter your move in the format "row,col" with no space between row and column.
Step10: Using the BitBoard Implementation of TicTacToe
Next, we try how much the bit-board implementation speeds up the game.
Step11: On my computer, the bit-board implementation is about twice as fast as the list based implementation.
Step12: Memoization
Step13: The list based implementation of TicTacToe with Memoization takes 84 ms.
The bit-board based implementation of TicTacToe takes 38 ms.
Step14: Let us check the size of the cache. | Python Code:
def maxValue(State):
if finished(State):
return utility(State)
return max([ minValue(ns) for ns in next_states(State, gPlayers[0]) ])
Explanation: The Minimax Algorithm with Memoization
This notebook implements the minimax algorithm with memoization
and thereby implements a program that can play various deterministic, zero-sum, turn-taking, two-person games with perfect information. The implementation assumes that an external notebook defines a game and that this notebook provides the following variables and functions:
* gPlayers is a list of length two. The elements of this list are the
names of the players. It is assumed that the first element in this list represents
the computer, while the second element is the human player. The computer
always starts the game.
* gStart is the start state of the game.
* next_states(State, player) is a function that takes two arguments:-Stateis a state of the game.
-playeris the player whose turn it is to make a move.
The function callnext_states(State, player)returns the list
of all states that can be reached by any move ofplayer.
*utility(State)takes a state and a player as its arguments.
Ifstateis a *terminal state* (i.e. a state where the game is finished),
then the function returns the value that thisstatehas forgPlayer[0]. Otherwise, the function returnsNone.
*finished(State)returnsTrueif and only ifstateis a terminal state.
*get_move(State)displays the given state and asks the human player for
her move.
*final_msg(State)informs the human player about the result of the game.
*draw(State, canvas, value)draws the given state on the given canvas and
informs the user about thevalue` of this state. The value is always
calculated from the perspective of the first player, which is the computer.
The function maxValue(State) takes one argument:
- State is the current state of the game.
The function assumes that it is the first player's turn. It returns the value that State has
if both players play their best game. This value is an element from the set ${-1, 0, 1}$.
* If the first player can force a win, then maxValue returns the value 1.
* If the first player can at best force a draw, then the return value is 0.
* If the second player can force a win, then the return value is -1.
Mathematically, the function maxValue is defined recursively:
- $\;\;\texttt{finished}(s) \rightarrow \texttt{maxValue}(s) = \texttt{utility}(s)$
- $\neg \texttt{finished}(s) \rightarrow
\texttt{maxValue}(s) = \max\bigl(\bigl{ \texttt{minValue}(n) \bigm| n \in \texttt{nextStates}(s, \texttt{gPlayers}[0]) \bigr}\bigr)
$
End of explanation
def minValue(State):
if finished(State):
return utility(State)
return min([ maxValue(ns) for ns in next_states(State, gPlayers[1]) ])
Explanation: The function minValue(State) takes one argument:
- State is the current state of the game.
The function assumes that it is the second player's turn. It returns the value that State has
if both players play their best game. This value is an element from the set ${-1, 0, 1}$.
* If the first player can force a win, then the return value is 1.
* If the first player can at best force a draw, then the return value is 0.
* If the second player can force a win, then the return value is -1.
Mathematically, the function minValue is defined recursively:
- $\texttt{finished}(s) \rightarrow \texttt{minValue}(s) = \texttt{utility}(s)$
- $\neg \texttt{finished}(s) \rightarrow
\texttt{minValue}(s) = \min\bigl(\bigl{ \texttt{maxValue}(n) \bigm| n \in \texttt{nextStates}(s, \texttt{gPlayers}[1]) \bigr}\bigr)
$
End of explanation
import random
random.seed(1)
def best_move(State):
NS = next_states(State, gPlayers[0])
bestVal = maxValue(State)
BestMoves = [s for s in NS if minValue(s) == bestVal]
BestState = random.choice(BestMoves)
return bestVal, BestState
Explanation: The function best_move takes one argument:
- State is the current state of the game.
It is assumed that the first player in the list Player is to move.
The function best_move returns a pair of the form $(v, s)$ where $s$ is a state and $v$ is the value of this state. The state $s$ is a state that is reached from State if player makes one of her optimal moves. In order to have some variation in the game, the function randomly chooses any of the optimal moves.
End of explanation
import IPython.display
Explanation: The next line is needed because we need the function IPython.display.clear_output to clear the output in a cell.
End of explanation
def play_game(canvas):
State = gStart
while True:
val, State = best_move(State);
draw(State, canvas, f'For me, the game has the value {val}.')
if finished(State):
final_msg(State)
return
IPython.display.clear_output(wait=True)
State = get_move(State)
draw(State, canvas, '')
if finished(State):
IPython.display.clear_output(wait=True)
final_msg(State)
return
Explanation: The function play_game plays a game on the given canvas. The game played is specified indirectly as follows:
- gStart is a global variable defining the start state of the game.
This variable is defined in the notebook that defines the game that is played.
The same holds for the other functions mentioned below.
- next_states is a function such that $\texttt{next_states}(s, p)$ computes the set of all possible states that can be reached from state $s$ if player $p$ is next to move.
- finished is a function such that $\texttt{finished}(s)$ is true for a state $s$ if the game is over in state $s$.
- utility is a function such that $\texttt{utility}(s)$ returns either -1, 0, or 1 in the terminal state $s$. We have that
- $\texttt{utility}(s)= -1$ iff the game is lost for the first player in state $s$,
- $\texttt{utility}(s)= 0$ iff the game is drawn, and
- $\texttt{utility}(s)= 1$ iff the game is won for the first player in state $s$.
End of explanation
%%capture
%run Tic-Tac-Toe.ipynb
Explanation: Below, the jupyter magic command %%capture silently discards the output that is produced by the notebook Tic-Tac-Toe.ipynb.
End of explanation
%%time
val = maxValue(gStart)
Explanation: With the game tic-tac-toe represented as lists, computing the value of the start state takes about 4 seconds on my Windows PC (Processor: AMD Ryzen Threadripper PRO 3955WX with 16 Cores, 4.1 GHz).
End of explanation
val
Explanation: The start state has the value 0as neither player can force a win.
End of explanation
canvas = create_canvas()
draw(gStart, canvas, f'Current value of game for "X": {val}')
Explanation: Let's draw the board and play a game.
End of explanation
play_game(canvas)
Explanation: Now it's time to play. In the input window that will pop up later, enter your move in the format "row,col" with no space between row and column.
End of explanation
%%capture
%run Tic-Tac-Toe-BitBoard.ipynb
Explanation: Using the BitBoard Implementation of TicTacToe
Next, we try how much the bit-board implementation speeds up the game.
End of explanation
%%time
val = maxValue(gStart)
canvas = create_canvas()
draw(gStart, canvas, f'Current value of game for "X": {val}')
play_game(canvas)
Explanation: On my computer, the bit-board implementation is about twice as fast as the list based implementation.
End of explanation
gCache = {}
def memoize(f):
global gCache
def f_memoized(*args):
if (f, args) in gCache:
return gCache[(f, args)]
result = f(*args)
gCache[(f, args)] = result
return result
return f_memoized
maxValue = memoize(maxValue)
minValue = memoize(minValue)
Explanation: Memoization
End of explanation
%%time
val = maxValue(gStart)
Explanation: The list based implementation of TicTacToe with Memoization takes 84 ms.
The bit-board based implementation of TicTacToe takes 38 ms.
End of explanation
len(gCache)
Explanation: Let us check the size of the cache.
End of explanation |
3,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CartesianCoords and PolarCoords are classes that were designed to be used in-house for the conversion between Cartesian and Polar coordinates. You just need to initialise the object with some coordinates, and then it is easy to extract the relevant information.
3D coordinates are possible, but the z-coordinate has a default value of 0.
Step1: pmt.PolarCoords works in exactly the same way, but instead you initialise it with polar coordinates (radius, azimuth and height (optional), respectively) and the cartesian ones can be extracted as above.
Function 1
Step2: Takes three arguments by default
Step3: And of course, as n becomes large, the polygon tends to a circle
Step4: Function 2
Step5: Only has one default argument
Step6: Function 3
Step7: Requires two arguments
Step8: Function 4
Step9: If you need to specify the side length, or the distance from the circumcentre to the middle of one of the faces, this function will convert that value to the circumradius (not diameter!) that would give the correct side length or apothem.
Step10: Using this in combination with plot_poly_fidi_mesh | Python Code:
cc = pmt.CartesianCoords(5,5)
print("2D\n")
print("x-coordinate: {}".format(cc.x))
print("y-coordinate: {}".format(cc.y))
print("radial: {}".format(cc.r))
print("azimuth: {}".format(cc.a))
cc3D = pmt.CartesianCoords(1,2,3)
print("\n3D\n")
print("x-coordinate: {}".format(cc3D.x))
print("y-coordinate: {}".format(cc3D.y))
print("z-coordinate: {}".format(cc3D.z))
print("radial: {}".format(cc3D.r))
print("azimuth: {}".format(cc3D.a))
print("height: {}".format(cc3D.h))
Explanation: CartesianCoords and PolarCoords are classes that were designed to be used in-house for the conversion between Cartesian and Polar coordinates. You just need to initialise the object with some coordinates, and then it is easy to extract the relevant information.
3D coordinates are possible, but the z-coordinate has a default value of 0.
End of explanation
print(pmt.in_poly.__doc__)
Explanation: pmt.PolarCoords works in exactly the same way, but instead you initialise it with polar coordinates (radius, azimuth and height (optional), respectively) and the cartesian ones can be extracted as above.
Function 1: in_poly
End of explanation
pmt.in_poly(x=5, y=30, n=3, r=40, plot=True)
pmt.in_poly(x=5, y=30, n=3, r=40) # No graph will be generated, more useful for use within other functions
pmt.in_poly(x=0, y=10, n=6, r=20, plot=True) # Dot changes colour to green when inside the polygon
import numpy as np
pmt.in_poly(x=-10, y=-25, n=6, r=20, rotation=np.pi/6, translate=(5,-20), plot=True) # Rotation and translation
Explanation: Takes three arguments by default:
x, specifying the x-coordinate of the point you would like to test
y, specifying the y-coordinate of the point you would like to test
n, the number of sides of the polygon
Optional arguments are:
r, the radius of the circumscribed circle (equal to the distance from the circumcentre to one of the vertices). Default r=1
rotation, the anti-clockwise rotation of the shape in radians. Default rotation=0
translate, specifies the coordinates of the circumcentre, given as a tuple (x,y). Default translate=(0,0)
plot, a boolean value to determine whether or not the plot is shown. Default plot=False
Examples below:
End of explanation
pmt.in_poly(x=3, y=5, n=100, r=10, plot=True)
Explanation: And of course, as n becomes large, the polygon tends to a circle:
End of explanation
print(pmt.plot_circular_fidi_mesh.__doc__)
Explanation: Function 2: plot_circular_fidi_mesh
End of explanation
pmt.plot_circular_fidi_mesh(diameter=60)
pmt.plot_circular_fidi_mesh(diameter=60, x_spacing=2, y_spacing=2, centre_mesh=True)
# Note the effect of centre_mesh=True. In the previous plot, the element boundaries are aligned with 0 on the x- and y-axes.
# In this case, centring the mesh has the effect of producing a mesh that is slightly wider than desired, shown below.
pmt.plot_circular_fidi_mesh(diameter=30, x_spacing=1, y_spacing=2, show_axes=False, show_title=False)
# Flexible element sizes. Toggling axes and title can make for prettier (albeit less informative) pictures.
Explanation: Only has one default argument:
diameter, the diameter of the circle you would like to plot
Optional arguments:
x_spacing, the width of the mesh elements. Default x_spacing=2
y_spacing, the height of the mesh elements. Default y_spacing=2 (only integers are currently supported for x- and y-spacing.)
centre_mesh, outlined in the documentation above. Default centre_mesh='auto'
show_axes, boolean, self-explanatory. Default show_axes=True
show_title, boolean, self-explanatory. Default show_title=True
End of explanation
print(pmt.plot_poly_fidi_mesh.__doc__)
Explanation: Function 3: plot_poly_fidi_mesh
End of explanation
pmt.plot_poly_fidi_mesh(diameter=50, n=5, x_spacing=1, y_spacing=1, rotation=np.pi/10)
Explanation: Requires two arguments:
diameter, the diameter of the circumscribed circle
n, the number of sides the polygon should have
Optional arguments:
x_spacing
y_spacing
centre_mesh
show_axes
show_title
(The five arguments above behave exactly as in plot_circular_fidi_mesh; the two below behave as in in_poly)
rotation
translate
End of explanation
print(pmt.find_circumradius.__doc__)
Explanation: Function 4: find_circumradius
End of explanation
pmt.find_circumradius(n=3, side=10)
Explanation: If you need to specify the side length, or the distance from the circumcentre to the middle of one of the faces, this function will convert that value to the circumradius (not diameter!) that would give the correct side length or apothem.
End of explanation
d1 = 2*pmt.find_circumradius(n=3, side=40)
pmt.plot_poly_fidi_mesh(diameter=d1, n=3, x_spacing=1, y_spacing=1)
# It can be seen on the y-axis that the side has a length of 40, as desired.
d2 = 2*pmt.find_circumradius(n=5, apothem=20)
pmt.plot_poly_fidi_mesh(diameter=d2, n=5, x_spacing=1, y_spacing=1)
# The circumcentre lies at (0,0), and the leftmost side is in line with x=-20
Explanation: Using this in combination with plot_poly_fidi_mesh:
End of explanation |
3,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BERT Experiments with passage
In this notebook we repeat the experiments from the first BERT notebook, but this time we also feed the passage to the model. This results in the following differences
Step1: Data
We use the same data as for all our previous experiments. Here we load the training, development and test data for a particular prompt. This time we also read the passage.
Step2: Next, we build the label vocabulary, which maps every label in the training data to an index.
Step3: Model
We load the pretrained model and put it on a GPU if one is available. We also put the model in "training" mode, so that we can correctly update its internal parameters on the basis of our data sets.
Step6: Preprocessing
We preprocess the data by turning every example to an InputFeatures item. This item has all the attributes we need for finetuning BERT
Step7: Next, we initialize data loaders for each of our data sets. These data loaders present the data for training (for example, by grouping them into batches).
Step8: Evaluation
Our evaluation method takes a pretrained model and a dataloader. It has the model predict the labels for the items in the data loader, and returns the loss, the correct labels, and the predicted labels.
Step9: Training
Let's prepare the training. We set the training parameters and choose an optimizer and learning rate scheduler.
Step10: Now we do the actual training. In each epoch, we present the model with all training data and compute the loss on the training set and the development set. We save the model whenever the development loss improves. We end training when we haven't seen an improvement of the development loss for a specific number of epochs (the patience).
Optionally, we use gradient accumulation to accumulate the gradient for several training steps. This is useful when we want to use a larger batch size than our current GPU allows us to do.
Step11: Results
We load the pretrained model, set it to evaluation mode and compute its performance on the training, development and test set. We print out an evaluation report for the test set.
Note that different runs will give slightly different results. | Python Code:
import torch
from pytorch_transformers.tokenization_bert import BertTokenizer
from pytorch_transformers.modeling_bert import BertForSequenceClassification
BERT_MODEL = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(BERT_MODEL)
Explanation: BERT Experiments with passage
In this notebook we repeat the experiments from the first BERT notebook, but this time we also feed the passage to the model. This results in the following differences:
We read a text file with the passage and concatenate the passage to the responses.
Running the model will take longer, because the input is considerably longer.
Depending on the available memory on the GPU, we may have to bring down the batch size and use gradient accumulation to accumulate the gradients across batches.
Note that BERT only takes inputs with a maximum length of 512 (after tokenization). This may become a problem with long passages, but it looks like our passages are typically shorter than that.
End of explanation
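Because of the 512-token limit mentioned above, it can be worth checking how long the tokenized passage actually is before training. This is only an illustrative sketch (not part of the original notebook); it uses the tokenizer created above and the passage variable loaded in the next cell.
passage_ids = tokenizer.encode(passage)
print("Passage length in BERT tokens:", len(passage_ids))
if len(passage_ids) > 512:
    print("Warning: the passage alone already exceeds BERT's maximum input length.")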
import ndjson
import glob
train_files = glob.glob("../data/interim/eatingmeat_emma_train_withprompt*.ndjson")
dev_file = "../data/interim/eatingmeat_emma_dev_withprompt.ndjson"
test_file = "../data/interim/eatingmeat_emma_test_withprompt.ndjson"
passage_file = "../data/raw/eatingmeat_passage.txt"
train_data = []
for train_file in train_files:
print(train_file)
with open(train_file) as i:
train_data += ndjson.load(i)
with open(dev_file) as i:
dev_data = ndjson.load(i)
with open(test_file) as i:
test_data = ndjson.load(i)
with open(passage_file) as i:
passage = "".join(i.readlines())
Explanation: Data
We use the same data as for all our previous experiments. Here we load the training, development and test data for a particular prompt. This time we also read the passage.
End of explanation
label2idx = {}
target_names = []
for item in train_data:
if item["label"] not in label2idx:
target_names.append(item["label"])
label2idx[item["label"]] = len(label2idx)
label2idx
Explanation: Next, we build the label vocabulary, which maps every label in the training data to an index.
End of explanation
model = BertForSequenceClassification.from_pretrained(BERT_MODEL, num_labels=len(label2idx))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train()
Explanation: Model
We load the pretrained model and put it on a GPU if one is available. We also put the model in "training" mode, so that we can correctly update its internal parameters on the basis of our data sets.
End of explanation
import logging
import warnings
import numpy as np
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
MAX_SEQ_LENGTH=512
class InputFeatures(object):
    """A single set of features of data."""
def __init__(self, input_ids, input_mask, segment_ids, label_id):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
def convert_examples_to_features(examples, passage, label2idx, max_seq_length, tokenizer, verbose=0):
    """Loads a data file into a list of InputFeatures objects."""
features = []
for (ex_index, ex) in enumerate(examples):
# TODO: should deal better with sentences > max tok length
input_ids = tokenizer.encode("[CLS] " + passage + " " + ex["text"] + " [SEP]")
if len(input_ids) > max_seq_length:
warnings.warn("Input longer than maximum sequence length.")
input_ids = input_ids[:max_seq_length]
segment_ids = [0] * len(input_ids)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
padding = [0] * (max_seq_length - len(input_ids))
input_ids += padding
input_mask += padding
segment_ids += padding
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
label_id = label2idx[ex["label"]]
if verbose and ex_index == 0:
logger.info("*** Example ***")
logger.info("text: %s" % ex["text"])
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
logger.info("label:" + str(ex["label"]) + " id: " + str(label_id))
features.append(
InputFeatures(input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
label_id=label_id))
return features
train_features = convert_examples_to_features(train_data, passage, label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=0)
dev_features = convert_examples_to_features(dev_data, passage, label2idx, MAX_SEQ_LENGTH, tokenizer)
test_features = convert_examples_to_features(test_data, passage, label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=1)
Explanation: Preprocessing
We preprocess the data by turning every example to an InputFeatures item. This item has all the attributes we need for finetuning BERT:
input ids: the ids of the tokens in the text
input mask: tells BERT what part of the input it should not look at (such as padding tokens)
segment ids: tells BERT what segment every token belongs to. BERT can take two different segments as input
label id: the id of this item's label
End of explanation
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler
def get_data_loader(features, max_seq_length, batch_size):
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
sampler = RandomSampler(data, replacement=False)
dataloader = DataLoader(data, sampler=sampler, batch_size=batch_size)
return dataloader
BATCH_SIZE = 2
train_dataloader = get_data_loader(train_features, MAX_SEQ_LENGTH, BATCH_SIZE)
dev_dataloader = get_data_loader(dev_features, MAX_SEQ_LENGTH, BATCH_SIZE)
test_dataloader = get_data_loader(test_features, MAX_SEQ_LENGTH, BATCH_SIZE)
Explanation: Next, we initialize data loaders for each of our data sets. These data loaders present the data for training (for example, by grouping them into batches).
End of explanation
def evaluate(model, dataloader):
eval_loss = 0
nb_eval_steps = 0
predicted_labels, correct_labels = [], []
for step, batch in enumerate(tqdm(dataloader, desc="Evaluation iteration")):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, label_ids = batch
with torch.no_grad():
tmp_eval_loss, logits = model(input_ids, segment_ids, input_mask, label_ids)
outputs = np.argmax(logits.to('cpu'), axis=1)
label_ids = label_ids.to('cpu').numpy()
predicted_labels += list(outputs)
correct_labels += list(label_ids)
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
correct_labels = np.array(correct_labels)
predicted_labels = np.array(predicted_labels)
return eval_loss, correct_labels, predicted_labels
Explanation: Evaluation
Our evaluation method takes a pretrained model and a dataloader. It has the model predict the labels for the items in the data loader, and returns the loss, the correct labels, and the predicted labels.
End of explanation
from pytorch_transformers.optimization import AdamW, WarmupLinearSchedule
GRADIENT_ACCUMULATION_STEPS = 8
NUM_TRAIN_EPOCHS = 20
LEARNING_RATE = 1e-5
WARMUP_PROPORTION = 0.1
def warmup_linear(x, warmup=0.002):
if x < warmup:
return x/warmup
return 1.0 - x
num_train_steps = int(len(train_data) / BATCH_SIZE / GRADIENT_ACCUMULATION_STEPS * NUM_TRAIN_EPOCHS)
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=LEARNING_RATE, correct_bias=False)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=100, t_total=num_train_steps)
Explanation: Training
Let's prepare the training. We set the training parameters and choose an optimizer and learning rate scheduler.
End of explanation
import os
from tqdm import trange
from tqdm import tqdm_notebook as tqdm
from sklearn.metrics import classification_report, precision_recall_fscore_support
OUTPUT_DIR = "/tmp/"
MODEL_FILE_NAME = "pytorch_model.bin"
PATIENCE = 5
global_step = 0
model.train()
loss_history = []
best_epoch = 0
for epoch in trange(int(NUM_TRAIN_EPOCHS), desc="Epoch"):
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(tqdm(train_dataloader, desc="Training iteration")):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, label_ids = batch
outputs = model(input_ids, segment_ids, input_mask, label_ids)
loss = outputs[0]
if GRADIENT_ACCUMULATION_STEPS > 1:
loss = loss / GRADIENT_ACCUMULATION_STEPS
loss.backward()
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
if (step + 1) % GRADIENT_ACCUMULATION_STEPS == 0:
lr_this_step = LEARNING_RATE * warmup_linear(global_step/num_train_steps, WARMUP_PROPORTION)
for param_group in optimizer.param_groups:
param_group['lr'] = lr_this_step
optimizer.step()
optimizer.zero_grad()
global_step += 1
dev_loss, _, _ = evaluate(model, dev_dataloader)
print("Loss history:", loss_history)
print("Dev loss:", dev_loss)
if len(loss_history) == 0 or dev_loss < min(loss_history):
model_to_save = model.module if hasattr(model, 'module') else model
output_model_file = os.path.join(OUTPUT_DIR, MODEL_FILE_NAME)
torch.save(model_to_save.state_dict(), output_model_file)
best_epoch = epoch
if epoch-best_epoch >= PATIENCE:
print("No improvement on development set. Finish training.")
break
loss_history.append(dev_loss)
Explanation: Now we do the actual training. In each epoch, we present the model with all training data and compute the loss on the training set and the development set. We save the model whenever the development loss improves. We end training when we haven't seen an improvement of the development loss for a specific number of epochs (the patience).
Optionally, we use gradient accumulation to accumulate the gradient for several training steps. This is useful when we want to use a larger batch size than our current GPU allows us to do.
End of explanation
print("Loading model from", output_model_file)
device="cpu"
model_state_dict = torch.load(output_model_file, map_location=lambda storage, loc: storage)
model = BertForSequenceClassification.from_pretrained(BERT_MODEL, state_dict=model_state_dict, num_labels=len(label2idx))
model.to(device)
model.eval()
_, train_correct, train_predicted = evaluate(model, train_dataloader)
_, dev_correct, dev_predicted = evaluate(model, dev_dataloader)
_, test_correct, test_predicted = evaluate(model, test_dataloader)
print("Training performance:", precision_recall_fscore_support(train_correct, train_predicted, average="micro"))
print("Development performance:", precision_recall_fscore_support(dev_correct, dev_predicted, average="micro"))
print("Test performance:", precision_recall_fscore_support(test_correct, test_predicted, average="micro"))
print(classification_report(test_correct, test_predicted, target_names=target_names))
Explanation: Results
We load the pretrained model, set it to evaluation mode and compute its performance on the training, development and test set. We print out an evaluation report for the test set.
Note that different runs will give slightly different results.
End of explanation |
3,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple 2D Plots using Matplotlib
Week 4 onwards (Term 2)
Import dependencies
Step1: Load the dataset from the csv file
Step2: Process the data
We will output the data in the form of a numpy matrix using our preprocessing functions.
The first row in the matrix (first data entry) and total number of samples will be displayed.
Step3: Prepare the individual data axis
Step4: Plot the data in 2D | Python Code:
import numpy as np
%run 'preprocessor.ipynb' #our own preprocessor functions
Explanation: Simple 2D Plots using Matplotlib
Week 4 onwards (Term 2)
Import dependencies
End of explanation
import csv  # needed for csv.reader below
with open('/Users/timothy/Desktop/Files/data_new/merged.csv', 'r') as f:
reader = csv.reader(f)
data = list(reader)
Explanation: Load the dataset from the csv file
End of explanation
matrix = obtain_data_matrix(data)
samples = len(matrix)
print("Number of samples: " + str(samples))
Explanation: Process the data
We will output the data in the form of a numpy matrix using our preprocessing functions.
The first row in the matrix (first data entry) and total number of samples will be displayed.
End of explanation
filament = matrix[:,[8]]
time = matrix[:,[9]]
satisfaction = matrix[:,[10]]
result = matrix[:,[11]]
Explanation: Prepare the individual data axes
End of explanation
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.figure(1,figsize=(12,8))
plt.xlabel('Print Time (mins)')
plt.ylabel('Filament (m)')
plt.title('Prints (Combined)')
plt.figure(2,figsize=(12,8))
plt.xlabel('Print Time (mins)')
plt.ylabel('Filament (m)')
plt.title('Prints (Failed Focus)')
plt.figure(3,figsize=(10,8))
plt.xlabel('Print Time (mins)')
plt.ylabel('Filament (m)')
plt.title('Prints (Success Only)')
for data in matrix:
filament = [data[0][:,8]]
time = [data[0][:,9]]
satisfaction = [data[:,[10]]]
result = [data[:,[11]]]
result_success = [[1]]
if result[0] == result_success:
plt.figure(1)
plt.scatter(time, filament, c="green", alpha=0.5,)
plt.figure(2)
plt.scatter(time, filament, c="green", alpha=0.1,)
plt.figure(3)
plt.scatter(time, filament, c="green", alpha=1,)
else:
plt.figure(1)
plt.scatter(time, filament, c="red", alpha=0.5,)
plt.figure(2)
plt.scatter(time, filament, c="red", alpha=1,)
plt.tight_layout()
plt.show()
Explanation: Plot the data in 2D
End of explanation |
3,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regressions
Author
Step1: Underfitting | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math
x = np.linspace(0,4*math.pi,100)
y = map(math.sin,x)
plt.plot(x,y)
plt.plot(x,y,'o')
plt.show()
Explanation: Regressions
Author: Yang Long
Email: [email protected]
Linear Regression
Logistic Regression
Softmax Regression
Regularization
Optimization Objective
Linear Regression
The most common form of linear regression can be written as:
$$ y = \theta X $$
which tells us that the value of $y$ is determined by a linear combination of the components of $X$. Keep in mind that the vector $X$ not only contains all the features ${x_i}$, but also the bias term $x_0$.
In some case, this linear regression can fit and predict the value of $y$ well, but unfortunately, it doesn't often behave as our wish. So for describing the bias between the practical value and analytical one, we define Cost Function J:
$$J = \frac{1}{m} \sum_{i=1}^{m} (\theta X^{(i)} - y^{(i)})^2 $$
Best value for parameters $\theta$ can minimize the cost function.
Logistic Regression
The common form:
$$ y = h_{\theta}(X) $$
$$ h_{\theta}(X) = g(\theta X) $$
The sigmoid function:
$$ g(z) = \frac {1} {1+e^{-z}}$$
The cost function J for logistic regression would be described by Cross-Entropy as the following form:
$$J = \frac{1}{m} \sum_{i=1}^{m} -y^{(i)}log(h_{\theta}(X^{(i)})) - (1-y^{(i)})log(1-h_{\theta}(X^{(i)}))$$
Softmax Regression
$$p(y^{(i)}=j|x^{(i)};\theta)=\frac{e^{\theta^T_j x^{(i)}}} {\sum_{l=1}^{k} {\theta^T_l x^{(i)}}}$$
$$J(\theta) = -\frac{1}{m}(\sum_{i=1}^{m} \sum_{j=1}^{k}1{y^{(i)}=j} log\frac{e^{\theta^T_j x^{(i)}}} {\sum_{l=1}^{k} {\theta^T_l x^{(i)}}})+\frac {\lambda}{2} \sum_{i=1}^{m} \sum_{j=0}^{k} \theta_{ij}^2$$
Fitting Error
As we know, simple linear regression can not fit some specific function very well, especially for regular function, i.e. $sin(x)$. So we will add or extract more features to minimize the difference between fitting one and practical one.
Overfitting and underfitting
Two kinds of errors frequently happen when fitting the training data -- Overfit and Underfit.
Overfitting
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import math
x = np.linspace(0,4*math.pi,100)
y = map(math.sin,x)
plt.plot(x,y)
plt.plot(x,y,'o')
plt.show()
Explanation: Underfitting
End of explanation |
3,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: We compare three implementations of the same Algorithm
Step5: Multiprocessing implementation
Step8: ipyparallel implementation
First start ipython cluster in terminal
<center> $ipcluster start -n 4 </center>
Step9: Performance test | Python Code:
from random import uniform
from time import time
def sample_circle(n):
    """throw n darts in [0, 1] * [0, 1] square, return the number
    of darts inside unit circle.
    Parameter
    ---------
    n: number of darts to throw.
    Return
    ------
    m: number of darts inside unit circle.
    """
m = 0
for i in range(int(n)):
x, y = uniform(0, 1), uniform(0, 1)
if x**2 + y**2 < 1:
m += 1
return m
def pi_serial(n):
    """Naive serial implementation of monte carlo pi using
    sample_circle function.
    Parameter
    ---------
    n: number of darts to throw
    Return
    ------
    value of pi in the monte-carlo simulation
    """
t1 = time()
m = sample_circle(n)
t2 = time()
t_loop = t2 - t1
pi_approx = 4. * m / n
return pi_approx, t_loop
n = 100000
%time pi_serial(n)
Explanation: We compare three implementations of the same Algorithm: calculating $\pi$ by Monte Carlo method
- Serial
- multiprocessing
- ipyparallel
Serial solution
End of explanation
from random import uniform
from multiprocessing import Pool
from time import time
def sample_circle(n):
    """throw n darts in [0, 1] * [0, 1] square, return the number
    of darts inside unit circle.
    Parameter
    ---------
    n: number of darts to throw.
    Return
    ------
    m: number of darts inside unit circle.
    """
m = 0
for i in range(int(n)):
x, y = uniform(0, 1), uniform(0, 1)
if x**2 + y**2 < 1:
m += 1
return m
def pi_mp(num_darts, num_proc=None, msg = False):
    """Calculate pi using multiprocessing
    Parameter
    ---------
    num_darts: total number of darts to throw to calculate pi
    num_proc: number of processors/ workers to assign, default
        value = os.cpu_count()
    Return
    ------
    pi_approx: approximated value of pi
    t_loop: time spent in monte carlo simulation in seconds.
        initializing and shutting down worker pool have been
        excluded in timing.
    """
# default number processes = num of CPUs
if not num_proc:
import os
num_proc = os.cpu_count()
t1 = time()
# average number of darts that each processor process
avg_load = num_darts // num_proc
    extra_load = num_darts % num_proc  # remainder so the loads sum exactly to num_darts
# initialize workload for processors
loads = [avg_load] * num_proc
loads[num_proc - 1] += extra_load
# start a pool of workers
pool = Pool(processes=num_proc)
t2 = time()
# assign jobs for each worker
result = pool.imap_unordered(sample_circle, loads)
# combine results from all workers
num_in_cirlce = sum(result)
t3 = time()
# shut down pool, remove pointer to pool object
# allowing garbage collectors release memory
pool.terminate()
del pool
t4 = time()
t_setup = t2-t1
t_loop = t3-t2
t_shutdown = t4-t3
pi_approx = 4 * num_in_cirlce / num_darts
if msg:
print("set up {0} workers used {1:.3g}s".format(num_proc, t_setup))
print("throwing {0} darts used {1:.3g}s".format(num_darts, t_loop))
print("terminate {0} workers used {1:.3g}s".format(num_proc, t_shutdown))
return pi_approx, t_loop
num_darts = 100000
pi_mp(num_darts)
Explanation: Multiprocessing implementation
End of explanation
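A side note on the design above: pool.terminate() plus del pool releases the workers explicitly so that only the simulation loop is timed. A common alternative, sketched here (not part of the original code), is to let a context manager handle the cleanup:
from multiprocessing import Pool

def pi_mp_ctx(num_darts, num_proc=4):
    # same work split as pi_mp, but the with-block closes the pool automatically
    loads = [num_darts // num_proc] * num_proc
    loads[-1] += num_darts % num_proc
    with Pool(processes=num_proc) as pool:
        num_in_circle = sum(pool.imap_unordered(sample_circle, loads))
    return 4 * num_in_circle / num_darts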
def sample_circle(n):
    """throw n darts in [0, 1] * [0, 1] square, return the number
    of darts inside unit circle.
    Parameter
    ---------
    n: number of darts to throw.
    Return
    ------
    m: number of darts inside unit circle.
    """
m = 0
for i in range(int(n)):
x, y = uniform(0, 1), uniform(0, 1)
if x**2 + y**2 < 1:
m += 1
return m
from ipyparallel import Client
from time import time
def pi_ipp(num_darts, msg=False):
    """Calculate pi using ipyparallel module
    Parameter
    ---------
    num_darts: total number of darts to throw to calculate pi
    num_proc: number of processors/ workers to assign, default
        value = os.cpu_count()
    Return
    ------
    pi_approx: approximated value of pi
    t_loop: time spent in monte carlo simulation in seconds.
        initializing ipyparallel client has been
        excluded in timing.
    """
t1 = time()
num_proc = len(clients.ids)
avg_load = num_darts // num_proc
    extra_load = num_darts % num_proc  # remainder so the loads sum exactly to num_darts
# initialize workload for processors
loads = [avg_load] * num_proc
loads[num_proc - 1] += extra_load
t2 = time()
result = dview.map_async(sample_circle, loads)
approx_pi = 4 * sum(result) / num_darts
t3 = time()
t_loop = t3 - t2
t_setup = t2 - t1
if msg:
print("set up {0} ipyparallel engines used {1:.3g}s".format(
num_proc, t_setup))
print("throwing {0} darts used {1:.3g}s".format(num_darts, t_loop))
return approx_pi, t_loop
clients = Client()
dview = clients.direct_view()
with dview.sync_imports():
from random import uniform
n = 100000
pi_ipp(n)
Explanation: ipyparallel implementation
First start ipython cluster in terminal
<center> $ipcluster start -n 4 </center>
End of explanation
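Before timing anything it can help to confirm that the engines started by ipcluster are actually visible to the client. A small illustrative check (not part of the original notebook):
from ipyparallel import Client

clients = Client()
print("Number of engines available:", len(clients.ids))
assert len(clients.ids) > 0, "No engines found - is 'ipcluster start -n 4' running?"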
n_benchmark = [10, 30, 100, 3e2, 1e3, 3e3, 1e4, 3e4, 1e5, 3e5, 1e6, 3e6, 1e7]
t_serial = [pi_serial(n)[1] for n in n_benchmark]
t_mp = [pi_mp(n)[1] for n in n_benchmark]
clients = Client()
dview = clients.direct_view()
with dview.sync_imports():
from random import uniform
t_ipp = [pi_ipp(n)[1] for n in n_benchmark]
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
fs = 20
f1, ax1= plt.subplots(figsize = [8, 6])
ax1.plot(n_benchmark, t_ipp, label = 'IPcluster')
ax1.plot(n_benchmark, t_mp, label = 'Multiprocessing')
ax1.plot(n_benchmark, t_serial, label = 'serial')
ax1.set_yscale('log')
ax1.set_xscale('log')
plt.legend(loc = 'best')
ax1.set_xlabel('# of darts thrown', fontsize = fs)
ax1.set_ylabel('Execution time (s): solid', fontsize = fs)
ax2 = ax1.twinx()
n_bm_np = np.array(n_benchmark)
ax2.plot(n_benchmark, n_bm_np/np.array(t_ipp), '--')
ax2.plot(n_benchmark, n_bm_np/np.array(t_mp), '--')
ax2.plot(n_benchmark, n_bm_np/np.array(t_serial), '--')
ax2.set_yscale('log')
ax2.set_xscale('log')
ax2.set_ylabel('simulation rate (darts/s): dashed', fontsize = fs)
plt.show()
# f1.savefig('performance.png', dpi = 300)
Explanation: Performance test
End of explanation |
3,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Grid algorithm for a logitnormal-binomial hierarchical model
Bayesian Inference with PyMC
Copyright 2021 Allen B. Downey
License
Step2: Heart Attack Data
This example is based on Chapter 10 of Probability and Bayesian Modeling; it uses data on death rates due to heart attack for patients treated at various hospitals in New York City.
We can use Pandas to read the data into a DataFrame.
Step3: The columns we need are Cases, which is the number of patients treated at each hospital, and Deaths, which is the number of those patients who died.
Step4: Hospital Data with PyMC
Here's a hierarchical model that estimates the death rate for each hospital, and simultaneously estimates the distribution of rates across hospitals.
Step5: Here are the posterior distributions of the hyperparameters
Step6: And we can extract the posterior distributions of the xs.
Step7: As an example, here's the posterior distribution of x for the first hospital.
Step8: Just one update
Step9: The grid priors
Step10: The joint distribution of hyperparameters
Step11: Joint prior of hyperparameters, and x
Step12: We can speed this up by computing skipping the terms that don't depend on x
Step13: The following function computes the marginal distributions.
Step14: And let's confirm that the marginal distributions are what they are supposed to be.
Step15: The Update
Step16: Multiple updates
Step17: One at a time | Python Code:
# If we're running on Colab, install libraries
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install pymc3
!pip install arviz
!pip install empiricaldist
# PyMC generates a FutureWarning we don't need to deal with yet
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import matplotlib.pyplot as plt
def legend(**options):
    """Make a legend only if there are labels."""
handles, labels = plt.gca().get_legend_handles_labels()
if len(labels):
plt.legend(**options)
def decorate(**options):
plt.gca().set(**options)
legend()
plt.tight_layout()
from empiricaldist import Cdf
def compare_cdf(pmf, sample):
pmf.make_cdf().plot(label='grid')
Cdf.from_seq(sample).plot(label='mcmc')
print(pmf.mean(), sample.mean())
decorate()
from empiricaldist import Pmf
def make_pmf(ps, qs, name):
pmf = Pmf(ps, qs)
pmf.normalize()
pmf.index.name = name
return pmf
Explanation: Grid algorithm for a logitnormal-binomial hierarchical model
Bayesian Inference with PyMC
Copyright 2021 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
import os
filename = 'DeathHeartAttackManhattan.csv'
if not os.path.exists(filename):
!wget https://github.com/AllenDowney/BayesianInferencePyMC/raw/main/DeathHeartAttackManhattan.csv
import pandas as pd
df = pd.read_csv(filename)
df
Explanation: Heart Attack Data
This example is based on Chapter 10 of Probability and Bayesian Modeling; it uses data on death rates due to heart attack for patients treated at various hospitals in New York City.
We can use Pandas to read the data into a DataFrame.
End of explanation
data_ns = df['Cases'].values
data_ks = df['Deaths'].values
Explanation: The columns we need are Cases, which is the number of patients treated at each hospital, and Deaths, which is the number of those patients who died.
End of explanation
import pymc3 as pm
import theano.tensor as tt
def make_model():
with pm.Model() as model:
mu = pm.Normal('mu', 0, 2)
sigma = pm.HalfNormal('sigma', sigma=1)
xs = pm.LogitNormal('xs', mu=mu, sigma=sigma, shape=len(data_ns))
ks = pm.Binomial('ks', n=data_ns, p=xs, observed=data_ks)
return model
%time model = make_model()
pm.model_to_graphviz(model)
with model:
pred = pm.sample_prior_predictive(1000)
%time trace = pm.sample(500, target_accept=0.97)
Explanation: Hospital Data with PyMC
Here's a hierarchical model that estimates the death rate for each hospital, and simultaneously estimates the distribution of rates across hospitals.
End of explanation
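For reference, the generative model coded above can be written out as follows, using the same parameter names as the PyMC model (j indexes hospitals):
$$\mu \sim \text{Normal}(0, 2), \qquad \sigma \sim \text{HalfNormal}(1)$$
$$x_j \sim \text{LogitNormal}(\mu, \sigma), \qquad k_j \sim \text{Binomial}(n_j, x_j)$$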
import arviz as az
with model:
az.plot_posterior(trace, var_names=['mu', 'sigma'])
Explanation: Here are the posterior distributions of the hyperparameters
End of explanation
trace_xs = trace['xs'].transpose()
trace_xs.shape
Explanation: And we can extract the posterior distributions of the xs.
End of explanation
with model:
az.plot_posterior(trace_xs[0])
Explanation: As an example, here's the posterior distribution of x for the first hospital.
End of explanation
i = 3
data_n = data_ns[i]
data_k = data_ks[i]
sample = pm.HalfNormal.dist().random(size=1000)
Cdf.from_seq(sample).plot()
def make_model1():
with pm.Model() as model1:
mu = pm.Normal('mu', 0, 2)
sigma = pm.HalfNormal('sigma', sigma=1)
x = pm.LogitNormal('x', mu=mu, sigma=sigma)
k = pm.Binomial('k', n=data_n, p=x, observed=data_k)
return model1
model1 = make_model1()
pm.model_to_graphviz(model1)
with model1:
pred1 = pm.sample_prior_predictive(1000)
trace1 = pm.sample(500, target_accept=0.97)
Cdf.from_seq(pred1['mu']).plot(label='prior', color='gray')
Cdf.from_seq(trace1['mu']).plot(label='posterior')
decorate(title='Distribution of mu')
Cdf.from_seq(pred1['sigma']).plot(label='prior', color='gray')
Cdf.from_seq(trace1['sigma']).plot(label='posterior')
decorate(title='Distribution of sigma')
Cdf.from_seq(pred1['x']).plot(label='prior', color='gray')
Cdf.from_seq(trace1['x']).plot(label='posterior')
decorate(title='Distribution of x')
Explanation: Just one update
End of explanation
import numpy as np
from scipy.stats import norm
mus = np.linspace(-6, 6, 201)
ps = norm.pdf(mus, 0, 2)
prior_mu = make_pmf(ps, mus, 'mu')
prior_mu.plot()
decorate(title='Prior distribution of mu')
from scipy.stats import logistic
sigmas = np.linspace(0.03, 3.6, 90)
ps = norm.pdf(sigmas, 0, 1)
prior_sigma = make_pmf(ps, sigmas, 'sigma')
prior_sigma.plot()
decorate(title='Prior distribution of sigma')
compare_cdf(prior_mu, pred1['mu'])
decorate(title='Prior distribution of mu')
compare_cdf(prior_sigma, pred1['sigma'])
decorate(title='Prior distribution of sigma')
Explanation: The grid priors
End of explanation
# TODO: Change these variable names
def make_hyper(prior_alpha, prior_beta):
PA, PB = np.meshgrid(prior_alpha.ps, prior_beta.ps, indexing='ij')
hyper = PA * PB
return hyper
prior_hyper = make_hyper(prior_mu, prior_sigma)
prior_hyper.shape
import pandas as pd
from utils import plot_contour
plot_contour(pd.DataFrame(prior_hyper, index=mus, columns=sigmas))
decorate(title="Joint prior of mu and sigma")
Explanation: The joint distribution of hyperparameters
End of explanation
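The meshgrid-and-multiply step in make_hyper is just an outer product of the two marginal priors, which are treated as independent a priori. A quick equivalence check, as a sketch:
import numpy as np

# make_hyper(prior_mu, prior_sigma) multiplies every P(mu_i) by every P(sigma_j)
hyper_outer = np.outer(prior_mu.ps, prior_sigma.ps)
print(np.allclose(hyper_outer, make_hyper(prior_mu, prior_sigma)))  # expected: True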
xs = np.linspace(0.01, 0.99, 99)
M, S, X = np.meshgrid(mus, sigmas, xs, indexing='ij')
from scipy.special import logit
LO = logit(X)
LO.sum()
from scipy.stats import beta as betadist
%time normpdf = norm.pdf(LO, M, S)
normpdf.sum()
Explanation: Joint prior of hyperparameters, and x
End of explanation
# TODO
totals = normpdf.sum(axis=2)
totals.sum()
shape = totals.shape + (1,)
totals = totals.reshape(shape)
out = np.zeros_like(normpdf)
normpdf = np.divide(normpdf, totals,
out=out, where=(totals!=0))
normpdf.sum()
def make_prior(hyper):
# reshape hyper so we can multiply along axis 0
shape = hyper.shape + (1,)
prior = normpdf * hyper.reshape(shape)
return prior
%time prior = make_prior(prior_hyper)
prior.sum()
Explanation: We can speed this up by computing skipping the terms that don't depend on x
End of explanation
def marginal(joint, axis):
axes = [i for i in range(3) if i != axis]
return joint.sum(axis=tuple(axes))
Explanation: The following function computes the marginal distributions.
End of explanation
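To make the axis bookkeeping concrete, here is a tiny illustrative check of marginal on a small 3-D array (not part of the original notebook): it sums out all axes except the requested one.
import numpy as np

tiny = np.arange(24).reshape(2, 3, 4)
print(marginal(tiny, 0).shape)  # (2,)  - summed over axes 1 and 2
print(marginal(tiny, 2).shape)  # (4,)  - summed over axes 0 and 1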
prior_mu.plot()
marginal_mu = Pmf(marginal(prior, 0), mus)
marginal_mu.plot()
decorate(title='Checking the marginal distribution of mu')
prior_sigma.plot()
marginal_sigma = Pmf(marginal(prior, 1), sigmas)
marginal_sigma.plot()
decorate(title='Checking the marginal distribution of sigma')
prior_x = Pmf(marginal(prior, 2), xs)
prior_x.plot()
decorate(title='Prior distribution of x',
ylim=[0, prior_x.max()*1.05])
compare_cdf(prior_x, pred1['x'])
decorate(title='Checking the marginal distribution of x')
def get_hyper(joint):
return joint.sum(axis=2)
hyper = get_hyper(prior)
plot_contour(pd.DataFrame(hyper,
index=mus,
columns=sigmas))
decorate(title="Joint prior of mu and sigma")
Explanation: And let's confirm that the marginal distributions are what they are supposed to be.
End of explanation
from scipy.stats import binom
like_x = binom.pmf(data_k, data_n, xs)
like_x.shape
plt.plot(xs, like_x)
decorate(title='Likelihood of the data')
def update(prior, data):
n, k = data
like_x = binom.pmf(k, n, xs)
posterior = prior * like_x
posterior /= posterior.sum()
return posterior
data = data_n, data_k
%time posterior = update(prior, data)
marginal_mu = Pmf(marginal(posterior, 0), mus)
compare_cdf(marginal_mu, trace1['mu'])
marginal_sigma = Pmf(marginal(posterior, 1), sigmas)
compare_cdf(marginal_sigma, trace1['sigma'])
marginal_x = Pmf(marginal(posterior, 2), xs)
compare_cdf(marginal_x, trace1['x'])
marginal_x.mean(), trace1['x'].mean()
posterior_hyper = get_hyper(posterior)
plot_contour(pd.DataFrame(posterior_hyper,
index=mus,
columns=sigmas))
decorate(title="Joint posterior of mu and sigma")
like_hyper = posterior_hyper / prior_hyper
plot_contour(pd.DataFrame(like_hyper,
index=mus,
columns=sigmas))
decorate(title="Likelihood of mu and sigma")
Explanation: The Update
End of explanation
prior = make_prior(prior_hyper)
prior.shape
def multiple_updates(prior, ns, ks, xs):
for data in zip(ns, ks):
print(data)
posterior = update(prior, data)
hyper = get_hyper(posterior)
prior = make_prior(hyper)
return posterior
%time posterior = multiple_updates(prior, data_ns, data_ks, xs)
marginal_mu = Pmf(marginal(posterior, 0), mus)
compare_cdf(marginal_mu, trace['mu'])
marginal_sigma = Pmf(marginal(posterior, 1), sigmas)
compare_cdf(marginal_sigma, trace['sigma'])
marginal_x = Pmf(marginal(posterior, 2), prior_x.qs)
compare_cdf(marginal_x, trace_xs[-1])
posterior_hyper = get_hyper(posterior)
plot_contour(pd.DataFrame(posterior_hyper,
index=mus,
columns=sigmas))
decorate(title="Joint posterior of mu and sigma")
like_hyper = posterior_hyper / prior_hyper
plot_contour(pd.DataFrame(like_hyper,
index=mus,
columns=sigmas))
decorate(title="Likelihood of mu and sigma")
Explanation: Multiple updates
End of explanation
def compute_likes_hyper(ns, ks):
shape = ns.shape + mus.shape + sigmas.shape
likes_hyper = np.empty(shape)
for i, data in enumerate(zip(ns, ks)):
print(data)
n, k = data
like_x = binom.pmf(k, n, xs)
posterior = normpdf * like_x
likes_hyper[i] = posterior.sum(axis=2)
print(likes_hyper[i].sum())
return likes_hyper
%time likes_hyper = compute_likes_hyper(data_ns, data_ks)
likes_hyper.sum()
like_hyper_all = likes_hyper.prod(axis=0)
like_hyper_all.sum()
plot_contour(pd.DataFrame(like_hyper_all,
index=mus,
columns=sigmas))
decorate(title="Likelihood of mu and sigma")
posterior_hyper_all = prior_hyper * like_hyper_all
posterior_hyper_all /= posterior_hyper_all.sum()
np.allclose(posterior_hyper_all, posterior_hyper)
marginal_mu2 = Pmf(posterior_hyper_all.sum(axis=1), mus)
marginal_mu2.make_cdf().plot()
marginal_mu.make_cdf().plot()
np.allclose(marginal_mu, marginal_mu2)
marginal_sigma2 = Pmf(posterior_hyper_all.sum(axis=0), sigmas)
marginal_sigma2.make_cdf().plot()
marginal_sigma.make_cdf().plot()
np.allclose(marginal_sigma, marginal_sigma2)
plot_contour(pd.DataFrame(posterior_hyper_all,
index=mus,
columns=sigmas))
decorate(title="Joint posterior of mu and sigma")
i = 3
data = data_ns[i], data_ks[i]
data
out = np.zeros_like(prior_hyper)
hyper_i = np.divide(prior_hyper * like_hyper_all, likes_hyper[i],
out=out, where=(like_hyper_all!=0))
hyper_i.sum()
prior_i = make_prior(hyper_i)
posterior_i = update(prior_i, data)
Pmf(marginal(posterior_i, 0), mus).make_cdf().plot()
marginal_mu.make_cdf().plot()
Pmf(marginal(posterior_i, 1), sigmas).make_cdf().plot()
marginal_sigma.make_cdf().plot()
marginal_mu = Pmf(marginal(posterior_i, 0), mus)
marginal_sigma = Pmf(marginal(posterior_i, 1), sigmas)
marginal_x = Pmf(marginal(posterior_i, 2), xs)
compare_cdf(marginal_mu, trace['mu'])
compare_cdf(marginal_sigma, trace['sigma'])
compare_cdf(marginal_x, trace_xs[i])
def compute_all_marginals(ns, ks):
prior = prior_hyper * like_hyper_all
for i, data in enumerate(zip(ns, ks)):
n, k = data
out = np.zeros_like(prior)
hyper_i = np.divide(prior, likes_hyper[i],
out=out, where=(prior!=0))
prior_i = make_prior(hyper_i)
posterior_i = update(prior_i, data)
marginal_x = Pmf(marginal(posterior_i, 2), xs)
marginal_x.make_cdf().plot()
print(i, n, k/n, marginal_x.mean())
for hyper_i in likes_hyper:
print(i, (hyper_i==0).sum())
%time compute_all_marginals(data_ns, data_ks)
Explanation: One at a time
End of explanation |
3,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Propensity to Buy
Company XYZ is into creating productivity apps on cloud. Their apps are quite popular across the industry spectrum - large enterprises, small and medium companies and startups - all of them use their apps.
A big challenge that their sales team need to know is to know if the product is ready to be bought by a customer. The products can take anywhere from 3 months to a year to be created/updated. Given the current state of the product, the sales team want to know if customers will be ready to buy.
They have anonymized data from various apps - and know if customers have bought the product or not.
Can you help the enterprise sales team in this initiative?
1. Frame
The first step is to convert the business problem into an analytics problem.
The sales team wants to know if a customer will buy the product, given its current development stage. This is a propensity to buy model. This is a classification problem and the preferred output is the propensity of the customer to buy the product
2. Acquire
The IT team has provided the data in a csv format. The file has the following fields
still_in_beta - Is the product still in beta
bugs_solved_3_months - Number of bugs solved in the last 3 months
bugs_solved_6_months - Number of bugs solved in the last 3 months
bugs_solved_9_months - Number of bugs solved in the last 3 months
num_test_accounts_internal - Number of test accounts internal teams have
time_needed_to_ship - Time needed to ship the product
num_test_accounts_external - Number of customers who have test account
min_installations_per_account - Minimum number of installations customer need to purchase
num_prod_installations - Current number of installations that are in production
ready_for_enterprise - Is the product ready for large enterprises
perf_dev_index - The development performance index
perf_qa_index - The QA performance index
sev1_issues_outstanding - Number of severity 1 bugs outstanding
potential_prod_issue - Is there a possibility of production issue
ready_for_startups - Is the product ready for startups
ready_for_smb - Is the product ready for small and medium businesses
sales_Q1 - Sales of product in last quarter
sales_Q2 - Sales of product 2 quarters ago
sales_Q3 - Sales of product 3 quarters ago
sales_Q4 - Sales of product 4 quarters ago
saas_offering_available - Is a SaaS offering available
customer_bought - Did the customer buy the product
Load the required libraries
Step1: Load the data
Step2: 3. Refine
Step3: 4. Explore
Step4: 5. Transform
Step5: 6. Model | Python Code:
#code here
Explanation: Propensity to Buy
Company XYZ is into creating productivity apps on cloud. Their apps are quite popular across the industry spectrum - large enterprises, small and medium companies and startups - all of them use their apps.
A big challenge that their sales team need to know is to know if the product is ready to be bought by a customer. The products can take anywhere from 3 months to a year to be created/updated. Given the current state of the product, the sales team want to know if customers will be ready to buy.
They have anonymized data from various apps - and know if customers have bought the product or not.
Can you help the enterprise sales team in this initiative?
1. Frame
The first step is to convert the business problem into an analytics problem.
The sales team wants to know if a customer will buy the product, given its current development stage. This is a propensity to buy model. This is a classification problem and the preferred output is the propensity of the customer to buy the product
2. Acquire
The IT team has provided the data in a csv format. The file has the following fields
still_in_beta - Is the product still in beta
bugs_solved_3_months - Number of bugs solved in the last 3 months
bugs_solved_6_months - Number of bugs solved in the last 3 months
bugs_solved_9_months - Number of bugs solved in the last 3 months
num_test_accounts_internal - Number of test accounts internal teams have
time_needed_to_ship - Time needed to ship the product
num_test_accounts_external - Number of customers who have test account
min_installations_per_account - Minimum number of installations customer need to purchase
num_prod_installations - Current number of installations that are in production
ready_for_enterprise - Is the product ready for large enterprises
perf_dev_index - The development performance index
perf_qa_index - The QA performance index
sev1_issues_outstanding - Number of severity 1 bugs outstanding
potential_prod_issue - Is there a possibility of production issue
ready_for_startups - Is the product ready for startups
ready_for_smb - Is the product ready for small and medium businesses
sales_Q1 - Sales of product in last quarter
sales_Q2 - Sales of product 2 quarters ago
sales_Q3 - Sales of product 3 quarters ago
sales_Q4 - Sales of product 4 quarters ago
saas_offering_available - Is a SaaS offering available
customer_bought - Did the customer buy the product
Load the required libraries
End of explanation
#code here
#train = pd.read_csv
Explanation: Load the data
End of explanation
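A minimal sketch for this step; the file name and path below are assumptions for illustration only - use whatever the IT team actually provided.
import pandas as pd

# hypothetical file name for the anonymized export described above
train = pd.read_csv('propensity_to_buy.csv')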
# View the first few rows
# What are the columns
# What are the column types?
# How many observations are there?
# View summary of the raw data
# Check for missing values. If they exist, treat them
Explanation: 3. Refine
End of explanation
# Single variate analysis
# histogram of target variable
# Bi-variate analysis
Explanation: 4. Explore
End of explanation
# encode the categorical variables
Explanation: 5. Transform
End of explanation
# Create train-test dataset
# Build decision tree model - depth 2
# Find accuracy of model
# Visualize decision tree
# Build decision tree model - depth none
# find accuracy of model
# Build random forest model
# Find accuracy model
# Bonus: Do cross-validation
Explanation: 6. Model
End of explanation |
3,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center"></h1>
<h1 align="center">Problem 1 - Building an HBM</h1>
<h2 align="center">Hierarchical Bayesian Modeling of a Truncated Gaussian Population with PyStan and PyJAGS</h2>
<div align="center">Simulating the distribution of "$\boldsymbol{e \cos \omega}$" for hot Jupiter exoplanets from <i>Kepler</i>. These simulated exoplanet candiates are potentially undergoing migration and orbital circularization. </div>
<h4 align="center">LSSTC DSFP Session 4, September 21st, 2017</h4>
<h5 align="center">Author
Step1: <h3 align="left"> 1. Generate simulated data set from a truncated Gaussian generative model</h3>
<div>Below we designate the population values of our generative model. These are the truths that we should recover if our hiararchical Bayesian model is properly specified and diagnostics have indicated that the simulation has "not-not converged". "You can't
prove convergence, at best you can fail to prove a failure to converge".</div>
<h4>Set your one-component truncated Gaussian mixture simulated data hyperparameters
Step2: <div><p>Eccentricity cannot be greater than 1 physically (or it means an unbound orbit) and that is not the case for planets observed to both transit and occult more than once.
This creates a situation where the distribution for the projected $\boldsymbol{e \cos \omega}$ and $\boldsymbol{e \sin \omega}$ must be truncated between 1 and -1, rendering this problem with no analytical solution, and requaring numerical simulation of the posterior probability.</p>
<p>In this first notebook, we will ignore $\boldsymbol{e \sin \omega}$, and set up our model using only simulated $\boldsymbol{e \cos \omega}$. This allows us to start with as simple as a model as possible for pedagogical purposes and for workflow. A workflow that builds in model complexity is approproate for exploratory science, and it this case, it is often helpful for testing for mathematical artifacts that can arrise for your combination of MCMC method and your generative model. A goal is to find a model that performs well for a variety of possibilies that could be occuring in nature (this is part of our science question). We will see this unfold in the following notebooks in this series.</p><div>
Step3: <h4>Next we generate our true values and our simulated measurements
Step5: <h3 align="left"> 2. Create PyStan truncated Gaussian hierarchical Bayesian model. </h3>
Run the code block below to pre-compile the Stan model.
Step6: Where in the Stan model above where the truncation is indicated?
Step7: <h3 align="left"> 3. Perform diagnostics to asses if the model shows signs of having not converged. </h3>
Step8: From the diagnostics above, what can you tell about the status of this MCMC numerical simulation?
Step9: <div> <p>Below is code to look at <b>summary statistics</b> of <b>the marginal posterior distributions</b> (the probabilistic parameter estimates)
for the hyperparameter and the latent variables
(each population constituent), in this case <i>h</i> (a.k.a one term of the projected eccentricity here), of the exoplanet systems we are simulating). </p> </div>
NOTES
Step10: Overplot the true value and the measurement uncertainy with the marginal posterior.
Step11: Plot draws from the posterior to summarize the population distribution
Step12: <h3 align="left"> 4. Complete problems 2 and 3 on the same generative model simulated data set above now using PyJAGS.</h3>
Step13: How is the truncation specified in the JAGS model above?
Step14: <div>
<h4>Data wrangling notes
Step15: <div> <p>Below is code to look at <b>summary statistics</b> of <b>the marginal posterior distributions</b> (the probabilistic parameter estimates)
for the hyperparameter and the latent variables
(each population constituent), in this case <i>h</i> and <i>k</i> (a.k.a projected eccentricity here), of the exoplanet systems we are simulating). </p> </div> | Python Code:
import numpy as np
import scipy.stats as stats
import pandas as pd
import matplotlib.pyplot as plt
import pyjags
import pystan
import pickle
import triangle_linear
from IPython.display import display, Math, Latex
from __future__ import division, print_function
from pandas.tools.plotting import *
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["font.size"] = 20
#plt.style.use('ggplot')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
#%qtconsole
Explanation: <h1 align="center"></h1>
<h1 align="center">Problem 1 - Building an HBM</h1>
<h2 align="center">Hierarchical Bayesian Modeling of a Truncated Gaussian Population with PyStan and PyJAGS</h2>
<div align="center">Simulating the distribution of "$\boldsymbol{e \cos \omega}$" for hot Jupiter exoplanets from <i>Kepler</i>. These simulated exoplanet candiates are potentially undergoing migration and orbital circularization. </div>
<h4 align="center">LSSTC DSFP Session 4, September 21st, 2017</h4>
<h5 align="center">Author: Megan I. Shabram, PhD,
NASA Postdoctoral Program Fellow, [email protected]</h5>
<div><p>In this Hierachical Bayesian Model workbook, we will begin by using <a href="http://mc-stan.org/users/documentation/case-studies.html">Stan</a>, a <a href="https://en.wikipedia.org/wiki/Hybrid_Monte_Carlo">Hamiltonian Monte Carlo method </a>. Here, PyStan is used to obtain a sample from the full posterior distribution of a truncated Gaussian population model, where the measurement uncertainty of the population constituents are Normally distributed. The truncation renders this HBM with no analytical solution, thus requiring a Monte Carlo numerical approximation technique. After we investigate using Stan, we will then explore the same problem using <a href="https://martynplummer.wordpress.com/2016/01/11/pyjags/"> JAGS</a> a <a href="https://en.wikipedia.org/wiki/Gibbs_sampling">Gibbs sampling</a> method (that reverts to Metropolis-Hastings when necessary). The simulated data sets generated below are modeled after the projected eccentricity obtained for exoplanet systems that both transit and occult their host star (see Shabram et al. 2016)</p><div>
<h3 align="left"> Goals of this series of notebooks:</h3>
- Learn how to use a computing infrastructre that allows you to carry out future HBM research projects.
- Gain a sense for the workflow involved in setting up an HBM, in particular, using key diagnostics and simulated data.
- Practice data wrangling with pandas data frames and numpy dictionaries.
- Understand the relationship between data quantity, quality, number of iterations, and analysis model complexity. The term analysis model is also refered to as a generative model, a population model, or sometimes a shape function.
- Learn how to run HBM on datasets with missing values using JAGS.<br />
<b>Later:</b>
- Notebook 2: model mispecification and regularization, (running Nm2 data through Nm1 model)
- Notebook 3: Break point model on eclipsing binary data.
<div>The additional software you will need to install:
<blockquote>PyJAGS<br />
PyStan</blockquote>
I have also include two codes in the folder
<blockquote>triangle_linear.py<br />
credible_interval.py</blockquote></div>
End of explanation
## In this simulated data set, there are "Ndata" planetary systems (with one planet each)
Ndata = 25
## Here we asign the dispersion of the simulated population to be 0.3, this is
## the truth we wish to recover when we run our diagnostics. In other words, this is the
## spread of projected eccentricities about the mean (In this case the values are symteric
## about zero, so the mean of the population is zero, and we can treat this paramter as
## known, simplyfying our model based on our science question!)
sigmae = 0.3
## We approximate the uncertainty for each measurement as normally distributed about a
## reported measurement point estimate, or a summary statistics for a posterior estimate.
sigmahobs = 0.04
Explanation: <h3 align="left"> 1. Generate simulated data set from a truncated Gaussian generative model</h3>
<div>Below we designate the population values of our generative model. These are the truths that we should recover if our hierarchical Bayesian model is properly specified and diagnostics have indicated that the simulation has "not-not converged". "You can't
prove convergence, at best you can fail to prove a failure to converge".</div>
<h4>Set your one-component truncated Gaussian mixture simulated data hyperparameters:</h4>
End of explanation
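Written out, the generative model simulated below is (in the code's notation, $\sigma_e$ = sigmae and $\sigma_{obs}$ = sigmahobs are standard deviations, and the subscript denotes truncation to [-1, 1]):
$$h_i \sim \mathcal{N}_{[-1,1]}(0,\ \sigma_e), \qquad \hat{h}_i \sim \mathcal{N}_{[-1,1]}(h_i,\ \sigma_{obs})$$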
## function to draw from truncated normal, this function will be used for both the
## one- and two-componenet cases in this series of workbooks.
def rnorm_bound( Ndata, mu, prec, lower_bound = 0.0, upper_bound = float('Inf')):
x = np.zeros(Ndata)
# for i in range(0, ### Complete ### ):
for i in range(0, Ndata ):
#print(i)
while True:
x[i] = np.random.normal(mu,prec,1)
if( (x[i]>lower_bound) and (x[i]<upper_bound) ):
break
return x;
Explanation: <div><p>Eccentricity cannot be greater than 1 physically (or it means an unbound orbit) and that is not the case for planets observed to both transit and occult more than once.
This creates a situation where the distribution for the projected $\boldsymbol{e \cos \omega}$ and $\boldsymbol{e \sin \omega}$ must be truncated between -1 and 1, leaving this problem with no analytical solution and requiring numerical simulation of the posterior probability.</p>
<p>In this first notebook, we will ignore $\boldsymbol{e \sin \omega}$, and set up our model using only simulated $\boldsymbol{e \cos \omega}$. This allows us to start with as simple a model as possible for pedagogical purposes and for workflow. A workflow that builds in model complexity is appropriate for exploratory science, and in this case, it is often helpful for testing for mathematical artifacts that can arise for your combination of MCMC method and your generative model. A goal is to find a model that performs well for a variety of possibilities that could be occurring in nature (this is part of our science question). We will see this unfold in the following notebooks in this series.</p></div>
End of explanation
h = np.repeat(0.,Ndata) ## Simuated true values of h = e cos w
hhat = np.repeat(0.,Ndata) ## Simulated measurements of h
hhat_sigma = np.repeat(sigmahobs,Ndata) ## measurement uncertainty summary statics
## Note: in this simulated data set, the measurement uncertainties for each projected eccentricity are
## set to be the same. You can use the real heteroscadastic uncertainties from your
## real data set when testing with simulated data in the future.
for i in range(0,Ndata):
h[i] = rnorm_bound(1,0,sigmae,lower_bound=-1,upper_bound=1)
hhat[i] = rnorm_bound(1,h[i],sigmahobs,lower_bound=-1,upper_bound=1)
## Vizualize the true data values, and the simulated measurements:
print(h, hhat)
plt.hist( h, label='h')
#plt.hist( ##Complete, label='h')
plt.hist(hhat, label='hhat')
#plt.hist( ##Complete, label='hhat')
plt.legend()
Explanation: <h4>Next we generate our true values and our simulated measurements:</h4>
End of explanation
eccmodel = """
data {
int<lower=1> Ndata;
real<lower=-1,upper=1> hhat[Ndata];
real<lower=0,upper=1> hhat_sigma[Ndata];
}
parameters {
real<lower=0> e_sigma;
real<lower=-1,upper=1> h[Ndata];
}
model {
e_sigma ~ uniform(0, 1.0);
for (n in 1:Ndata)
hhat[n] ~ normal(h[n], hhat_sigma[n]);
for (n in 1:Ndata)
increment_log_prob(normal_log(h[n], 0.0, e_sigma));
}
"""
# Compiled Stan Model
sm = pystan.StanModel(model_code=eccmodel)
Explanation: <h3 align="left"> 2. Create PyStan truncated Gaussian hierarchical Bayesian model. </h3>
Run the code block below to pre-compile the Stan model.
End of explanation
ecc_dat = {'Ndata': len(hhat), 'hhat': hhat, 'hhat_sigma': hhat_sigma}
iters = 10000
fit = sm.sampling(data=ecc_dat, iter=iters, chains=4, seed=48389, refresh=1000, n_jobs=-1)
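## (Hedged aside) A quick programmatic convergence check, assuming the PyStan 2 summary() API;
## Rhat values close to 1 (e.g. below ~1.1) for every parameter are consistent with convergence:
# s = fit.summary()
# rhat = s['summary'][:, list(s['summary_colnames']).index('Rhat')]
# print('max Rhat =', rhat.max())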
Explanation: Where in the Stan model above is the truncation indicated?
Run the code cell below to carry out the HMC numerical simulation of the Stan model that was compiled above.
End of explanation
print(fit)
fig = fit.traceplot()
Explanation: <h3 align="left"> 3. Perform diagnostics to asses if the model shows signs of having not converged. </h3>
End of explanation
la = fit.extract(permuted=True) # returns a dictionary of arrays
a = fit.extract(permuted=False)
e_sigma = la['e_sigma'].reshape(4,-1) # there are 4 chains here, hence the 4, -1
print(e_sigma.shape)
#print(e_sigma.reshape(4,5000))
## Wrangle the dictionary of samples from the posterior:
num_chains = 4
df = pd.DataFrame(la.items())
print(df)
samples_Stan = dict(zip(df[0], df[1].values ))
#print(samples.items())
print(type(samples_Stan['e_sigma']))
print(samples_Stan.items())
#for j, i in samples_Stan.items():
# print(the keys)
# print(i.shape)
## Print and check the shape of the resultant samples dictionary:
#print(samples)
#print(samples.items())
print('-----')
print(samples_Stan['e_sigma'].shape)
print(samples_Stan['h'].shape)
print(samples_Stan['h'][:,3].shape)
print('-----')
## Update the samples dictionary so that it includes keys for the latent variables
## Also, we will use LaTeX formatting to help make legible plots ahead.
samples_Nm1_Stan = {}
## adjust the thin variable to only look at every #th population element
thin1 = 1
## Need to enter the number of hyperparameter variables here:
numHyperParams = 1
## Specify the dimension we want for our plot below, for legibility.
dim1 = (Ndata/thin1) + numHyperParams
print(dim1)
for i in np.arange(0,Ndata,thin1):
samples_Nm1_Stan.update({'$h_{'+str(i+1)+'}$': samples_Stan['h'][:,i]})
## Add the hyperparameter marginal posterior back in:
samples_Nm1_Stan.update({'$e_{\sigma}$': samples_Stan['e_sigma']})
print(samples_Nm1_Stan['$h_{5}$'].shape)
## Reshape values for diagnostic plot functions (trace, autocorrelation) below:
samples_Nm1_trace_Stan = {}
for j, i in samples_Nm1_Stan.items():
samples_Nm1_trace_Stan.update({str(j): i.reshape(int(len(i)/num_chains),-1)})
Explanation: From the diagnostics above, what can you tell about the status of this MCMC numerical simulation?
End of explanation
## equal-tailed credible intervals and posterior distribution means (note: the [100-p, p]
## percentiles used below span a 2p-100 % interval, i.e. 90% for the default p=95):
def summary(samples, varname, p=95):
values = samples[varname][0]
ci = np.percentile(values, [100-p, p])
print('{:<6} mean = {:>5.1f}, {}% credible interval [{:>4.1f} {:>4.1f}]'.format(
varname, np.mean(values), p, *ci))
for varname in samples_Nm1_trace_Stan:
summary(samples_Nm1_trace_Stan, varname)
## Use pandas three dimensional Panel to represent the trace:
## There is some wait time involved
trace_1_Stan = pd.Panel({k: v for k, v in samples_Nm1_trace_Stan.items()})
trace_1_Stan.axes[0].name = 'Variable'
trace_1_Stan.axes[1].name = 'Iteration'
trace_1_Stan.axes[2].name = 'Chain'
## Point estimates:
print(trace_1_Stan.to_frame().mean())
## Bayesian equal-tailed 95% credible intervals:
print(trace_1_Stan.to_frame().quantile([0.05, 0.95]))
## ^ entering the values here could be a good question part
def plot(trace, var):
fig, axes = plt.subplots(1, 3, figsize=(9, 3))
fig.suptitle(var, y=0.95, fontsize='xx-large')
## Marginal posterior density estimate:
trace[var].plot.density(ax=axes[0])
axes[0].set_xlabel('Parameter value')
axes[0].locator_params(tight=True)
## Autocorrelation for each chain:
axes[1].set_xlim(0, 100)
for chain in trace[var].columns:
autocorrelation_plot(trace[var,:,chain], axes[1], label=chain)
## Trace plot:
axes[2].set_ylabel('Parameter value')
trace[var].plot(ax=axes[2])
## Save figure
filename = var.replace("\\", "")
filename = filename.replace("$", "")
filename = filename.replace("}", "")
filename = filename.replace("{", "")
plt.tight_layout(pad=3)
    #fig.savefig('Nm1_Stan_{}.png'.format(filename))
# Display diagnostic plots
for var in trace_1_Stan:
plot(trace_1_Stan, var)
## Scatter matrix plot:
## If Ndata is increased, this scatter_matrix might do better to skip every nth latent variable
## Code for that is in a future code cell, where we make a triangle plot with KDE for
## joint 2D marginals. scatter_matrix is a Python Pandas module.
sm = scatter_matrix(trace_1_Stan.to_frame(), color="#00BFFF", alpha=0.2, figsize=(dim1*2, dim1*2), diagonal='hist',hist_kwds={'bins':25,'histtype':'step', 'edgecolor':'r','linewidth':2})
## y labels size
[plt.setp(item.yaxis.get_label(), 'size', 20) for item in sm.ravel()]
## x labels size
[plt.setp(item.xaxis.get_label(), 'size', 20) for item in sm.ravel()]
## Change label rotation. This is helpful for very long labels
#[s.xaxis.label.set_rotation(45) for s in sm.reshape(-1)]
[s.xaxis.label.set_rotation(0) for s in sm.reshape(-1)]
[s.yaxis.label.set_rotation(0) for s in sm.reshape(-1)]
## May need to offset label when rotating to prevent overlap of figure
[s.get_yaxis().set_label_coords(-0.5,0.5) for s in sm.reshape(-1)]
## Hide all ticks
#[s.set_xticks(()) for s in sm.reshape(-1)]
#[s.set_yticks(()) for s in sm.reshape(-1)]
## Save the figure as .png
plt.savefig('scatter_matrix_Nm1_Stan.png')
## Redefine the trace so that we only visualize every #th latent variable element in
## the scatter_matrix plot below. Visualizing all Ndata is too cumbersome for the scatter
## matrix.
## adjust the thin variable to only look at every #th population element
##Ndata is 25 use
thin = 5
##Ndata is 20 use
#thin = 4
## Re-specify the dimension we want for our plot below, with thinning, for legibility.
dim2 = (Ndata/thin) + numHyperParams
print(dim2)
samples_Nm1_triangle_1_Stan = {}
truths_hhat = {}
truths_h = {}
for i in np.arange(0,Ndata,thin):
samples_Nm1_triangle_1_Stan.update({'$h_{'+str(i+1)+'}$': samples_Stan['h'][:,i]})
truths_hhat.update({'$h_{'+str(i+1)+'}$': hhat[i]})
truths_h.update({'$h_{'+str(i+1)+'}$': h[i]})
samples_Nm1_triangle_1_Stan.update({'$e_{\sigma}$': samples_Stan['e_sigma']})
truths_hhat.update({'$e_{\sigma}$': sigmae})
truths_h.update({'$e_{\sigma}$': sigmae})
## Code below is to reshape values for diagnostic plot functions (trace, autocorrelation, scatter_matrix):
#samples_Nm1_triangle_trace_Stan = {}
#for j, i in samples_Nm1_triangle_Stan.items():
# samples_Nm1_triangle_trace_Stan.update({str(j): i.reshape(int(len(i)/num_chains),-1)})
#
#print(list(samples_Nm1_scatter_matrix_trace.items()))
#print(list(samples_Nm1_scatter_matrix_trace['$h_{6}$'][0]))
#trace_2.to_frame()
print(samples_Nm1_triangle_1_Stan.keys())
print(5000*num_chains)
print(int(dim2))
print(truths_hhat.values())
#data = np.asarray(samples_Nm1_scatter_matrix_triangle.values()).reshape((5000*num_chains),int(dim2))
#print(np.asarray(samples_Nm1_scatter_matrix_triangle.values()).reshape((5000*num_chains),int(dim2)).shape)
samples_Nm1_triangle_2_Stan = {}
for j, i in samples_Nm1_triangle_1_Stan.items():
samples_Nm1_triangle_2_Stan.update({str(j): i.reshape(-1,1)})
data = None
for k, v in samples_Nm1_triangle_2_Stan.items():
column = v.reshape(-1,1)
if data is None:
data = column
else:
data = np.hstack((data, column))
print(data.shape)
figure = triangle_linear.corner(data,labels=samples_Nm1_triangle_2_Stan.keys(),labelsy=samples_Nm1_triangle_2_Stan.keys(), truths=truths_h.values(), truths_color = 'black')
#plt.savefig('triangle_linear_Nm1_Stan.png')
Explanation: <div> <p>Below is code to look at <b>summary statistics</b> of <b>the marginal posterior distributions</b> (the probabilistic parameter estimates)
for the hyperparameter and the latent variables
(each population constituent), in this case <i>h</i> (a.k.a. one term of the projected eccentricity), of the exoplanet systems we are simulating. </p> </div>
NOTES: Wikipedia article for credible interval
"Choosing the interval where the probability of being below the interval is as likely as being above it. This interval will include the median. This is sometimes called the equal-tailed interval."
End of explanation
def plot_2(Ndata):
for i in np.arange(0,Ndata):
fig = plt.figure(figsize=(8,5))
x = np.arange(-1,1, 0.001)
fit = stats.norm.pdf(x, h[i], sigmahobs)
plt.plot([hhat[i],hhat[i]], [0,10],label=r'$\hat{h_{'+str(i)+'}}$ (measured)')
plt.plot([h[i],h[i]], [0,10],label=r'$h_{'+str(i)+'}$ (true value)')
plt.plot(x,fit,'-',label=r'$N(h_{'+str(i)+'},\sigma_{\hat{h_{'+str(i)+'}}})$')
plt.hist(samples_Stan['h'][:,i],histtype='step', bins=50, normed=True,label=r'Marginal Posterior of $h_{'+str(i)+'}$')
plt.xlim(-0.7,0.7)
plt.ylim(0,13)
plt.legend()
plt.text(-.49, 12,'variance of posterior = '+str(np.var(samples_Stan['h'][:,i])))
plt.text(-.49, 11.5,'variance of observation = '+str(sigmahobs**2))
plt.tight_layout(pad=3)
fig.savefig('Nm1_latent_var_h_Stan'+str(i)+'.png')
plot_2(Ndata)
Explanation: Overplot the true value and the measurement uncertainty with the marginal posterior.
End of explanation
## take some values of e_sigma from the posterior of e_sigma, and plot the population
## distribution for these values. These should be normal distributions centered at 0.0 with dispersions that are drawn from e_sigma.
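## (Hedged sketch of one way to do this; the number of draws and the plot styling are illustrative)
# xgrid = np.arange(-1, 1, 0.001)
# for s in np.random.choice(samples_Stan['e_sigma'].ravel(), size=20, replace=False):
#     plt.plot(xgrid, stats.norm.pdf(xgrid, 0.0, s), color='grey', alpha=0.3)
# plt.xlabel('h = e cos w')
# plt.ylabel('population density')
# plt.show()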
Explanation: Plot draws from the posterior to summarize the population distribution:
End of explanation
## JAGS user manual:
## http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/Manuals/manual.jags.pdf
## JAGS model code
code = '''
model {
#Population parameters
e_sigma ~ dunif(0.0, 1.0)
e_phi <- 1/(e_sigma*e_sigma)
for (n in 1:Ndata){
#True planet properties
h[n] ~ dnorm(0, e_phi) T(-1,1) #Can try multivariate truncated normal in future
#Observed planet properties
hhat[n] ~ dnorm(h[n], 1.0/(hhat_sigma[n]*hhat_sigma[n])) T(-1,1)
}
}
'''
Explanation: <h3 align="left"> 4. Complete problems 2 and 3 on the same generative model simulated data set above now using PyJAGS.</h3>
End of explanation
## Load additional JAGS module
pyjags.load_module('glm')
pyjags.load_module('dic')
## See blog post for origination of the adapted analysis tools used here and below:
## https://martynplummer.wordpress.com/2016/01/11/pyjags/
num_chains = 4
iterations = 10000
## data list include only variables in the model
model = pyjags.Model(code, data=dict( Ndata=Ndata, hhat=hhat,
hhat_sigma=hhat_sigma),
chains=num_chains, adapt=1000)
## Code to speed up compute time. This feature might not be
## well tested in pyjags at this time.
## threads=4, chains_per_thread=1
## 500 warmup / burn-in iterations, not used for inference.
model.sample(500, vars=[])
## Run model for desired steps, monitoring hyperparameter variables, and latent variables
## for hierarchical Bayesian model.
## Returns a dictionary with numpy array for each monitored variable.
## Shapes of returned arrays are (... shape of variable ..., iterations, chains).
## samples = model.sample(#iterations per chain here, vars=['e_sigma', 'h'])
samples_JAGS = model.sample(iterations, vars=['e_sigma', 'h'])
## Code to save, open and use pickled dictionary of samples:
## -- Pickle the data --
#with open('ecc_1_test.pkl', 'wb') as handle:
# pickle.dump(samples, handle)
## -- Retrieve pickled data --
#with open('ecc_1_test.pkl', 'rb') as handle:
# retrieved_results = pickle.load(handle)
Explanation: How is the truncation specified in the JAGS model above?
End of explanation
#print(samples)
#print(samples.items())
## Print and check the shape of the resultant samples dictionary:
print(samples_JAGS['e_sigma'].shape)
print(samples_JAGS['e_sigma'].squeeze(0).shape)
print(samples_JAGS['h'].shape)
print(samples_JAGS['h'][0,:,:].shape)
print('-----')
## Update the samples dictionary so that it includes keys for the latent variables
## Also, we will use LaTeX formatting to help make legible plots ahead.
samples_Nm1_JAGS = {}
## adjust the thin variable to only look at every 10th population element by setting it to 10
thin = 1
## Need to enter the number of hyperparameter variables here:
numHyperParams = 1
## Specify the dimension we want for our plot below, for legibility.
## (The *2 below appears to be a leftover from a two-parameter (h and k) version of this model; it only affects figure size.)
dim = (Ndata/thin)*2 + numHyperParams
print(dim)
for i in np.arange(0,Ndata,thin):
#samples_Nm1({hval: samples['h'][i,:,:]})
samples_Nm1_JAGS.update({'$h_{'+str(i)+'}$': samples_JAGS['h'][i,:,:]})
#print(samples_2['h11'].shape)
## Add the hyperparameter marginal posterior back in:
samples_Nm1_JAGS.update({'$e_{\sigma}$': samples_JAGS['e_sigma'].squeeze(0)})
## Below, examine the updated and reformatted sample dictionary to include keys for
## latent variables
for j, i in samples_Nm1_JAGS.items():
print(j)
print(i)
samples_Nm1_JAGS['$h_{5}$'][0]
Explanation: <div>
<h4>Data wrangling notes:</h4>
<p>The pyjags output is a dictionary type (above, this is assigned to the "samples" variable). Here, the keys are <i>e_sigma</i> and <i>h</i> (from the one-component analysis model used above). Furthermore, <br />
<blockquote> e_sigma has shape (1,10000,4), and<br />
h has shape (25,10000,4).<br />
</blockquote>
The way to access the latent variable marginal posteriors before updating the samples dictionary as we do below, you would use:
<blockquote>
h1 = samples['h'][0,:,:], h2 = samples['h'][1,:,:], ... hn = samples['h'][n-1,:,:] <br />
</blockquote>
for n population constituents. </p>
<p>Now, we need to add keys for each latent variable into the dictionary.
The keys we want here are:<br />
<blockquote> e_sigma (the population parameter here), and<br />
h1, ..., h25 for the latent variables.<br />
</blockquote> </p>
<p>Our resulting dictionary elements will have the following shapes:
<blockquote> e_sigma will have shape (10000,4)<br />
h1 will have shape (10000,4)<br />
.<br />
.<br />
.<br />
h25 will have shape (10000,4)<br />
for a data set with 25 population constituents. </p>
</div>
End of explanation
## equal-tailed credible intervals and posterior distribution means (note: the [100-p, p]
## percentiles used below span a 2p-100 % interval, i.e. 90% for the default p=95):
def summary(samples_JAGS, varname, p=95):
values = samples_JAGS[varname][0]
ci = np.percentile(values, [100-p, p])
print('{:<6} mean = {:>5.1f}, {}% credible interval [{:>4.1f} {:>4.1f}]'.format(
varname, np.mean(values), p, *ci))
for varname in samples_Nm1_JAGS:
summary(samples_Nm1_JAGS, varname)
## Use pandas three dimensional Panel to represent the trace:
trace = pd.Panel({k: v for k, v in samples_Nm1_JAGS.items()})
trace.axes[0].name = 'Variable'
trace.axes[1].name = 'Iteration'
trace.axes[2].name = 'Chain'
## Point estimates:
print(trace.to_frame().mean())
## Bayesian equal-tailed 95% credible intervals:
print(trace.to_frame().quantile([0.05, 0.95]))
## ^ entering the values here could be a good question part
def plot(trace, var):
fig, axes = plt.subplots(1, 3, figsize=(9, 3))
fig.suptitle(var, y=0.95, fontsize='xx-large')
## Marginal posterior density estimate:
trace[var].plot.density(ax=axes[0])
axes[0].set_xlabel('Parameter value')
axes[0].locator_params(tight=True)
## Autocorrelation for each chain:
axes[1].set_xlim(0, 100)
for chain in trace[var].columns:
autocorrelation_plot(trace[var,:,chain], axes[1], label=chain)
## Trace plot:
axes[2].set_ylabel('Parameter value')
trace[var].plot(ax=axes[2])
## Save figure
filename = var.replace("\\", "")
filename = filename.replace("$", "")
filename = filename.replace("}", "")
filename = filename.replace("{", "")
plt.tight_layout(pad=3)
fig.savefig('Nm1_JAGS_'+'{}.png'.format(filename))
# Display diagnostic plots
for var in trace:
plot(trace, var)
## Scatter matrix plot:
## Redefine the trace so that we only visualize every 10th latent variable element in
## the scatter_matrix plot below. Visualizing all Ndata is too cumbersome for the scatter
## matrix.
samples_Nm1_for_scatter_matrix = {}
start = int(iters-1000)
## adjust the thin variable to only look at every 10th population element by setting it to 10
thin = 1
numHyperParams = 1
dim = (Ndata/thin)*2 + numHyperParams
print(dim)
for i in np.arange(0,Ndata,thin):
samples_Nm1_for_scatter_matrix.update({'$h_{'+str(i+1)+'}$': samples_JAGS['h'][i,start::,:]})
samples_Nm1_for_scatter_matrix.update({'$e_{\sigma}$': samples_JAGS['e_sigma'].squeeze(0)})
for j, i in samples_Nm1_for_scatter_matrix.items():
print(j)
# print(i)
trace_2 = pd.Panel({k: v for k, v in samples_Nm1_for_scatter_matrix.items()})
sm = scatter_matrix(trace_2.to_frame(), color="darkturquoise", alpha=0.2, figsize=(dim*2, dim*2), diagonal='hist',hist_kwds={'bins':25,'histtype':'step', 'edgecolor':'r','linewidth':2})
## y labels size
[plt.setp(item.yaxis.get_label(), 'size', 20) for item in sm.ravel()]
## x labels size
[plt.setp(item.xaxis.get_label(), 'size', 20) for item in sm.ravel()]
## Change label rotation
## This is helpful for very long labels
#[s.xaxis.label.set_rotation(45) for s in sm.reshape(-1)]
[s.xaxis.label.set_rotation(0) for s in sm.reshape(-1)]
[s.yaxis.label.set_rotation(0) for s in sm.reshape(-1)]
## May need to offset label when rotating to prevent overlap of figure
[s.get_yaxis().set_label_coords(-0.5,0.5) for s in sm.reshape(-1)]
## Hide all ticks
#[s.set_xticks(()) for s in sm.reshape(-1)]
#[s.set_yticks(()) for s in sm.reshape(-1)]
plt.savefig('scatter_matrix_Nm1_JAGS.png')
## Redefine the trace so that we only visualize every #th latent variable element in
## the scatter_matrix plot below. Visualizing all Ndata is too cumbersome for the scatter
## matrix.
## adjust the thin variable to only look at every #th population element
##Ndata is 25 use
thin = 5
##Ndata is 20 use
#thin = 4
## Re-specify the dimension we want for our plot below, with thinning, for legibility.
dim2 = (Ndata/thin) + numHyperParams
print(dim2)
samples_Nm1_triangle_1_JAGS = {}
truths_hhat = {}
truths_h = {}
for i in np.arange(0,Ndata,thin):
    samples_Nm1_triangle_1_JAGS.update({'$h_{'+str(i+1)+'}$': samples_JAGS['h'][i,:,:]})  ## JAGS 'h' is indexed [constituent, iteration, chain]
truths_hhat.update({'$h_{'+str(i+1)+'}$': hhat[i]})
truths_h.update({'$h_{'+str(i+1)+'}$': h[i]})
samples_Nm1_triangle_1_JAGS.update({'$e_{\sigma}$': samples_JAGS['e_sigma'].squeeze(0)})
truths_hhat.update({'$e_{\sigma}$': sigmae})
truths_h.update({'$e_{\sigma}$': sigmae})
## Reshape values for diagnostic plot functions (trace, autocorrelation, scatter_matrix):
#samples_Nm1_triangle_trace_Stan = {}
#for j, i in samples_Nm1_triangle_Stan.items():
# samples_Nm1_triangle_trace_Stan.update({str(j): i.reshape(int(len(i)/num_chains),-1)})
#
#print(list(samples_Nm1_scatter_matrix_trace.items()))
#print(list(samples_Nm1_scatter_matrix_trace['$h_{6}$'][0]))
#data = np.asarray(samples_Nm1_scatter_matrix_triangle.values()).reshape((5000*num_chains),int(dim2))
#print(np.asarray(samples_Nm1_scatter_matrix_triangle.values()).reshape((5000*num_chains),int(dim2)).shape)
## assemble the JAGS samples into a 2D array for the corner plot
samples_Nm1_triangle_2_JAGS = {}
for j, i in samples_Nm1_triangle_1_JAGS.items():
    samples_Nm1_triangle_2_JAGS.update({str(j): i.reshape(-1,1)})
data = None
for k, v in samples_Nm1_triangle_2_JAGS.items():
    column = v.reshape(-1,1)
    if data is None:
        data = column
    else:
        data = np.hstack((data, column))
print(data.shape)
figure = triangle_linear.corner(data,labels=samples_Nm1_triangle_2_JAGS.keys(),labelsy=samples_Nm1_triangle_2_JAGS.keys(), truths=truths_h.values(), truths_color = 'black')
plt.savefig('triangle_linear_Nm1_JAGS.png')
def plot_2(Ndata):
for i in np.arange(0,Ndata):
fig = plt.figure(figsize=(8,5))
x = np.arange(-1,1, 0.001)
fit = stats.norm.pdf(x, h[i], sigmahobs)
plt.plot([hhat[i],hhat[i]], [0,10],label=r'$\hat{h_{'+str(i)+'}}$ (measured)')
plt.plot([h[i],h[i]], [0,10],label=r'$h_{'+str(i)+'}$ (true value)')
plt.plot(x,fit,'-',label=r'$N(h_{'+str(i)+'},\sigma_{\hat{h_{'+str(i)+'}}})$')
        plt.hist(samples_JAGS['h'][i,:,:].flatten(),histtype='step', bins=50, normed=True,label=r'Marginal Posterior of $h_{'+str(i)+'}$')  ## use the JAGS samples here
plt.xlim(-0.7,0.7)
plt.ylim(0,13)
plt.legend()
        plt.text(-.49, 12,'variance of posterior = '+str(np.var(samples_JAGS['h'][i,:,:])))
plt.text(-.49, 11.5,'variance of observation = '+str(sigmahobs**2))
plt.tight_layout(pad=3)
fig.savefig('Nm1_latent_var_h_JAGS'+str(i)+'.png')
plot_2(Ndata)
Explanation: <div> <p>Below is code to look at <b>summary statistics</b> of <b>the marginal posterior distributions</b> (the probabilistic parameter estimates)
for the hyperparameter and the latent variables
(each population constituent), in this case <i>h</i> (a.k.a. one term of the projected eccentricity), of the exoplanet systems we are simulating. </p> </div>
End of explanation |
3,831 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
What is the quickest way to convert the non-diagonal elements of a square symmetrical numpy ndarray to 0? I don't wanna use LOOPS! | Problem:
import numpy as np
a = np.array([[1,0,2,3],[0,5,3,4],[2,3,2,10],[3,4, 10, 7]])
result = np.einsum('ii->i', a)
save = result.copy()
a[...] = 0
result[...] = save |
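# The einsum('ii->i', a) call above returns a *writable view* of the diagonal, which is what
# lets the save / zero / restore trick work in place. A hedged alternative sketch that builds
# a new array instead of modifying a in place (name is illustrative):
# result_masked = np.where(np.eye(a.shape[0], dtype=bool), a, 0)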
3,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fungal ITS QIIME analysis tutorial
In this tutorial we illustrate steps for analyzing fungal ITS amplicon data using the QIIME/UNITE reference OTUs (alpha version 12_11) to compare the composition of 9 soil communities using open-reference OTU picking. More recent ITS reference databases based on UNITE are available on the QIIME resources page. The steps in this tutorial can be generalized to work with other marker genes, such as 18S.
We recommend working through the Illumina Overview Tutorial before working through this tutorial, as it provides more detailed annotation of the steps in a QIIME analysis. This tutorial is intended to highlight the differences that are necessary to work with a database other than QIIME's default reference database. For ITS, we won't build a phylogenetic tree and therefore use nonphylogenetic diversity metrics. Instructions are included for how to build a phylogenetic tree if you're sequencing a non-16S, phylogenetically-informative marker gene (e.g., 18S).
First, we obtain the tutorial data and reference database
Step1: Now unzip these files.
Step2: You can then view the files in each of these direcories by passing the directory name to the FileLinks function.
Step3: The params.txt file modifies some of the default parameters of this analysis. You can review those by clicking the link or by catting the file.
Step4: The parameters that differentiate ITS analysis from analysis of other amplicons are the two assign_taxonomy parameters, which are pointing to the reference collection that we just downloaded.
We're now ready to run the pick_open_reference_otus.py workflow. Discussion of these methods can be found in Rideout et. al (2014).
Note that we pass -r to specify a non-default reference database. We're also passing --suppress_align_and_tree because we know that trees generated from ITS sequences are generally not phylogenetically informative.
Step5: Note
Step6: You can then pass the OTU table to biom summarize-table to view a summary of the information in the OTU table.
Step7: Next, we run several core diversity analyses, including alpha/beta diversity and taxonomic summarization. We will use an even sampling depth of 353 based on the results of biom summarize-table above. Since we did not built a phylogenetic tree, we'll pass the --nonphylogenetic_diversity flag, which specifies to compute Bray-Curtis distances instead of UniFrac distances, and to use only nonphylogenetic alpha diversity metrics.
Step8: You may see a warning issued above; this is safe to ignore.
Note
Step9: Precomputed results
In case you're having trouble running the steps above, for example because of a broken QIIME installation, all of the output generated above has been precomputed. You can access this by running the cell below. | Python Code:
!(wget ftp://ftp.microbio.me/qiime/tutorial_files/its-soils-tutorial.tgz || curl -O ftp://ftp.microbio.me/qiime/tutorial_files/its-soils-tutorial.tgz)
!(wget ftp://ftp.microbio.me/qiime/tutorial_files/its_12_11_otus.tgz || curl -O ftp://ftp.microbio.me/qiime/tutorial_files/its_12_11_otus.tgz)
Explanation: Fungal ITS QIIME analysis tutorial
In this tutorial we illustrate steps for analyzing fungal ITS amplicon data using the QIIME/UNITE reference OTUs (alpha version 12_11) to compare the composition of 9 soil communities using open-reference OTU picking. More recent ITS reference databases based on UNITE are available on the QIIME resources page. The steps in this tutorial can be generalized to work with other marker genes, such as 18S.
We recommend working through the Illumina Overview Tutorial before working through this tutorial, as it provides more detailed annotation of the steps in a QIIME analysis. This tutorial is intended to highlight the differences that are necessary to work with a database other than QIIME's default reference database. For ITS, we won't build a phylogenetic tree and therefore use nonphylogenetic diversity metrics. Instructions are included for how to build a phylogenetic tree if you're sequencing a non-16S, phylogenetically-informative marker gene (e.g., 18S).
First, we obtain the tutorial data and reference database:
End of explanation
!tar -xzf its-soils-tutorial.tgz
!tar -xzf its_12_11_otus.tgz
!gunzip ./its_12_11_otus/rep_set/97_otus.fasta.gz
!gunzip ./its_12_11_otus/taxonomy/97_otu_taxonomy.txt.gz
Explanation: Now unzip these files.
End of explanation
from IPython.display import FileLink, FileLinks
FileLinks('its-soils-tutorial')
Explanation: You can then view the files in each of these directories by passing the directory name to the FileLinks function.
End of explanation
!cat its-soils-tutorial/params.txt
Explanation: The params.txt file modifies some of the default parameters of this analysis. You can review those by clicking the link or by catting the file.
End of explanation
!pick_open_reference_otus.py -i its-soils-tutorial/seqs.fna -r its_12_11_otus/rep_set/97_otus.fasta -o otus/ -p its-soils-tutorial/params.txt --suppress_align_and_tree
Explanation: The parameters that differentiate ITS analysis from analysis of other amplicons are the two assign_taxonomy parameters, which are pointing to the reference collection that we just downloaded.
We're now ready to run the pick_open_reference_otus.py workflow. Discussion of these methods can be found in Rideout et al. (2014).
Note that we pass -r to specify a non-default reference database. We're also passing --suppress_align_and_tree because we know that trees generated from ITS sequences are generally not phylogenetically informative.
End of explanation
FileLink('otus/index.html')
Explanation: Note: If you would like to build a phylogenetic tree (e.g., if you're using a phylogentically-informative marker gene such as 18S instead of ITS), you should remove the --suppress_align_and_tree parameter from the above command and add the following lines to the parameters file:
align_seqs:template_fp <path to reference alignment>
filter_alignment:suppress_lane_mask_filter True
filter_alignment:entropy_threshold 0.10
After that completes (it will take a few minutes) we'll have the OTU table with taxonomy. You can review all of the files that are created by passing the path to the index.html file in the output directory to the FileLink function.
End of explanation
!biom summarize-table -i otus/otu_table_mc2_w_tax.biom
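## (Hedged aside) The same per-sample sequence counts can be pulled out in Python with the
## biom-format API, assuming biom-format >= 2.1 is installed:
# from biom import load_table
# table = load_table('otus/otu_table_mc2_w_tax.biom')
# print(dict(zip(table.ids(axis='sample'), table.sum(axis='sample'))))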
Explanation: You can then pass the OTU table to biom summarize-table to view a summary of the information in the OTU table.
End of explanation
!core_diversity_analyses.py -i otus/otu_table_mc2_w_tax.biom -o cdout/ -m its-soils-tutorial/map.txt -e 353 --nonphylogenetic_diversity
Explanation: Next, we run several core diversity analyses, including alpha/beta diversity and taxonomic summarization. We will use an even sampling depth of 353 based on the results of biom summarize-table above. Since we did not build a phylogenetic tree, we'll pass the --nonphylogenetic_diversity flag, which specifies to compute Bray-Curtis distances instead of UniFrac distances, and to use only nonphylogenetic alpha diversity metrics.
End of explanation
FileLink('cdout/index.html')
Explanation: You may see a warning issued above; this is safe to ignore.
Note: If you built a phylogenetic tree, you should pass the path to that tree via -t and not pass --nonphylogenetic_diversity.
You can view the output of core_diversity_analyses.py using FileLink.
End of explanation
FileLinks("its-soils-tutorial/precomputed-output/")
Explanation: Precomputed results
In case you're having trouble running the steps above, for example because of a broken QIIME installation, all of the output generated above has been precomputed. You can access this by running the cell below.
End of explanation |
3,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Softmax exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
This exercise is analogous to the SVM exercise. You will
Step1: CIFAR-10 Data Loading and Preprocessing
Step2: Softmax Classifier
Step3: Inline Question 1 | Python Code:
# Run some setup code
import numpy as np
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# bool var. to let program show debug info.
debug = True
show_img = True
Explanation: Softmax exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
This exercise is analogous to the SVM exercise. You will:
implement a fully-vectorized loss function for the Softmax classifier
implement the fully-vectorized expression for its analytic gradient
check your implementation with numerical gradient
use a validation set to tune the learning rate and regularization strength
optimize the loss function with SGD
visualize the final learned weights
End of explanation
import cifar10
# Load the raw CIFAR-10 data
X, y, X_test, y_test = cifar10.load('../cifar-10-batches-py', debug = debug)
m = 49000
m_val = 1000
m_test = 1000
m_dev = 500
X, y, X_test, y_test, X_dev, y_dev, X_val, y_val = cifar10.split_vec(X, y, X_test, y_test, m, m_test, m_val, m_dev, debug = debug, show_img = show_img)
Explanation: CIFAR-10 Data Loading and Preprocessing
End of explanation
n = X_dev.shape[1]
K = 10
from softmax import Softmax
model = Softmax(n, K)
lamda = 0.0
model.train_check(X, y, lamda)
lamda = 3.3
model.train_check(X, y, lamda)
Explanation: Softmax Classifier
End of explanation
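The Softmax class imported above comes from the course's own softmax module, whose implementation is not shown here. As a rough, hedged sketch of the kind of fully-vectorized loss and gradient such a class is expected to wrap (an illustration, not the course's reference code):
def softmax_loss_vectorized_sketch(W, X, y, reg):
    # W: (n, K) weights, X: (N, n) data, y: (N,) integer labels, reg: L2 strength
    N = X.shape[0]
    scores = X.dot(W)
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)
    dscores = probs
    dscores[np.arange(N), y] -= 1
    dW = X.T.dot(dscores) / N + 2 * reg * W       # gradient of the loss w.r.t. W
    return loss, dW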
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
results = {}
best_val = -1
best_model = None
best_hpara = None
T = 1500
B = 256
alpha = 1e-6
# alphalearning_rates = [1e-7, 5e-7]
# regularization_strengths = [5e4, 1e8]
for lamda in [1e0, 1e3, 1e1]:
model = Softmax(n, K)
hpara = (alpha, lamda, T, B)
model.train(X, y, hpara, show_img = False, debug = False)
y_hat = model.predict(X)
train_acc = np.mean(y == y_hat)
y_val_hat = model.predict(X_val)
val_acc = np.mean(y_val == y_val_hat)
results[(alpha, lamda)] = (train_acc, val_acc)
print 'alpha =', alpha, 'lamda =', lamda, 'train_acc =', train_acc, 'val_acc =', val_acc
if val_acc > best_val:
best_model = model
best_val = val_acc
best_hpara = hpara
print 'best val. acc.:', best_val, 'best hpara:', best_hpara
# evaluate on test set
# Evaluate the best softmax on test set
y_test_hat = best_model.predict(X_test)
print 'test acc.:', np.mean(y_test_hat == y_test)
# Visualize the learned weights for each class
best_model.visualize_W()
Explanation: Inline Question 1:
Why do we expect our loss to be close to -log(0.1)? Explain briefly.**
Your answer: With small random weights the scores are roughly equal across the K classes, so each correct-class probability is about 1/K and the loss is about -log(1/K) = log(K) = log(10) ≈ 2.3, i.e. -log(0.1).
End of explanation |
3,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading multiple files with xarray
From the documentation
Step1: Note that the default chunking will have each file in a separate chunk. You can't change this with the chunk option (i.e. the commented code above still chunks along the time axis (as well as the level axis)), so you have to rechunk later on (see below).
Step2: So there's 16.4 GB of data, according to the conversion that Stephan Hoyer does at this blog post.
Step3: You can re-chunk your data like so, where the number represents the size of each individual chunk. This might be useful when you want each chunk to contain the entire time axis.
Step4: Eager evaluation
From the documentation
Step5: Lazy evaluation
Step8: Notice that the computation used only 25 seconds of wall clock time, but 47 seconds of CPU time. It's definitely using 2 cores.
Applying a function in a lazy manner
The simple numpy functions are available through xarray (see here for the notes on import xarray.ufuncs as xu), so doing something like a mean or standard deviation is trivial. For more complex functions, you need to use the map_blocks() method associted with dask arrays. Below I'll try this for the task of fitting a cubic polynomial
Step9: A simple example
To try and figure out if save_mfdataset() accepts dask arrays...
Step10: How do you write a dask array to a netCDF file??? Perhaps manually convert each chuck to a numpy array / xarray dataArray?
Reading multiple files with iris | Python Code:
import xarray
ds = xarray.open_mfdataset(allfiles) #chunks={'lev': 1, 'time': 1956})
Explanation: Reading multiple files with xarray
From the documentation: xarray uses Dask, which divides arrays into many small pieces, called chunks, each of which is presumed to be small enough to fit into memory.
Unlike NumPy, which has eager evaluation, operations on dask arrays are lazy. Operations queue up a series of tasks mapped over blocks, and no computation is performed until you actually ask values to be computed (e.g., to print results to your screen or write to disk).
End of explanation
print ds
ds.nbytes * (2 ** -30)
Explanation: Note that the default chunking will have each file in a separate chunk. You can't change this with the chunk option (i.e. the commented code above still chunks along the time axis (as well as the level axis)), so you have to rechunk later on (see below).
End of explanation
ds.chunks
Explanation: So there's 16.4 GB of data, according to the conversion that Stephan Hoyer does at this blog post.
End of explanation
rechunked = ds.chunk({'time': 1956, 'lev': 1})
rechunked.chunks
Explanation: You can re-chunk your data like so, where the number represents the size of each individual chunk. This might be useful when you want each chunk to contain the entire time axis.
End of explanation
import numpy
darray = ds['thetao']
print darray
climatology_eager = darray.values.mean(axis=0)
Explanation: Eager evaluation
From the documentation: You can convert an xarray data structure from lazy dask arrays into eager, in-memory numpy arrays using the load() method (i.e. ds.load()), or make it a numpy array using the values method of numpy.asarray().
End of explanation
climatology_lazy = ds.mean('time')
%%time
climatology_lazy.to_netcdf("/g/data/r87/dbi599/lazy.nc")
Explanation: Lazy evaluation
End of explanation
% matplotlib inline
import matplotlib.pyplot as plt
x = [2.53240, 1.91110, 1.18430, 0.95784, 0.33158,
-0.19506, -0.82144, -1.64770, -1.87450, -2.2010]
y = [-2.50400, -1.62600, -1.17600, -0.87400, -0.64900,
-0.477000, -0.33400, -0.20600, -0.10100, -0.00600]
coefficients = numpy.polyfit(x, y, 3)
polynomial = numpy.poly1d(coefficients)
xs = numpy.arange(-2.2, 2.6, 0.1)
ys = polynomial(xs)
plt.plot(x, y, 'o')
plt.plot(xs, ys)
plt.ylabel('y')
plt.xlabel('x')
plt.show()
def cubic_fit(data_series):
    """Fit a cubic polynomial to a 1D numpy array."""
x = numpy.arange(0, len(data_series))
coefficients = numpy.polyfit(x, data_series, 3)
polynomial = numpy.poly1d(coefficients)
return polynomial(x)
def cubic_fit_ds(dataset):
    """Fit a cubic polynomial to an xarray dataset."""
return numpy.apply_along_axis(cubic_fit, 0, dataset)
import dask.array as da
#dask_array = da.from_array(rechunked['thetao'], chunks=(1956, 1, 189, 192))
dask_array = rechunked['thetao'].data
print dask_array
cubic_data = dask_array.map_blocks(cubic_fit_ds)  ## block-wise apply works here because each chunk spans the full time axis after the re-chunking above
cubic_data.chunks
cubic_data.shape
new_ds = xarray.Dataset({'thetao': (('time', 'lev', 'lat', 'lon',), cubic_data)})
file_nums = range(0,31)
paths = ['/g/data/r87/dbi599/dask_%s.nc' %f for f in file_nums]
## NOTE: save_mfdataset expects a *list* of datasets (one per path), so passing a single
## Dataset together with 31 paths as below will raise an error.
xarray.save_mfdataset(new_ds, paths)
Explanation: Notice that the computation used only 25 seconds of wall clock time, but 47 seconds of CPU time. It's definitely using 2 cores.
Applying a function in a lazy manner
The simple numpy functions are available through xarray (see here for the notes on import xarray.ufuncs as xu), so doing something like a mean or standard deviation is trivial. For more complex functions, you need to use the map_blocks() method associated with dask arrays. Below I'll try this for the task of fitting a cubic polynomial:
End of explanation
ds_small = xarray.open_mfdataset(infile2)
print ds_small
type(ds_small['thetao'].data)
cubic_data_small = ds_small['thetao'].data.map_blocks(cubic_fit_ds)
file_nums = range(0,1)
paths = ['/g/data/r87/dbi599/dask_%s.nc' %f for f in file_nums]
xarray.save_mfdataset([cubic_data_small], paths)
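## (Hedged sketch) save_mfdataset wants Dataset objects, so one approach that should work is to
## wrap the dask array back into an xarray.Dataset and let to_netcdf handle the lazy write;
## the dimension names and output path below are illustrative only.
# fitted_small = xarray.Dataset({'thetao': (('time', 'lev', 'lat', 'lon'), cubic_data_small)})
# fitted_small.to_netcdf('/g/data/r87/dbi599/dask_fit_small.nc')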
Explanation: A simple example
To try and figure out if save_mfdataset() accepts dask arrays...
End of explanation
import iris
cube = iris.load_cube([infile1, infile2])
history = []
def edit_attributes(cube, field, filename):
cube.attributes.pop('creation_date', None)
cube.attributes.pop('tracking_id', None)
history.append(cube.attributes['history'])
cube.attributes.pop('history', None)
cubes = iris.load([infile1, infile2], 'sea_water_potential_temperature', callback=edit_attributes)
print cubes
#iris.util.unify_time_units(cubes)
cubes = cubes.concatenate_cube()
print cubes
print len(history)
print history
for i, x_slice in enumerate(cubes.slices(['time', 'latitude', 'longitude'])):
print(i, x_slice.shape)
coord_names = [coord.name() for coord in cubes.coords()]
print coord_names
cubes.aux_coords
str(cubes.coord('time').units)
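## (Hedged aside) For comparison with the xarray climatology above, iris can collapse the time
## dimension directly; a sketch only, not run here:
# climatology_iris = cubes.collapsed('time', iris.analysis.MEAN)
# print climatology_iris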
Explanation: How do you write a dask array to a netCDF file??? Perhaps manually convert each chunk to a numpy array / xarray DataArray?
Reading multiple files with iris
End of explanation |
3,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario simulations (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
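As an illustration of how such an ENUM property might be filled in, a hypothetical entry for a model driven by prescribed CO2 concentrations could look like the sketch below; the choice "Y" is an assumption taken from the valid choices listed above, not a recommendation.
# Illustrative example only - hypothetical choice from the valid ENUM values
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
DOC.set_value("Y")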
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
3,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment 2
Step1: Let's look at what a random episode looks like.
Step2: In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right.
We extract the relevant information from the gym Env into the MDP class below.
The env object won't be used any further, we'll just use the mdp object.
Step4: Part 1 | Python Code:
from frozen_lake import FrozenLakeEnv
env = FrozenLakeEnv()
print(env.__doc__)
Explanation: Assignment 2: Markov Decision Processes
Homework Instructions
All your answers should be written in this notebook. You shouldn't need to write or modify any other files.
Look for four instances of "YOUR CODE HERE"--those are the only parts of the code you need to write. To grade your homework, we will check whether the printouts immediately following your code match up with the results we got. The portions used for grading are highlighted in yellow. (However, note that the yellow highlighting does not show up when github renders this file.)
To submit your homework, send an email to berkeleydeeprlcourse@gmail.com with the subject line "Deep RL Assignment 2" and two attachments:
1. This ipynb file
2. A pdf version of this file (To make the pdf, do File - Print Preview)
The homework is due Febrary 22nd, 11:59 pm.
Introduction
This assignment will review the two classic methods for solving Markov Decision Processes (MDPs) with finite state and action spaces.
We will implement value iteration (VI) and policy iteration (PI) for a finite MDP, both of which find the optimal policy in a finite number of iterations.
The experiments here will use the Frozen Lake environment, a simple gridworld MDP that is taken from gym and slightly modified for this assignment. In this MDP, the agent must navigate from the start state to the goal state on a 4x4 grid, with stochastic transitions.
End of explanation
# Some basic imports and setup
import numpy as np, numpy.random as nr, gym
np.set_printoptions(precision=3)
def begin_grading(): print("\x1b[43m")
def end_grading(): print("\x1b[0m")
# Seed RNGs so you get the same printouts as me
env.seed(0); from gym.spaces import prng; prng.seed(10)
# Generate the episode
env.reset()
for t in range(100):
env.render()
a = env.action_space.sample()
ob, rew, done, _ = env.step(a)
if done:
break
assert done
env.render();
Explanation: Let's look at what a random episode looks like.
End of explanation
class MDP(object):
def __init__(self, P, nS, nA, desc=None):
self.P = P # state transition and reward probabilities, explained below
self.nS = nS # number of states
self.nA = nA # number of actions
self.desc = desc # 2D array specifying what each grid cell means (used for plotting)
mdp = MDP( {s : {a : [tup[:3] for tup in tups] for (a, tups) in a2d.items()} for (s, a2d) in env.P.items()}, env.nS, env.nA, env.desc)
print("mdp.P is a two-level dict where the first key is the state and the second key is the action.")
print("The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in")
print(np.arange(16).reshape(4,4))
print("mdp.P[state][action] is a list of tuples (probability, nextstate, reward).\n")
print("For example, state 0 is the initial state, and the transition information for s=0, a=0 is \nP[0][0] =", mdp.P[0][0], "\n")
print("As another example, state 5 corresponds to a hole in the ice, which transitions to itself with probability 1 and reward 0.")
print("P[5][0] =", mdp.P[5][0], '\n')
Explanation: In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right.
We extract the relevant information from the gym Env into the MDP class below.
The env object won't be used any further, we'll just use the mdp object.
End of explanation
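As a small orientation aid (not part of the assignment itself), the nested structure of mdp.P makes one-step backups straightforward to write; the hypothetical helper below shows how $\sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V(s')]$ maps onto that structure.
def action_backup(mdp, V, s, a, gamma):
    # Expected one-step return of taking action a in state s, given a value estimate V
    return sum(p * (r + gamma * V[s2]) for (p, s2, r) in mdp.P[s][a])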
import random
import datetime
def sarsa_lambda(env, gamma, delta, rate, epsilon, nIt, render):
Sarsa(lambda) algorithm
Args:
env: environment
gamma: decay of reward
delta: the lambda parameter (eligibility-trace decay) for the Sarsa(lambda) algorithm
rate: learning rate
epsilon: exploration probability of the epsilon-greedy behaviour policy
nIt: number of episodes to run
render: boolean which determines whether to render the environment state
random.seed(datetime.datetime.now().timestamp())
q = np.array([0] * env.nS * env.nA, dtype = float).reshape(env.nS, env.nA)
for i in range(nIt):
trace = np.zeros_like(q)
obs_prev = None
act_prev = None
obs = None
done = False
totalr = 0.
# Need to reorganize the code a little bit as Sarsa(lambda) needs an extra action sampling
while not done:
if render:
env.render()
if obs is None:
obs = env.reset()
else:
assert act is not None
obs, r, done, _ = env.step(act)
totalr += r
p = np.random.uniform(0., 1.)
if p > epsilon:
act = np.argmax(q[obs])
else:
act = np.random.randint(env.nA)
# Sarsa(lambda) update (here delta plays the role of the trace-decay parameter lambda)
# R, S' and the freshly sampled A' are now available, so the previous (S, A) pair can be updated
if obs_prev is not None:
trace *= delta * gamma
trace[obs_prev][act_prev] += 1
q += rate * trace * (r + gamma * q[obs][act] - q[obs_prev][act_prev])
obs_prev = obs
act_prev = act
if render:
env.render()
return q
gamma = 0.9 # decay of reward
delta = 0.5 # decay of eligibility trace
rate = 0.1 # the learning rate, or alpha in the book
nIt = 1000
epsilon = 0.5 # epsilon greedy
q = sarsa_lambda(env, gamma, delta, rate, epsilon, nIt, False)
print("Q function:\n")
print(q)
print()
print("Greedy algorithm:")
import matplotlib.pyplot as plt
%matplotlib inline
def policy_matrix(q):
indices = np.argmax(q, axis = 1)
indices[np.max(q, axis = 1) == 0] = 4
to_direction = np.vectorize(lambda x: ['L', 'D', 'R', 'U', ''][x])
return to_direction(indices.reshape(4, 4))
plt.figure(figsize=(3,3))
# imshow makes top left the origin
plt.imshow(np.array([0] * 16).reshape(4,4), cmap='gray', interpolation='none', clim=(0,1))
ax = plt.gca()
ax.set_xticks(np.arange(4)-.5)
ax.set_yticks(np.arange(4)-.5)
directions = policy_matrix(q)
for y in range(4):
for x in range(4):
plt.text(x, y, str(env.desc[y,x].item().decode()) + ',' + directions[y, x],
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
plt.grid(color='b', lw=2, ls='-')
Explanation: Part 1: Value Iteration
Problem 1: implement value iteration
In this problem, you'll implement value iteration, which has the following pseudocode:
Initialize $V^{(0)}(s)=0$, for all $s$
For $i=0, 1, 2, \dots$
- $V^{(i+1)}(s) = \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$, for all $s$
We additionally define the sequence of greedy policies $\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}$, where
$$\pi^{(i)}(s) = \arg \max_a \sum_{s'} P(s,a,s') [ R(s,a,s') + \gamma V^{(i)}(s')]$$
Your code will return two lists: $[V^{(0)}, V^{(1)}, \dots, V^{(n)}]$ and $[\pi^{(0)}, \pi^{(1)}, \dots, \pi^{(n-1)}]$
To ensure that you get the same policies as the reference solution, choose the lower-index action to break ties in $\arg \max_a$. This is done automatically by np.argmax. This will only affect the "# chg actions" printout below--it won't affect the values computed.
<div class="alert alert-warning">
Warning: make a copy of your value function each iteration and use that copy for the update--don't update your value function in place.
Updating in-place is also a valid algorithm, sometimes called Gauss-Seidel value iteration or asynchronous value iteration, but it will cause you to get different results than me.
</div>
End of explanation |
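Note that the code above implements Sarsa(lambda) rather than the value iteration described in this problem statement. Purely as an illustration (and not as the graded reference solution), a synchronous value-iteration sketch over the mdp object, following the pseudocode above, could look like this:
import numpy as np

def value_iteration(mdp, gamma, nIt):
    Vs = [np.zeros(mdp.nS)]  # V^(0) = 0 for all states
    pis = []
    for it in range(nIt):
        V = Vs[-1]
        # One-step backups Q[s, a] = sum_s' P(s,a,s') * (R(s,a,s') + gamma * V[s'])
        Q = np.zeros((mdp.nS, mdp.nA))
        for s in range(mdp.nS):
            for a in range(mdp.nA):
                Q[s, a] = sum(p * (r + gamma * V[s2]) for (p, s2, r) in mdp.P[s][a])
        Vs.append(Q.max(axis=1))      # V^(i+1), computed from a copy of V^(i), not in place
        pis.append(Q.argmax(axis=1))  # greedy policy; argmax breaks ties toward the lower index
    return Vs, pis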
3,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
Step1: 2D SH finite difference modelling - the quarter plane problem
Before modelling the propagation of Love-waves, let's take a look how accurate reflected body waves from the free-surface boundary can be modelled using the quarter plane problem. This is an important aspect, because one way to understand Love waves is by the interference of SH-body waves in a layered medium.
Analytical solution of 2D SH quarter plane problem
We seek an analytical solution to the 2D isotropic elastic SH problem
Step2: To build the FD code, we first define the velocity update $v_y$ ...
Step3: ... update the shear stress components $\sigma_{yx}$ and $\sigma_{yz}$ ...
Step4: ... and harmonically averaging the shear modulus ...
Step5: Finally, we define a function to calculate the Green's function at arbritary source positions or image points ...
Step6: ... and assemble the 2D SH FD code ...
Step7: First, we compare the analytical solution with the FD modelling result using the spatial 2nd order operator and a wavefield sampling of $N_\lambda = 12$ gridpoints per minimum wavelength
Step8: While, the direct SH wave is modelled very accurately, the fit of the boundary reflection waveforms can be improved. Let's try more grid points per minimum wavelength ($N_\lambda = 16$)... | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
# Definition of modelling parameters
# ----------------------------------
xmax = 500.0 # maximum spatial extension of the 1D model in x-direction (m)
zmax = xmax # maximum spatial extension of the 1D model in z-direction(m)
tmax = 1.12 # maximum recording time of the seismogram (s)
vs0 = 580. # S-wave speed in medium (m/s)
rho0 = 1000. # Density in medium (kg/m^3)
# acquisition geometry
xr = 125.0 # x-receiver position (m)
zr = xr # z-receiver position (m)
xsrc = 250.0 # x-source position (m)
zsrc = xsrc # z-source position (m)
f0 = 40. # dominant frequency of the source (Hz)
t0 = 4. / f0 # source time shift (s)
Explanation: 2D SH finite difference modelling - the quarter plane problem
Before modelling the propagation of Love waves, let's take a look at how accurately reflected body waves from the free-surface boundary can be modelled using the quarter-plane problem. This is an important aspect, because one way to understand Love waves is as the interference of SH body waves in a layered medium.
Analytical solution of 2D SH quarter plane problem
We seek an analytical solution to the 2D isotropic elastic SH problem:
\begin{align}
\rho \frac{\partial v_y}{\partial t} &= \frac{\partial \sigma_{yx}}{\partial x} + \frac{\partial \sigma_{yz}}{\partial z} + f_y, \notag\
\frac{\partial \sigma_{yx}}{\partial t} &= \mu \frac{\partial v_{y}}{\partial x},\notag\
\frac{\partial \sigma_{yz}}{\partial t} &= \mu \frac{\partial v_{y}}{\partial z}. \notag\
\end{align}
for wave propagation in a homogeneous medium, by setting density $\rho$ and shear modulus $\mu$ to constant values $\rho_0,\; \mu_0$
\begin{align}
\rho(i,j) &= \rho_0 \notag \
\mu(i,j) &= \mu_0 = \rho_0 V_{s0}^2\notag
\end{align}
at each spatial grid point $i = 0, 1, 2, ..., nx$; $j = 0, 1, 2, ..., nz$, in order to compare the numerical with the analytical solution. For a complete description of the problem we also have to define initial and boundary conditions. The initial condition is
\begin{equation}
v_y(i,j,0) = \sigma_{yx}(i+1/2,j,0) = \sigma_{yz}(i,j+1/2,0) = 0, \nonumber
\end{equation}
so the modelling starts with zero particle velocity and shear stress amplitudes at each spatial grid point. As boundary conditions, we assume
\begin{align}
v_y(0,j,n) &= \sigma_{yx}(1/2,j,n) = \sigma_{yz}(0,j+1/2,n) = 0, \nonumber\
v_y(nx,j,n) &= \sigma_{yx}(nx+1/2,j,n) = \sigma_{yz}(nx,j+1/2,n) = 0, \nonumber\
v_y(i,0,n) &= \sigma_{yx}(i+1/2,0,n) = \sigma_{yz}(i,1/2,n) = 0, \nonumber\
v_y(i,nz,n) &= \sigma_{yx}(i+1/2,nz,n) = \sigma_{yz}(i,nz+1/2,n) = 0, \nonumber\
\end{align}
for all time steps n. This Dirichlet boundary condition leads to artificial boundary reflections. In the previous notebook, we neglected these reflections. Here, we want to incorporate them into the analytical solution in order to test how accurate the boundary reflections actually are. We can describe the reflections by using image points
<img src="images/SH_quarter_problem.jpg" width="60%">
where we excite the Green's function solution for the homogeneous medium:
\begin{equation}
G_{2D}(x,z,t,xsrc,zsrc) = \dfrac{1}{2\pi \rho_0 V_{s0}^2}\dfrac{H\biggl((t-t_s)-\dfrac{|r|}{V_{s0}}\biggr)}{\sqrt{(t-t_s)^2-\dfrac{r^2}{V_{s0}^2}}}, \nonumber
\end{equation}
where $H$ denotes the Heaviside function, $r = \sqrt{(x-x_s)^2+(z-z_s)^2}$ the source-receiver distance (offset), at image point ...
1 (-xsrc, zsrc) to describe the reflection from the left free surface boundary
2 (-xsrc, -zsrc) to describe the reflection from the lower left corner
3 (xsrc, -zsrc) to describe the reflection from the bottom free surface boundary
Using the superposition principle of linear partial differential equations, we can describe the Green's function solution for the source and reflected wavefield by
\begin{align}
G_{2D}^{total}(x,z,t,xsrc,zsrc) &= G_{2D}^{source}(x,z,t,xsrc,zsrc) + G_{2D}^{image1}(x,z,t,-xsrc,zsrc)\nonumber\
&+ G_{2D}^{image2}(x,z,t,-xsrc,-zsrc) + G_{2D}^{image3}(x,z,t,xsrc,-zsrc)\nonumber\
\end{align}
For a given source wavelet S, we get the displacement wavefield:
\begin{equation}
u_{y,analy}^{total}(x,z,t) = G_{2D}^{total} * S \nonumber
\end{equation}
Keep in mind that the stress-velocity code computes the particle velocities $\mathbf{v_{y,analy}^{total}}$, while the analytical solution is expressed in terms of the displacement $\mathbf{u_{y,analy}^{total}}$. Therefore, we have to take the first derivative of the analytical solution, before comparing the numerical with the analytical solution:
\begin{equation}
v_{y,analy}^{total}(x,z,t) = \frac{\partial u_{y,analy}^{total}}{\partial t} \nonumber
\end{equation}
End of explanation
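A minimal, self-contained sketch of the image-point superposition described above is given below; it is not part of the original notebook (the full comparison is implemented in FD_2D_SH_JIT further down) and simply restates the analytical formulas, with the time axis measured relative to the source onset.
import numpy as np

def green_2d_sh(t, r, vs, rho):
    # 2D SH Green's function: H(t - r/vs) / (2*pi*rho*vs**2 * sqrt(t**2 - (r/vs)**2))
    g = np.zeros_like(t)
    mask = t > r / vs  # strict inequality avoids the division by zero at the wavefront
    g[mask] = 1.0 / (2.0 * np.pi * rho * vs**2 * np.sqrt(t[mask]**2 - (r / vs)**2))
    return g

def quarter_plane_green(t, x, z, xs, zs, vs, rho):
    # Superpose the physical source and its three image sources
    total = np.zeros_like(t)
    for (xi, zi) in [(xs, zs), (-xs, zs), (-xs, -zs), (xs, -zs)]:
        r = np.sqrt((x - xi)**2 + (z - zi)**2)
        total += green_2d_sh(t, r, vs, rho)
    return total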
# Particle velocity vy update
# ---------------------------
@jit(nopython=True) # use JIT for C-performance
def update_vel(vy, syx, syz, dx, dz, dt, nx, nz, rho, op):
# 2nd order FD operator
if (op==2):
for i in range(1, nx - 1):
for j in range(1, nz - 1):
# Calculate spatial derivatives (2nd order operator)
syx_x = (syx[i,j] - syx[i - 1,j]) / dx
syz_z = (syz[i,j] - syz[i,j - 1]) / dz
# Update particle velocities
vy[i,j] = vy[i,j] + (dt/rho[i,j]) * (syx_x + syz_z)
return vy
Explanation: To build the FD code, we first define the velocity update $v_y$ ...
End of explanation
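Written out, the loop in update_vel is a restatement of the following explicit update on the staggered grid (time levels are kept implicit here, matching the array indexing used in the code):
\begin{equation}
v_y(i,j) \leftarrow v_y(i,j) + \frac{\Delta t}{\rho(i,j)}\left[\frac{\sigma_{yx}(i,j)-\sigma_{yx}(i-1,j)}{\Delta x} + \frac{\sigma_{yz}(i,j)-\sigma_{yz}(i,j-1)}{\Delta z}\right] \nonumber
\end{equation}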
# Shear stress syx, syz updates
# -----------------------------
@jit(nopython=True) # use JIT for C-performance
def update_stress(vy, syx, syz, dx, dz, dt, nx, nz, mux, muz, op):
# 2nd order FD operator
if(op==2):
for i in range(1, nx - 1):
for j in range(1, nz - 1):
# Calculate spatial derivatives (2nd order operator)
vy_x = (vy[i + 1,j] - vy[i,j]) / dx
vy_z = (vy[i,j + 1] - vy[i,j]) / dz
# Update shear stresses
syx[i,j] = syx[i,j] + dt * mux[i,j] * vy_x
syz[i,j] = syz[i,j] + dt * muz[i,j] * vy_z
return syx, syz
Explanation: ... update the shear stress components $\sigma_{yx}$ and $\sigma_{yz}$ ...
End of explanation
# Harmonic averages of shear modulus
# ----------------------------------
@jit(nopython=True) # use JIT for C-performance
def shear_avg(mu, nx, nz, mux, muz):
for i in range(1, nx - 1):
for j in range(1, nz - 1):
# Calculate harmonic averages of shear moduli
mux[i,j] = 2 / (1 / mu[i + 1,j] + 1 / mu[i,j])
muz[i,j] = 2 / (1 / mu[i,j + 1] + 1 / mu[i,j])
return mux, muz
Explanation: ... and harmonically averaging the shear modulus ...
End of explanation
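Restating the code in shear_avg, the effective shear moduli at the staggered stress positions are harmonic averages of the two neighbouring grid-point values:
\begin{align}
\mu_x(i+1/2,j) &= \left[\tfrac{1}{2}\left(\frac{1}{\mu(i+1,j)}+\frac{1}{\mu(i,j)}\right)\right]^{-1}, \nonumber\\
\mu_z(i,j+1/2) &= \left[\tfrac{1}{2}\left(\frac{1}{\mu(i,j+1)}+\frac{1}{\mu(i,j)}\right)\right]^{-1}. \nonumber
\end{align}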
def Green_2D(ir, jr, xsrc, zsrc, x, z, vs0, rho0, time, G, nt):
# calculate source-receiver distance
#r = np.sqrt((x[ir] - x[isrc])**2 + (z[jr] - z[jsrc])**2)
r = np.sqrt((x[ir] - xsrc)**2 + (z[jr] - zsrc)**2)
for it in range(nt): # Calculate Green's function (Heaviside function)
if (time[it] - r / vs0) >= 0:
G[it] = 1. / (2 * np.pi * rho0 * vs0**2) * (1. / np.sqrt(time[it]**2 - (r/vs0)**2))
return G
Explanation: Finally, we define a function to calculate the Green's function at arbitrary source positions or image points ...
End of explanation
# 2D SH Wave Propagation (Finite Difference Solution)
# ---------------------------------------------------
def FD_2D_SH_JIT(dt,dx,dz,f0,xsrc,zsrc,op):
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nz = (int)(zmax/dz) # number of grid points in z-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
ir = (int)(xr/dx) # receiver location in grid in x-direction
jr = (int)(zr/dz) # receiver location in grid in z-direction
# half FD-operator length
noph = (int)(op/2)
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Analytical solution
# -------------------
G = time * 0.
G1 = time * 0.
G2 = time * 0.
G3 = time * 0.
vy_analy = time * 0.
# Initialize coordinates
# ----------------------
x = np.arange(nx)
x = x * dx # coordinates in x-direction (m)
z = np.arange(nz)
z = z * dz # coordinates in z-direction (m)
# calculate 2D Green's function for direct SH wave from source position
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in z-direction
# calculate source position from isrc and jsrc
xsrcd = isrc * dx
zsrcd = jsrc * dz
G = Green_2D(ir, jr, xsrcd, zsrcd, x, z, vs0, rho0, time, G, nt)
# calculate 2D Green's function for image points
# shift image points by half the FD operator size to compare with FD solution
G1 = Green_2D(ir, jr, -xsrcd + noph*dx, zsrcd, x, z, vs0, rho0, time, G1, nt)
G2 = Green_2D(ir, jr, xsrcd, -zsrcd+noph*dz, x, z, vs0, rho0, time, G2, nt)
G3 = Green_2D(ir, jr, -xsrcd+noph*dx, -zsrcd+noph*dz, x, z, vs0, rho0, time, G3, nt)
G = G + G1 + G2 + G3
Gc = np.convolve(G, src * dt)
Gc = Gc[0:nt]
# compute vy_analy from uy_analy
for i in range(1, nt - 1):
vy_analy[i] = (Gc[i+1] - Gc[i-1]) / (2.0 * dt)
# Initialize empty wavefield arrays (particle velocity and shear stresses)
# --------------------------------
vy = np.zeros((nx,nz)) # particle velocity vy
syx = np.zeros((nx,nz)) # shear stress syx
syz = np.zeros((nx,nz)) # shear stress syz
# Initialize model (assume homogeneous model)
# -------------------------------------------
vs = np.zeros((nx,nz))
vs = vs + vs0 # initialize wave velocity in model
rho = np.zeros((nx,nz))
rho = rho + rho0 # initialize density in model
# calculate shear modulus
mu = np.zeros((nx,nz))
mu = rho * vs ** 2
# harmonic average of shear moduli
# --------------------------------
mux = mu.copy() # initialize harmonic average mux (copy to avoid aliasing mu)
muz = mu.copy() # initialize harmonic average muz (copy to avoid aliasing mu)
mux, muz = shear_avg(mu, nx, nz, mux, muz)
# Initialize empty seismogram
# ---------------------------
seis = np.zeros(nt)
# Time looping
# ------------
for it in range(nt):
# Update particle velocity vy
# ---------------------------
vy = update_vel(vy, syx, syz, dx, dz, dt, nx, nz, rho, op)
# Add Source Term at (isrc,jsrc)
# ------------------------------
# Absolute particle velocity w.r.t analytical solution
vy[isrc,jsrc] = vy[isrc,jsrc] + (dt * src[it] / (rho[isrc,jsrc] * dx * dz))
# Update shear stress syx, syz
# ----------------------------
syx, syz = update_stress(vy, syx, syz, dx, dz, dt, nx, nz, mux, muz, op)
# Output of Seismogram
# -----------------
seis[it] = vy[ir,jr]
# Compare FD Seismogram with analytical solution
# ----------------------------------------------
# Define figure size
rcParams['figure.figsize'] = 12, 5
plt.plot(time, seis, 'b-',lw=3,label="FD solution") # plot FD seismogram
Analy_seis = plt.plot(time,vy_analy,'r--',lw=3,label="Analytical solution") # plot analytical solution
plt.xlim(time[0], time[-1])
plt.title('Seismogram')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
return vy
Explanation: ... and assemble the 2D SH FD code ...
End of explanation
Nlam = 12
dx = vs0 / (Nlam * 2 * f0)
dz = dx # grid point distance in z-direction (m)
op = 2 # order of spatial FD operator
# calculate dt according to CFL criterion
dt = dx / (np.sqrt(2) * vs0)
vy = FD_2D_SH_JIT(dt,dx,dz,f0,xsrc,zsrc,op)
Explanation: First, we compare the analytical solution with the FD modelling result using the spatial 2nd order operator and a wavefield sampling of $N_\lambda = 12$ gridpoints per minimum wavelength:
End of explanation
Nlam = 16
dx = vs0 / (Nlam * 2 * f0)
dz = dx # grid point distance in z-direction (m)
op = 2 # order of spatial FD operator
# calculate dt according to CFL criterion
dt = dx / (np.sqrt(2) * vs0)
vy = FD_2D_SH_JIT(dt,dx,dz,f0,xsrc,zsrc,op)
Explanation: While the direct SH wave is modelled very accurately, the fit of the boundary reflection waveforms can be improved. Let's try more grid points per minimum wavelength ($N_\lambda = 16$)...
End of explanation |
3,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output fuction using lambda to transform it's input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def single_text_to_ids(text, vocab_to_int, add_EOS):
id_text = []
for sentence in text.split('\n'):
id_sentence = []
for word in sentence.split():
id_sentence.append(vocab_to_int[word])
if add_EOS:
id_sentence.append(vocab_to_int['<EOS>'])
#print(sentence)
#print(id_sentence)
id_text.append(id_sentence)
#print(id_text)
return id_text
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
#print(source_text)
#print(target_text)
#print(source_vocab_to_int)
#print(target_vocab_to_int)
source_id_text = single_text_to_ids(source_text, source_vocab_to_int, False)
target_id_text = single_text_to_ids(target_text, target_vocab_to_int, True)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return input, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
enc_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
#with tf.variable_scope("decoding") as decoding_scope:
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Decoder RNNs
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
lstm = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
#with tf.variable_scope("decoding") as decoding_scope:
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
maximum_length = sequence_length - 1
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
#Apply embedding to the input data for the encoder.
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
#Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
#Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
#Apply embedding to the target data for the decoder.
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
train_logits, inference_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
if batch_i % 10 == 0:
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
lower_sentence = sentence.lower()
id_seq = []
for word in lower_sentence.split():
id_seq.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
return id_seq
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
3,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inferring species trees with tetrad
When you install ipyrad a number of analysis tools are installed as well. This includes the program tetrad, which applies the theory of phylogenetic invariants (see Lake 1987) to infer quartet trees based on a SNP alignment. It then uses the software wQMC to join the quartets into a species tree. This combined approach was first developed by Chifman and Kubatko (2015) in the software SVDQuartets.
Required software
Step1: Connect to a cluster
Step2: Run tetrad
Step3: Plot the tree | Python Code:
## conda install ipyrad -c ipyrad
## conda install toytree -c eaton-lab
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
Explanation: Inferring species trees with tetrad
When you install ipyrad a number of analysis tools are installed as well. This includes the program tetrad, which applies the theory of phylogenetic invariants (see Lake 1987) to infer quartet trees based on a SNP alignment. It then uses the software wQMC to join the quartets into a species tree. This combined approach was first developed by Chifman and Kubatko (2015) in the software SVDQuartets.
Required software
End of explanation
## connect to a cluster
ipyclient = ipp.Client()
print("connected to {} cores".format(len(ipyclient)))
Explanation: Connect to a cluster
End of explanation
## initiate a tetrad object
tet = ipa.tetrad(
name="pedic-full",
seqfile="analysis-ipyrad/pedic-full_outfiles/pedic-full.snps.phy",
mapfile="analysis-ipyrad/pedic-full_outfiles/pedic-full.snps.map",
nboots=100,
)
## run tetrad on the cluster
tet.run(ipyclient=ipyclient)
Explanation: Run tetrad
End of explanation
## plot the resulting unrooted tree
import toytree
tre = toytree.tree(tet.trees.nhx)
tre.draw(
width=350,
node_labels=tre.get_node_values("support"),
);
## save the tree as a pdf
import toyplot.pdf
toyplot.pdf.render(canvas, "analysis-tetrad/tetrad-tree.pdf")
Explanation: Plot the tree
End of explanation |
3,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the Development Version of BASS
This notebook is inteded for very advanced users, as there is almost no interactivity features. However, this notebook is all about speed. If you know exactly what you are doing, then this is the notebook for you.
BASS
Step1: WARNING All strings should be raw, especially if in Windows.
r'String!'
Step2: pyEEG required for approximate and sample entropy
Step3: Need help?
Try checking the docstring of a function you are struggling with.
moving_statistics?
help(moving_statistics) | Python Code:
from bass import *
Explanation: Welcome to the Development Version of BASS
This notebook is inteded for very advanced users, as there is almost no interactivity features. However, this notebook is all about speed. If you know exactly what you are doing, then this is the notebook for you.
BASS: Biomedical Analysis Software Suite for event detection and signal processing.
Copyright (C) 2015 Abigail Dobyns
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
End of explanation
#initialize new file
Data = {}
Settings = {}
Results ={}
############################################################################################
#manual Setting block
Settings['folder']= r"/Users/abigaildobyns/Desktop"
Settings['Label'] = r'rat34_ECG.txt'
Settings['Output Folder'] = r"/Users/abigaildobyns/Desktop/demo"
#transformation Settings
Settings['Absolute Value'] = True #Must be True if Savitzky-Golay is being used
Settings['Bandpass Highcut'] = r'none' #in Hz
Settings['Bandpass Lowcut'] = r'none' #in Hz
Settings['Bandpass Polynomial'] = r'none' #integer
Settings['Linear Fit'] = False #between 0 and 1 on the whole time series
Settings['Linear Fit-Rolling R'] = 0.75 #between 0 and 1
Settings['Linear Fit-Rolling Window'] = 1000 #window for rolling mean for fit, unit is index not time
Settings['Relative Baseline'] = 0 #default 0, unless data is normalized, then 1.0. Can be any float
Settings['Savitzky-Golay Polynomial'] = 4 #integer
Settings['Savitzky-Golay Window Size'] = 251 #must be odd. units are index not time
#Baseline Settings
Settings['Baseline Type'] = r'static' #'linear', 'rolling', or 'static'
#For Linear
Settings['Baseline Start'] = 0.0 #start time in seconds
Settings['Baseline Stop'] = 1.0 #end time in seconds
#For Rolling
Settings['Rolling Baseline Window'] = r'none' #leave as 'none' if linear or static
#Peaks
Settings['Delta'] = 0.25
Settings['Peak Minimum'] = -1 #amplitude value
Settings['Peak Maximum'] = 1 #amplitude value
#Bursts
Settings['Burst Area'] = False #calculate burst area
Settings['Exclude Edges'] = True #False to keep edges, True to discard them
Settings['Inter-event interval minimum (seconds)'] = 0.0100 #only for bursts, not for peaks
Settings['Maximum Burst Duration (s)'] = 10
Settings['Minimum Burst Duration (s)'] = 0
Settings['Minimum Peak Number'] = 1 #minimum number of peaks/burst, integer
Settings['Threshold']= 0.15 #linear: proportion of baseline.
#static: literal value.
#rolling, linear ammount grater than rolling baseline at each time point.
#Outputs
Settings['Generate Graphs'] = False #create and save the fancy graph outputs
#Settings that you should not change unless you are a super advanced user:
#These are settings that are still in development
Settings['Graph LCpro events'] = False
Settings['File Type'] = r'Plain' #'LCPro', 'ImageJ', 'SIMA', 'Plain', 'Morgan'
Settings['Milliseconds'] = False
############################################################################################
#Load in a Settings File
#initialize new file
Data = {}
Settings = {}
Results ={}
############################################################################################
#manual Setting block
Settings['folder']= "/Users/abigaildobyns/Desktop/Neuron Modeling/morgan voltage data"
Settings['Label'] = 'voltage-TBModel-sec1320-eL-IP0_9.txt'
Settings['Output Folder'] = "/Users/abigaildobyns/Desktop/Neuron Modeling/morgan voltage data/IP0_9"
Settings['File Type'] = 'Morgan' #'LCPro', 'ImageJ', 'SIMA', 'Plain', 'Morgan'
Settings['Milliseconds'] = True
#Load a Settings file
Settings['Settings File'] = '/Users/abigaildobyns/Desktop/Neuron Modeling/morgan voltage data/IP0_9/voltage-TBModel-sec1320-eL-IP0_9.txt_Settings.csv'
Settings = load_settings(Settings)
Data, Settings, Results = analyze(Data, Settings, Results)
#plot raw data
plot_rawdata(Data)
#grouped summary for peaks
Results['Peaks-Master'].groupby(level=0).describe()
#grouped summary for bursts
Results['Bursts-Master'].groupby(level=0).describe()
#Call one time series by Key
key = 'Mean1'
graph_ts(Data, Settings, Results, key)
#raw and transformed event plot
key = 'Mean1'
start =100 #start time in seconds
end= 101#end time in seconds
results_timeseries_plot(key, start, end, Data, Settings, Results)
#Frequency plot
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1' #'Mean1' default for single wave
frequency_plot(event_type, meas, key, Data, Settings, Results)
#Get average plots, display only
event_type = 'peaks'
meas = 'Peaks Amplitude'
average_measurement_plot(event_type, meas,Results)
#raster
raster(Results)
#Batch
event_type = 'Peaks'
meas = 'all'
Results = poincare_batch(event_type, meas, Data, Settings, Results)
pd.concat({'SD1':Results['Poincare SD1'],'SD2':Results['Poincare SD2']})
#quick poincare
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
poincare_plot(Results[event_type][key][meas])
#PSD of DES
Settings['PSD-Event'] = Series(index = ['Hz','ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Event']['hz'] = 4.0 #freqency that the interpolation and PSD are performed with.
Settings['PSD-Event']['ULF'] = 0.03 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Event']['VLF'] = 0.05 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Event']['LF'] = 0.15 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Event']['HF'] = 0.4 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve.
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
scale = 'raw'
Results = psd_event(event_type, meas, key, scale, Data, Settings, Results)
Results['PSD-Event'][key]
#PSD of raw signal
#optional
Settings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Signal']['ULF'] = 100 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Signal']['VLF'] = 200 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Signal']['LF'] = 300 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Signal']['HF'] = 400 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Signal']['dx'] = 10 #segmentation for the area under the curve.
scale = 'raw' #raw or db
Results = psd_signal(version = 'original', key = 'Mean1', scale = scale,
Data = Data, Settings = Settings, Results = Results)
Results['PSD-Signal']
#spectrogram
version = 'original'
key = 'Mean1'
spectogram(version, key, Data, Settings, Results)
#Moving Stats
event_type = 'Peaks'
meas = 'all'
window = 60 #seconds
Results = moving_statistics(event_type, meas, window, Data, Settings, Results)
#Histogram Entropy-events
event_type = 'Bursts'
meas = 'all'
Results = histent_wrapper(event_type, meas, Data, Settings, Results)
Results['Histogram Entropy']
Explanation: WARNING All strings should be raw, especially on Windows.
r'String!'
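A quick illustration of why raw strings matter for Windows-style paths (this example is added here for clarity and is not part of the original notebook; the path is made up):
print('C:\new_data\file.txt')    # '\n' becomes a newline and the path is mangled
print(r'C:\new_data\file.txt')   # the raw string keeps every backslash literally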
End of explanation
#Approximate Entropy-events
event_type = 'Peaks'
meas = 'all'
Results = ap_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Approximate Entropy']
#Sample Entropy-events
event_type = 'Peaks'
meas = 'all'
Results = samp_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Sample Entropy']
#Approximate Entropy on raw signal
#takes a VERY long time
from pyeeg import ap_entropy
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
ap_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
#Sample Entropy on raw signal
#takes a VERY long time
from pyeeg import samp_entropy
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
samp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
Explanation: pyEEG required for approximate and sample entropy
End of explanation
moving_statistics?
import pyeeg
pyeeg.samp_entropy?
Explanation: Need help?
Try checking the docstring of a function you are struggling with.
moving_statistics?
help(moving_statistics)
End of explanation |
3,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Support Vector Machines
Step1: Kernel SVMs
Predictions in a kernel-SVM are made using the formular
$$
\hat{y} = \alpha_0 + \alpha_1 y_1 k(\mathbf{x^{(1)}}, \mathbf{x}) + ... + \alpha_n y_n k(\mathbf{x^{(n)}}, \mathbf{x})> 0
$$
$$
0 \leq \alpha_i \leq C
$$
Radial basis function (Gaussian) kernel | Python Code:
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data / 16., digits.target % 2, random_state=2)
from sklearn.svm import LinearSVC, SVC
linear_svc = LinearSVC(loss="hinge").fit(X_train, y_train)
svc = SVC(kernel="linear").fit(X_train, y_train)
np.mean(linear_svc.predict(X_test) == svc.predict(X_test))
Explanation: Support Vector Machines
End of explanation
from sklearn.metrics.pairwise import rbf_kernel
line = np.linspace(-3, 3, 100)[:, np.newaxis]
kernel_value = rbf_kernel([[0]], line, gamma=1)
plt.plot(line, kernel_value.T)
from figures import plot_svm_interactive
plot_svm_interactive()
svc = SVC().fit(X_train, y_train)
svc.score(X_test, y_test)
Cs = [0.001, 0.01, 0.1, 1, 10, 100]
gammas = [0.001, 0.01, 0.1, 1, 10, 100]
from sklearn.grid_search import GridSearchCV
param_grid = {'C': Cs, 'gamma' : gammas}
grid_search = GridSearchCV(SVC(), param_grid, cv=5)
grid_search.fit(X_train, y_train)
grid_search.score(X_test, y_test)
# We extract just the scores
scores = [x[1] for x in grid_search.grid_scores_]
scores = np.array(scores).reshape(6, 6)
plt.matshow(scores)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(6), param_grid['gamma'])
plt.yticks(np.arange(6), param_grid['C']);
Explanation: Kernel SVMs
Predictions in a kernel-SVM are made using the formula
$$
\hat{y} = \alpha_0 + \alpha_1 y_1 k(\mathbf{x^{(1)}}, \mathbf{x}) + ... + \alpha_n y_n k(\mathbf{x^{(n)}}, \mathbf{x})> 0
$$
$$
0 \leq \alpha_i \leq C
$$
Radial basis function (Gaussian) kernel:
$$k(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma ||\mathbf{x} - \mathbf{x'}||^2)$$
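As a sanity check of the formula above, the decision function of a fitted SVC can be reproduced by hand from its support vectors and dual coefficients (a minimal sketch added here, not part of the original notebook; the synthetic data and the gamma value are illustrative choices):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel
X_check, y_check = make_classification(n_samples=100, n_features=4, random_state=0)
gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_check, y_check)
# dual_coef_ stores alpha_i * y_i for each support vector, so the dot product below
# is exactly alpha_1 y_1 k(x_1, x) + ... + alpha_n y_n k(x_n, x)
manual = rbf_kernel(X_check, clf.support_vectors_, gamma=gamma).dot(clf.dual_coef_.ravel()) + clf.intercept_
print(np.allclose(manual, clf.decision_function(X_check)))  # True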
End of explanation |
3,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/csdms_logo.jpg">
Flexural Subsidence
Link to this notebook
Step1: Import the Subside class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
Step2: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model are get the names of the input variables.
Step3: We can also get information about specific variables. Here we'll look at some info about lithospheric deflections. This is the main input of the Subside model. Notice that BMI components always use CSDMS standard names. With that name we can get information about that variable and the grid that it is on.
OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll use the component's setup method to create a default input file for Subside.
Step4: Before running the model, let's set an input parameter - the overlying load.
Step5: The main output variable for this model is deflection. In this case, the CSDMS Standard Name is
Step6: With the grid_id, we can now get information about the grid. For instance, the number of dimension and the type of grid (structured, unstructured, etc.). This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars.
Step7: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include | Python Code:
from __future__ import print_function
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: <img src="images/csdms_logo.jpg">
Flexural Subsidence
Link to this notebook: https://github.com/csdms/pymt/blob/master/docs/demos/subside.ipynb
Install command: $ conda install notebook pymt_sedflux
This example explores how to use a BMI implementation using sedflux's subsidence model as an example.
Links
sedflux source code: Look at the files that have deltas in their name.
sedflux description on CSDMS: Detailed information on the CEM model.
Interacting with the Subside BMI using Python
Some magic that allows us to view images within the notebook.
End of explanation
from pymt import plugins
subside = plugins.Subside()
Explanation: Import the Subside class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
End of explanation
subside.output_var_names
subside.input_var_names
Explanation: Even though we can't run our Subside model yet, we can still get some information about it. Just don't try to run it. Some things we can do with our model include getting the names of the input variables.
End of explanation
config_file, config_folder = subside.setup()
subside.initialize(config_file, dir=config_folder)
subside.var["earth_material_load__pressure"]
subside.grid[0].node_shape
Explanation: We can also get information about specific variables. Here we'll look at some info about lithospheric deflections. This is the main input of the Subside model. Notice that BMI components always use CSDMS standard names. With that name we can get information about that variable and the grid that it is on.
OK. We're finally ready to run the model. Well not quite. First we initialize the model with the BMI initialize method. Normally we would pass it a string that represents the name of an input file. For this example we'll use the component's setup method to create a default input file for Subside.
End of explanation
import numpy as np
load = np.zeros((500, 500))
load[250, 250] = 1e3
Explanation: Before running the model, let's set an input parameter - the overlying load.
End of explanation
subside.var['lithosphere__increment_of_elevation']
Explanation: The main output variable for this model is deflection. In this case, the CSDMS Standard Name is:
"lithosphere__increment_of_elevation"
First we find out which of Subside's grids contains deflection.
End of explanation
subside.grid[0]
Explanation: With the grid_id, we can now get information about the grid. For instance, the number of dimensions and the type of grid (structured, unstructured, etc.). This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars.
End of explanation
subside.set_value("earth_material_load__pressure", load)
subside.update()
dz = subside.get_value('lithosphere__increment_of_elevation')
plt.imshow(dz.reshape((500, 500)))
load[125, 125] = 2e3
subside.set_value("earth_material_load__pressure", load)
subside.update()
dz = subside.get_value('lithosphere__increment_of_elevation')
plt.imshow(dz.reshape((500, 500)))
plt.colorbar()
Explanation: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include:
* get_grid_shape
* get_grid_spacing
* get_grid_origin
Set the overlying load and get the resulting deflection values from Subside.
End of explanation |
3,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../../images/qiskit-heading.gif" alt="Note
Step1: Theoretical background
In addition to the GHZ states, the generalized W states, as proposed by Dür, Vidal and Cirac in 2000, are a class of interesting examples of multiple qubit entanglement.
A generalized $n$ qubit W state can be written as
Step2: Three-qubit W state, step 1
In this section, the production of a three qubit W state will be examined step by step.
In this circuit, the starting state is now
Step3: Three-qubit W state
Step4: Three-qubit W state, full circuit
In the previous step, we got an histogram compatible with the state
Step5: Now you get an histogram compatible with the final state $|W_{3}\rangle$ through the following steps
Step6: Now, if you used a simulator, you get an histogram clearly compatible with the state | Python Code:
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import time
from pprint import pprint
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit.backends.ibmq import least_busy
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
IBMQ.load_accounts()
Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
W state in multi-qubit systems
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
For more information about how to use the IBM Q experience (QX), consult the tutorials, or check out the community.
Contributors
Pierre Decoodt, Université Libre de Bruxelles
End of explanation
"Choice of the backend"
# using local qasm simulator
backend = Aer.get_backend('qasm_simulator')
# using IBMQ qasm simulator
# backend = IBMQ.get_backend('ibmq_qasm_simulator')
# using real device
# backend = least_busy(IBMQ.backends(simulator=False))
flag_qx2 = True
if backend.name() == 'ibmqx4':
flag_qx2 = False
print("Your choice for the backend is: ", backend, "flag_qx2 is: ", flag_qx2)
# Here, two useful routine
# Define a F_gate
def F_gate(circ,q,i,j,n,k) :
theta = np.arccos(np.sqrt(1/(n-k+1)))
circ.ry(-theta,q[j])
circ.cz(q[i],q[j])
circ.ry(theta,q[j])
circ.barrier(q[i])
# Define the cxrv gate which uses reverse CNOT instead of CNOT
def cxrv(circ,q,i,j) :
circ.h(q[i])
circ.h(q[j])
circ.cx(q[j],q[i])
circ.h(q[i])
circ.h(q[j])
circ.barrier(q[i],q[j])
Explanation: Theoretical background
In addition to the GHZ states, the generalized W states, as proposed by Dür, Vidal and Cirac in 2000, are a class of interesting examples of multiple qubit entanglement.
A generalized $n$ qubit W state can be written as :
$$ |W_{n}\rangle \; = \; \sqrt{\frac{1}{n}} \: (\:|10...0\rangle \: + |01...0\rangle \: +...+ |00...1\rangle \:) $$
Here are presented circuits allowing to deterministically produce respectively a three, a four and a five qubit W state.
A 2016 paper by Firat Diker proposes an algorithm in the form of nested boxes allowing the deterministic construction of W states of any size $n$. The experimental setup proposed by the author is essentially an optical assembly including half-wave plates. The setup includes $n-1$ so-called two-qubit $F$ gates (not to be confused with Fredkin's three-qubit gate).
It is possible to construct the equivalent of such a $F$ gate on a superconducting quantum computing system using transmon qubits in ground and excited states. A $F_{k,\, k+1}$ gate with control qubit $q_{k}$ and target qubit $q_{k+1}$ is obtained here by:
First a rotation round Y-axis $R_{y}(-\theta_{k})$ applied on $q_{k+1}$
Then a controlled Z-gate $cZ$ in any direction between the two qubits $q_{k}$ and $q_{k+1}$
Finally a rotation round Y-axis $R_{y}(\theta_{k})$ applied on $q_{k+1}$
The matrix representations of a $R_{y}(\theta)$ rotation and of the $cZ$ gate can be found in the "Quantum gates and linear algebra" Jupyter notebook of the Qiskit tutorial.
The value of $\theta_{k}$ depends on $n$ and $k$ following the relationship:
$$\theta_{k} = \arccos \left(\sqrt{\frac{1}{n-k+1}}\right) $$
Note that this formula for $\theta$ is different from the one mentioned in Diker's paper. This is because we use Y-axis rotation matrices here instead of $W$ optical gates composed of half-wave plates.
At the beginning, the qubits are placed in the state: $|\varphi_{0} \rangle \, = \, |10...0 \rangle$.
This is followed by the application of $n-1$ successive $F$ gates.
$$|\varphi_{1}\rangle = F_{n-1,\,n}\, ... \, F_{k,\, k+1}\, ... \, F_{2,\, 3} \,F_{1,\, 2}\,|\varphi_{0} \rangle \,= \; \sqrt{\frac{1}{n}} \: (\:|10...0\rangle \: + |11...0\rangle \: +...+ |11...1\rangle \:) $$
Then, $n-1$ $cNOT$ gates are applied. The final circuit is:
$$|W_{n}\rangle \,= cNOT_{n,\, n-1}\, cNOT_{n-1,\, n-2}...cNOT_{k,\, k-1}...cNOT_{2,\, 1}\,\,|\varphi_{1} \rangle$$
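As a quick numeric illustration of the rotation angles used below (a sketch added here, not part of the original notebook), the two F gates of a three-qubit W state use:
import numpy as np
n = 3
for k in range(1, n):
    theta_k = np.arccos(np.sqrt(1 / (n - k + 1)))
    print("k =", k, "theta_k =", theta_k)
# k = 1: arccos(sqrt(1/3)) ~ 0.9553 rad, k = 2: arccos(sqrt(1/2)) = pi/4 ~ 0.7854 rad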
Let's now launch into the adventure of deterministically producing W states, on a simulator or in the real world!
Now you will have the opportunity to choose your backend.
(If you run the following cells in sequence, you will end up with the local simulator, which is a good choice for a first trial.)
End of explanation
# 3-qubit W state Step 1
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
for i in range(3) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
Explanation: Three-qubit W state, step 1
In this section, the production of a three qubit W state will be examined step by step.
In this circuit, the starting state is now: $ |\varphi_{0} \rangle \, = \, |100\rangle$.
The entire circuit corresponds to:
$$ |W_{3}\rangle \,=\, cNOT_{3,2}\, \, cNOT_{2,1}\, \, F_{2,3} \,
\, F_{1,2} \, \, |\varphi_{0} \rangle \, $$
Run the following cell to see what happens when we first apply $F_{1,2}$.
End of explanation
# 3-qubit W state, first and second steps
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
Explanation: Three-qubit W state: adding step 2
In the previous step you obtained an histogram compatible with the following state:
$$ |\varphi_{1} \rangle= F_{1,2}\, |\varphi_{0} \rangle\,=F_{1,2}\, \,|1 0 0 \rangle=\frac{1}{\sqrt{3}} \: |1 0 0 \rangle \: + \sqrt{\frac{2}{3}} \: |1 1 0 \rangle $$
NB: Depending on the backend, it happens that the order of the qubits is modified, but without consequence for the state finally reached.
We seem far from the ultimate goal.
Run the following circuit to obtain $|\varphi_{2} \rangle =F_{2,3}\, \, |\varphi_{1} \rangle$
End of explanation
# 3-qubit W state
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 21
W_states.cx(q[0],q[1]) # cNOT 32
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
Explanation: Three-qubit W state, full circuit
In the previous step, we got an histogram compatible with the state:
$$ |\varphi_{2} \rangle =F_{2,3}\, \, |\varphi_{1} \rangle=F_{2,3}\, \, (\frac{1}{\sqrt{3}} \: |1 0 0 \rangle \: + \sqrt{\frac{2}{3}} \: |1 1 0 )= \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |1 1 0 \:\rangle + |1 1 1\rangle) $$
NB: Again, depending on the backend, it happens that the order of the qubits is modified, but without consequence for the state finally reached.
It looks like we are nearing the goal.
Indeed, two $cNOT$ gates will make it possible to create a W state.
Run the following cell to see what happens. Did we succeed?
End of explanation
# 4-qubit W state
n = 4
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[3]) #start is |1000>
F_gate(W_states,q,3,2,4,1) # Applying F12
F_gate(W_states,q,2,1,4,2) # Applying F23
F_gate(W_states,q,1,0,4,3) # Applying F34
cxrv(W_states,q,2,3) # cNOT 21
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 32
W_states.cx(q[0],q[1]) # cNOT 43
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(4) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 4-qubit ', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 4-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
Explanation: Now you get an histogram compatible with the final state $|W_{3}\rangle$ through the following steps:
$$ |\varphi_{3} \rangle = cNOT_{2,1}\, \, |\varphi_{2} \rangle =cNOT_{2,1}\,\frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |1 1 0 \rangle\: + |1 1 1\rangle) = \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |0 1 0 \: + |0 1 1\rangle) $$
$$ |W_{3} \rangle = cNOT_{3,2}\, \, |\varphi_{3} \rangle =cNOT_{3,2}\,\frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |010 \: \rangle+ |0 1 1\rangle) = \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |0 1 0 \: + |0 0 1\rangle) $$
Bingo!
Four-qubit W state
In this section, the production of a four-qubit W state will be obtained by extending the previous circuit.
In this circuit, the starting state is now: $ |\varphi_{0} \rangle \, = \, |1000\rangle$.
A $F$ gate was added at the beginning of the circuit and a $cNOT$ gate was added before the measurement phase.
The entire circuit corresponds to:
$$ |W_{4}\rangle \,=\, cNOT_{4,3}\, \, cNOT_{3,2}\, \, cNOT_{2,1}\, \, F_{3,4} \, \, F_{2,3} \, \, F_{1,2} \, \,|\varphi_{0} \rangle \, $$
Run the following circuit and see what happens.
End of explanation
# 5-qubit W state
n = 5
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[4]) #start is |10000>
F_gate(W_states,q,4,3,5,1) # Applying F12
F_gate(W_states,q,3,2,5,2) # Applying F23
F_gate(W_states,q,2,1,5,3) # Applying F34
F_gate(W_states,q,1,0,5,4) # Applying F45
W_states.cx(q[3],q[4]) # cNOT 21
cxrv(W_states,q,2,3) # cNOT 32
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 43
W_states.cx(q[0],q[1]) # cNOT 54
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(5) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 5-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 5-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
Explanation: Now, if you used a simulator, you get an histogram clearly compatible with the state:
$$ |W_{4}\rangle \;=\; \frac{1}{2} \: (\:|1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle \:) $$
If you used a real quantum computer, the columns of the histogram compatible with a $|W_{4}\rangle$ state are not all among the highest ones. Errors are spreading...
Five-qubit W state
In this section, a five-qubit W state will be obtained, again by extending the previous circuit.
In this circuit, the starting state is now: $ |\varphi_{0} \rangle = |10000\rangle$.
A $F$ gate was added at the beginning of the circuit and an additionnal $cNOT$ gate was added before the measurement phase.
$$ |W_{5}\rangle = cNOT_{5,4} cNOT_{4,3} cNOT_{3,2} cNOT_{2,1} F_{4,5} F_{3,4} F_{2,3} F_{1,2} |\varphi_{0} \rangle $$
Run the following cell and see what happens.
End of explanation |
3,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#In-Built-Functions" data-toc-modified-id="In-Built-Functions-1"><span class="toc-item-num">1 </span>In-Built Functions</a></div><div class="lev1 toc-item"><a href="#Importing-from-Libraries" data-toc-modified-id="Importing-from-Libraries-2"><span class="toc-item-num">2 </span>Importing from Libraries</a></div><div class="lev2 toc-item"><a href="#Best-Practices-in-Importing" data-toc-modified-id="Best-Practices-in-Importing-21"><span class="toc-item-num">2.1 </span>Best Practices in Importing</a></div><div class="lev1 toc-item"><a href="#User-Defined-Functions" data-toc-modified-id="User-Defined-Functions-3"><span class="toc-item-num">3 </span>User Defined Functions</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-31"><span class="toc-item-num">3.1 </span>Exercise</a></div>
# In-Built Functions
So here's some good news!
<img src="images/functions.jpg">
A function is a chunk of reusable, modular code that allows a programmer to write more efficient programs. <br>
Functions can be inbuilt, imported from a library, or user defined. We will cover all of them in this tutorial. Let's first look at inbuilt functions. As always, you can read more about these in Python's official documentation on [Inbuilt Functions](https
Step1: So as you can see, you have used a lot of functions already.
Now compare the Python code above to one way to handle a similar problem in C. Just to be clear, C is a very powerful language, so this isn't meant to show C in poor light, but to show you an area where Python is much more user friendly.
Importing from Libraries
<img src="images/python.png">
<br>This image above is from the awesome XKCD series. If you're not familiar with it, I encourage you to check them out, hilarious geek humour! <br>
We will often have to import a lot of functions from existing libraries in Python.<br>
Why? Because someone else very generously wrote hundreds or thousands of lines of codes so we don't have to. The large library base of Python is one of the reasons Data Scientists use it.
This is how the import syntax works
Step2: Best Practices in Importing
First off, what I refer to as libraries, others might also refer to as Modules.
Next - it's generally bad to use "from library import *"
Quoting from the Google Python Style Guide
Step3: Let's also look at the 'return' statement. When you print something using the standard print syntax
Step4: As with the print statement, you can perform this in a single line
Step5: For a data scientist, writing efficient functions can come in really handy during the data cleaning phase. Let's see one example.
Step6: One important thing while dealing with functions. What happens inside a function, stays in a function.
<img src="images/vegas.gif">
None of the variables defined inside a function can be called outside of it. Want to test it out?
Step7: Remember, what happens in a function, stays in a function! Except return. That will come back!
Let's write a function to generate a password. The input n will be an integer specifying the password length. Of course, for practical purposes, this could be useless if we enter 100 or 100000.<br>
Let's begin with some starter code.
import random
import string
random.choice(string.ascii_letters)
This will generate a random string. Let's find a way to use this to generate a password of any length.
Step8: Exercise
Here's a challenge for you. Can you modify the above code to make sure that a password is at least 6 characters, but not more than 20 characters? The first letter must be a capital letter, and there has to be at least 1 number. The code below can help with generating random capitalised characters, and numbers.
random.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
random.choice('123456789')
It's ok to struggle with this - the more you see different patterns of code, the more you learn. Don't panic if you can't figure it out in 2 minutes. Start out with writing some pseudo-code in plain English (or your native language), tracing out how you would go about this. | Python Code:
# Some functions already covered
nums = [num**2 for num in range(1,11)]
print(nums) #print is a function, atleast Python 3.x onwards
# In Python 2.x - Not a function, it's a statement.
# Will give an error in Python 3.x
print nums
len(nums)
max(nums)
min(nums)
sum(nums)
nums.reverse()
nums
# Reverse a string
# Notice how many functions we use
word = input("Enter a word:")
word = list(word)
word.reverse()
word = ''.join(word)
word
word = input("Enter a word: ")
word = list(word)
word
word.reverse()
word = "".join(word)
word
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#In-Built-Functions" data-toc-modified-id="In-Built-Functions-1"><span class="toc-item-num">1 </span>In-Built Functions</a></div><div class="lev1 toc-item"><a href="#Importing-from-Libraries" data-toc-modified-id="Importing-from-Libraries-2"><span class="toc-item-num">2 </span>Importing from Libraries</a></div><div class="lev2 toc-item"><a href="#Best-Practices-in-Importing" data-toc-modified-id="Best-Practices-in-Importing-21"><span class="toc-item-num">2.1 </span>Best Practices in Importing</a></div><div class="lev1 toc-item"><a href="#User-Defined-Functions" data-toc-modified-id="User-Defined-Functions-3"><span class="toc-item-num">3 </span>User Defined Functions</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-31"><span class="toc-item-num">3.1 </span>Exercise</a></div>
# In-Built Functions
So here's some good news!
<img src="images/functions.jpg">
A function is a chunk of reusable, modular code that allows a programmer to write more efficient programs. <br>
Functions can be inbuilt, inherited from a library, or user defined. We will cover all of them in this tutorial. Let's first look at Inbuilt functions. As always, you can read more about these in Python's official documentation on [Inbuilt Functions](https://docs.python.org/3.6/library/functions.html).
End of explanation
# Import the library
import random
# Initiate a for loop
for i in range(5):
# x is equal to a random value, that we got from the library named random, and the function named random()
x = random.random()
print(round(x,2))
# Circumference of a circle
from math import pi
radius = int(input("Enter the radius in cm: "))
c = 2*pi*radius
area = pi*(radius**2)
print("The circumference of the circle is: ",c)
print("The area of the circle is: ", area)
Explanation: So as you can see, you have used a lot of functions already.
Now compare the Python code above to one way to handle a similar problem in C. Just to be clear, C is a very powerful language, so this isn't meant to show C in poor light, but to show you an area where Python is much more user friendly.
Importing from Libraries
<img src="images/python.png">
<br>This image above is from the awesome XKCD series. If you're not familiar with it, I encourage you to check them out, hilarious geek humour! <br>
We will often have to import a lot of functions from existing libraries in Python.<br>
Why? Because someone else very generously wrote hundreds or thousands of lines of codes so we don't have to. The large library base of Python is one of the reasons Data Scientists use it.
This is how the import syntax works:<br>
import libraryName
import libraryName as ln
from libraryName import specificFunction
from libraryName import *
Let's see a few examples below.
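One form from that list which the examples do not use is the as alias; here is a small illustrative sketch (the alias names are arbitrary):
import random as rnd        # alias an entire library
from math import pi as PI   # alias a single name from a library
print(round(rnd.random(), 2))
print(2 * PI * 3)           # circumference of a circle of radius 3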
End of explanation
# Function to say hello to the user
def say_hello():
name = input("What's your name? ")
print("Hello ", name,"!")
say_hello()
list_a = ["a", 1, 42, 19, "c", "23",1,2,3,4,5,6]
type(list_a)
for i in list_a:
print(type(i))
len(list_a)
def list_scorer(list_name):
if type(list_name) == list:
print("Correctly identified a list of length,",len(list_name),"items.")
for i in list_name:
print(type(i))
else:
print("This is not a list.")
list_scorer(list_a)
list_scorer("Hello")
Explanation: Best Practices in Importing
First off, what I refer to as libraries, others might also refer to as Modules.
Next - it's generally bad to use "from library import *"
Quoting from the Google Python Style Guide:
```
Use `import x` for importing packages and modules.
Use `from x import y` where x is the package prefix and y is the module name with no prefix.
Use `from x import y as z` if two modules named y are to be imported or if y is an inconveniently long name.
For example the module sound.effects.echo may be imported as follows:
from sound.effects import echo
...
echo.EchoFilter(input, output, delay=0.7, atten=4)
Do not use relative names in imports. Even if the module is in the same package, use the full package name. This helps prevent unintentionally importing a package twice.
```
Don't worry if you haven't understood all of that, I just want you to recall some of the keywords mentioned here when we start implementing these principles.
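As a tiny, contrived sketch of why from library import * is risky: it can silently rebind names you have already defined.
pow = "my own variable"
from math import *     # quietly replaces our pow with math.pow
print(pow)             # no longer "my own variable"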
User Defined Functions
By now, you should have noticed a pattern. Every new technique we learn, is progressively more and more powerful. And we also combine many of the techniques we have already learnt. <br>
User defined functions are simply a way for a user - you - to define code blocks that do exactly what you want. Don't want to write code to say "Hello User!" every time? Define a function! <br>
As usual - very simple syntax. <br><br>
def function_name(optional_input):
inputs or variables
functions like print
a return function if needed
<br><br>
Let's break this down in our examples.
End of explanation
def sq_num(num):
squared = num ** 2
return squared
sq_num(10)
Explanation: Let's also look at the 'return' statement. When you print something using the standard print syntax:<br>
print("Hello World!")
print(2**2)
it only displays that output to the screen. So in the case of 2**2 which is 4, the value is merely displayed, but it cannot be used by the program for any purpose.
For that, we 'return' the value to the program. <br>
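A quick sketch that makes the difference visible:
displayed = print(2**2)   # prints 4, but print() itself hands back nothing useful
computed = 2**2           # the value is returned to the program and can be reused
print(displayed)          # None
print(computed + 1)       # 5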
<img src="images/simples.gif">
End of explanation
def sq_num2(num):
return num**2
sq_num2(5)
Explanation: As with the print statement, you can perform this in a single line:
End of explanation
def clean_up(tel_num):
result = ""
digits = {"0","1","2","3","4","5","6","7","8","9"}
for character in tel_num:
if character in digits:
result = result + character
return result
client_phone_num = "+1-555-123-1234 Barone Sanitation (Call only day time)"
clean_up(client_phone_num)
Explanation: For a data scientist, writing efficient functions can come in really handy during the data cleaning phase. Let's see one example.
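As an aside, the same cleaning step can be written more compactly; a sketch of an equivalent one-liner (the function name here is just illustrative):
def clean_up_short(tel_num):
    # keep only the digit characters
    return "".join(ch for ch in tel_num if ch.isdigit())
print(clean_up_short("+1-555-123-1234 Barone Sanitation (Call only day time)"))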
End of explanation
def cubist(num):
cubed = num**3
return cubed
cubist(3)
print(cubed) # Will give an error
Explanation: One important thing while dealing with functions. What happens inside a function, stays in a function.
<img src="images/vegas.gif">
None of the variables defined inside a function can be called outside of it. Want to test it out?
End of explanation
import random
import string
random.choice(string.ascii_letters)
def pass_gen(n):
# Initiate a blank password
password = ""
# Remember, n+1
for letter in range(1,n+1):
# We add a random character to our blank password
password = password + random.choice(string.ascii_letters)
return password
pass_gen(8)
Explanation: Remember, what happens in a function, stays in a function! Except return. That will come back!
Let's write a function to generate a password. The input n will be an integer specifying the password length. Of course, for practical purposes, this could be useless if we enter 100 or 100000.<br>
Let's begin with some starter code.
import random
import string
random.choice(string.ascii_letters)
This will generate a random string. Let's find a way to use this to generate a password of any length.
End of explanation
# Your code below
Explanation: Exercise
Here's a challenge for you. Can you modify the above code to make sure that a password is at least 6 characters, but not more than 20 characters? The first letter must be a capital letter, and there has to be at least 1 number. The code below can help with generating random capitalised characters, and numbers.
random.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
random.choice('123456789')
It's ok to struggle with this - the more you see different patterns of code, the more you learn. Don't panic if you can't figure it out in 2 minutes. Start out with writing some pseudo-code in plain English (or your native language), tracing out how you would go about this.
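If you want something to compare against afterwards, here is one possible hedged sketch of a solution; treat it as a starting point rather than the only answer.
import random
import string
def pass_gen2(n):
    # clamp the requested length into the 6-20 range
    n = max(6, min(n, 20))
    password = random.choice(string.ascii_uppercase)   # first letter is a capital
    password += random.choice("123456789")             # guarantee at least one number
    for _ in range(n - 2):
        password += random.choice(string.ascii_letters)
    return password
print(pass_gen2(10))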
End of explanation |
3,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lists
<img src="../images/python-logo.png">
Lists are sequences that hold heterogeneous data types, separated by commas between two square brackets. Lists have zero-based indexing, which means that the first element in the list has an index of '0', the second element has an index of '1', and so on. The last element of the list has an index of 'N-1' where N is the length of the list.
Step1: Define a list
Step2: Accesing elements of a list | Python Code:
# import the random numbers module. More on modules in a future notebook
import random
Explanation: Lists
<img src="../images/python-logo.png">
Lists are sequences that hold heterogeneous data types, separated by commas between two square brackets. Lists have zero-based indexing, which means that the first element in the list has an index of '0', the second element has an index of '1', and so on. The last element of the list has an index of 'N-1' where N is the length of the list.
End of explanation
# empty list
a = list()
# or
a = []
# define a list
a = [1,2,3,4,2,2]
print a
# list of numbers from 0 to 9
a = range(10)
a
Explanation: Define a list
End of explanation
# Python uses zero-based indexing
a[0]
# Get the last element
a[-1]
# Get the next to the last element
a[-2]
a[:]
# Slice the list
a[0:6] # elements with indecies 0, 1, 2, & 3
a = [1,2,2,3,4,4,4,6,7,2,2,2]
# Get the number of occurences of the element 2
a.count(2)
# the original list
a
# remove the element at with index 2 and return that value
a.pop(2)
# a is now modified
a
# delete without return
del a[1] # delete element at index 1
# print a
a
2 not in a
5 in a
# list can contain any type of Python objects, including lists
f = [1, '2', 'a string', [1, ('3', 2)], {'a':1, 'b':2}]
# get element @ index 2
f[2]
# change it
f[2] = 3
f
# length of the list
len(f)
import random
# list comprehension
a = [int(100*random.random()) for i in xrange(150)]
print a
# the same as
a = []
for i in range(150):
a.append(int(100*random.random()))
# get the max and min of a numeric list
max(a), min(a)
# make a tuple into a list
x = (1,2,3,4,5)
list(x)
# add object to the end of the list
x = [1,2,3]
x.append(4)
print x
x.append([6,7,8])
print x
# Appends the contents of seq to list
x.extend([9,10,11,12,[13,14,15]])
print x
x.extend([1,2,3])
x
a = [1,2,3]
b = [4,5,6]
c = a+b
c
# Returns count of how many times obj occurs in list
x.count(3)
# Returns the lowest index in list that obj appears
x.index(10)
# Inserts object obj into list at offset index
print x[3]
x.insert(3, ['a','b','c'])
print x
# Removes and returns last object or obj from list
x.pop()
print x
print x[3]
x.pop(3)
print x
# Removes the first occurrence of obj from list
x = [1,2,2,3,4,5,2,3,4,6,3,4,5,6,2]
x.remove(2)
print x
# Reverses objects of list in place
x.reverse()
print x
# Sort x in place
x.sort()
print x
# duplicate a list
a = [1,2,3]
b = a*5
b
import random
[random.random() for _ in range(0, 10)]
x = [random.randint(0,1000) for _ in range(10)]
print x
random.choice(x)
print range(10)
print range(0,10)
print range(5,16)
print range(-6, 7, 2)
M=[[1,2,3],
[4,5,6],
[7,8,9]]
print M
# put the 2nd column of M in a list
column = []
for row in M:
column.append(row[1])
print column
# list comprehension - another way of extracting the 2nd column of M
column = [row[1] for row in M]
print column
# compute the transpose of the matrix M
[[row[i] for row in M] for i in range(3)]
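# The nested comprehension above can be easier to follow as explicit loops;
# an equivalent sketch that builds the transpose row by row:
transpose = []
for i in range(3):
    new_row = []
    for row in M:
        new_row.append(row[i])
    transpose.append(new_row)
print transpose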
# get the diagonal elements of M
diag = [M[i][i] for i in [0, 1, 2]]
print diag
# build a list with another list as elements
[[x ** 2, x ** 3] for x in range(4)]
# build a list with an if statement
[[x, x/2, x*2] for x in range(-6, 7, 2) if x > 0]
# does the same thing as above but more
big_list = []
for x in range(-6,7,2):
if x > 0:
big_list.append([x, x/2, x*2])
print big_list
# does the same as above but lots of code
big_list = []
for x in range(-6,7,2):
lil_list = []
if x > 0:
lil_list.append(x)
lil_list.append(x/2)
lil_list.append(x*2)
big_list.append(lil_list)
print big_list
L = ["Good", # clint
"Bad", #
"Ugly"]
print L
Explanation: Accessing elements of a list
End of explanation |
3,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BPSK Demodulation in Nonlinear Channels with Deep Neural Networks
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: Specify the parameters of the transmission as the fiber length $L$ (in km), the fiber nonlinearity coefficienty $\gamma$ (given in 1/W/km) and the total noise power $P_n$ (given in dBM. The noise is due to amplified spontaneous emission in amplifiers along the link). We assume a model of a dispersion-less fiber affected by nonlinearity. The model, which is described for instance in [1] is given by an iterative application of the equation
$$
x_{k+1} = x_k\exp\left(\jmath\frac{L}{K}\gamma|x_k|^2\right) + n_{k+1},\qquad 0 \leq k < K
$$
where $x_0$ is the channel input (the modulated, complex symbols) and $x_K$ is the channel output. $K$ denotes the number of steps taken to simulate the channel. Usually $K=50$ gives a good approximation.
[1] S. Li, C. Häger, N. Garcia, and H. Wymeersch, "Achievable Information Rates for Nonlinear Fiber Communication via End-to-end Autoencoder Learning," Proc. ECOC, Rome, Sep. 2018
Step2: We consider BPSK transmission over this channel.
Show constellation as a function of the fiber input power. When the input power is small, the effect of the nonlinearity is small (as $\jmath\frac{L}{K}\gamma|x_k|^2 \approx 0$) and the transmission is dominated by the additive noise. If the input power becomes larger, the effect of the noise (the noise power is constant) becomes less pronounced, but the constellation rotates due to the larger input power and hence effect of the nonlinearity.
Step3: Helper function to plot the constellation together with the decision region. Note that a bit is decided as "1" if $\sigma(\boldsymbol{\theta}^\mathrm{T}\boldsymbol{r}) > \frac12$, i.e., if $\boldsymbol{\theta}^\mathrm{T}\boldsymbol{r}$ > 0. The decision line is therefore given by $\theta_1\Re{r} + \theta_2\Im{r} = 0$, i.e., $\Im{r} = -\frac{\theta_1}{\theta_2}\Re{r}$
Generate training, validation and testing data sets | Python Code:
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib                 # needed for matplotlib.checkdep_usetex() used further below
import matplotlib.pyplot as plt
from ipywidgets import interactive
import ipywidgets as widgets
%matplotlib inline
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:",device)
Explanation: BPSK Demodulation in Nonlinear Channels with Deep Neural Networks
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* demodulation of BPSK symbols in highly nonlinear channels using an artificial neural network, implemented via PyTorch
End of explanation
# Length of transmission (in km)
L = 5000
# fiber nonlinearity coefficient
gamma = 1.27
Pn = -21.3 # noise power (in dBm)
Kstep = 50 # number of steps used in the channel model
def simulate_channel(x, Pin):
# modulate bpsk
input_power_linear = 10**((Pin-30)/10)
norm_factor = np.sqrt(input_power_linear);
bpsk = (1 - 2*x) * norm_factor
# noise variance per step
sigma = np.sqrt((10**((Pn-30)/10)) / Kstep / 2)
temp = np.array(bpsk, copy=True)
for i in range(Kstep):
power = np.absolute(temp)**2
rotcoff = (L / Kstep) * gamma * power
temp = temp * np.exp(1j*rotcoff) + sigma*(np.random.randn(len(x)) + 1j*np.random.randn(len(x)))
return temp
Explanation: Specify the parameters of the transmission as the fiber length $L$ (in km), the fiber nonlinearity coefficienty $\gamma$ (given in 1/W/km) and the total noise power $P_n$ (given in dBM. The noise is due to amplified spontaneous emission in amplifiers along the link). We assume a model of a dispersion-less fiber affected by nonlinearity. The model, which is described for instance in [1] is given by an iterative application of the equation
$$
x_{k+1} = x_k\exp\left(\jmath\frac{L}{K}\gamma|x_k|^2\right) + n_{k+1},\qquad 0 \leq k < K
$$
where $x_0$ is the channel input (the modulated, complex symbols) and $x_K$ is the channel output. $K$ denotes the number of steps taken to simulate the channel. Usually $K=50$ gives a good approximation.
[1] S. Li, C. Häger, N. Garcia, and H. Wymeersch, "Achievable Information Rates for Nonlinear Fiber Communication via End-to-end Autoencoder Learning," Proc. ECOC, Rome, Sep. 2018
End of explanation
length = 5000
def plot_constellation(Pin):
t = np.random.randint(2,size=length)
r = simulate_channel(t, Pin)
plt.figure(figsize=(6,6))
font = {'size' : 14}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
plt.scatter(np.real(r), np.imag(r), c=t, cmap='coolwarm')
plt.xlabel(r'$\Re\{r\}$',fontsize=14)
plt.ylabel(r'$\Im\{r\}$',fontsize=14)
plt.axis('equal')
plt.title('Received constellation (L = %d km, $P_{in} = %1.2f$\,dBm)' % (L, Pin))
#plt.savefig('bpsk_received_zd_%1.2f.pdf' % Pin,bbox_inches='tight')
interactive_update = interactive(plot_constellation, Pin = widgets.FloatSlider(min=-10.0,max=10.0,step=0.1,value=1, continuous_update=False, description='Input Power Pin (dBm)', style={'description_width': 'initial'}, layout=widgets.Layout(width='50%')))
output = interactive_update.children[-1]
output.layout.height = '500px'
interactive_update
Explanation: We consider BPSK transmission over this channel.
Show constellation as a function of the fiber input power. When the input power is small, the effect of the nonlinearity is small (as $\jmath\frac{L}{K}\gamma|x_k|^2 \approx 0$) and the transmission is dominated by the additive noise. If the input power becomes larger, the effect of the noise (the noise power is constant) becomes less pronounced, but the constellation rotates due to the larger input power and hence effect of the nonlinearity.
End of explanation
# helper function to compute the bit error rate
def BER(predictions, labels):
decision = predictions >= 0.5
temp = decision != (labels != 0)
return np.mean(temp)
# set input power
Pin = 3
# validation set. Training examples are generated on the fly
N_valid = 100000
hidden_neurons_1 = 8
hidden_neurons_2 = 14
y_valid = np.random.randint(2,size=N_valid)
r = simulate_channel(y_valid, Pin)
# find extension of data (for normalization and plotting)
ext_x = max(abs(np.real(r)))
ext_y = max(abs(np.imag(r)))
ext_max = max(ext_x,ext_y)*1.2
# scale data so that it lies roughly within [-1, 1]
X_valid = torch.from_numpy(np.column_stack((np.real(r), np.imag(r))) / ext_max).float().to(device)
# meshgrid for plotting
mgx,mgy = np.meshgrid(np.linspace(-ext_max,ext_max,200), np.linspace(-ext_max,ext_max,200))
meshgrid = torch.from_numpy(np.column_stack((np.reshape(mgx,(-1,1)),np.reshape(mgy,(-1,1)))) / ext_max).float().to(device)
class Receiver_Network(nn.Module):
def __init__(self, hidden1_neurons, hidden2_neurons):
super(Receiver_Network, self).__init__()
# Linear function, 2 input neurons (real and imaginary part)
self.fc1 = nn.Linear(2, hidden1_neurons)
# Non-linearity
self.activation_function = nn.ELU()
# Linear function (hidden layer)
self.fc2 = nn.Linear(hidden1_neurons, hidden2_neurons)
# Output function
self.fc3 = nn.Linear(hidden2_neurons, 1)
def forward(self, x):
# Linear function, first layer
out = self.fc1(x)
# Non-linearity, first layer
out = self.activation_function(out)
# Linear function, second layer
out = self.fc2(out)
# Non-linearity, second layer
out = self.activation_function(out)
# Linear function, third layer
out = self.fc3(out)
return out
model = Receiver_Network(hidden_neurons_1, hidden_neurons_2)
model.to(device)
sigmoid = nn.Sigmoid()
# channel parameters
norm_factor = np.sqrt(10**((Pin-30)/10));
sigma = np.sqrt((10**((Pn-30)/10)) / Kstep / 2)
# Binary Cross Entropy loss
loss_fn = nn.BCEWithLogitsLoss()
# Adam Optimizer
optimizer = optim.Adam(model.parameters())
# Training parameters
num_epochs = 160
batches_per_epoch = 300
# Vary batch size during training
batch_size_per_epoch = np.linspace(100,10000,num=num_epochs)
validation_BERs = np.zeros(num_epochs)
decision_region_evolution = []
for epoch in range(num_epochs):
batch_labels = torch.empty(int(batch_size_per_epoch[epoch]), device=device)
noise = torch.empty((int(batch_size_per_epoch[epoch]),2), device=device, requires_grad=False)
for step in range(batches_per_epoch):
        # sample a new mini-batch directly on the GPU (if available)
batch_labels.random_(2)
# channel simulation directly on the GPU
bpsk = ((1 - 2*batch_labels) * norm_factor).unsqueeze(-1) * torch.tensor([1.0,0.0],device=device)
for i in range(Kstep):
power = torch.norm(bpsk, dim=1) ** 2
rotcoff = (L / Kstep) * gamma * power
noise.normal_(mean=0, std=sigma) # sample noise
# phase rotation due to nonlinearity
temp1 = bpsk[:,0] * torch.cos(rotcoff) - bpsk[:,1] * torch.sin(rotcoff)
temp2 = bpsk[:,0] * torch.sin(rotcoff) + bpsk[:,1] * torch.cos(rotcoff)
bpsk = torch.stack([temp1, temp2], dim=1) + noise
bpsk = bpsk / ext_max
outputs = model(bpsk)
# compute loss
loss = loss_fn(outputs.squeeze(), batch_labels)
# compute gradients
loss.backward()
optimizer.step()
# reset gradients
optimizer.zero_grad()
# compute validation BER
out_valid = sigmoid(model(X_valid))
validation_BERs[epoch] = BER(out_valid.detach().cpu().numpy().squeeze(), y_valid)
print('Validation BER after epoch %d: %f (loss %1.8f)' % (epoch, validation_BERs[epoch], loss.detach().cpu().numpy()))
# store decision region for generating the animation
mesh_prediction = sigmoid(model(meshgrid))
decision_region_evolution.append(0.195*mesh_prediction.detach().cpu().numpy() + 0.4)
plt.figure(figsize=(8,8))
plt.contourf(mgx,mgy,decision_region_evolution[-1].reshape(mgy.shape).T,cmap='coolwarm',vmin=0.3,vmax=0.695)
plt.scatter(X_valid[:,0].cpu()*ext_max, X_valid[:,1].cpu() * ext_max, c=y_valid, cmap='coolwarm')
print(Pin)
plt.axis('scaled')
plt.xlabel(r'$\Re\{r\}$',fontsize=16)
plt.ylabel(r'$\Im\{r\}$',fontsize=16)
#plt.title(title,fontsize=16)
#plt.savefig('after_optimization.pdf',bbox_inches='tight')
Explanation: Helper function to plot the constellation together with the decision region. Note that a bit is decided as "1" if $\sigma(\boldsymbol{\theta}^\mathrm{T}\boldsymbol{r}) > \frac12$, i.e., if $\boldsymbol{\theta}^\mathrm{T}\boldsymbol{r}$ > 0. The decision line is therefore given by $\theta_1\Re{r} + \theta_2\Im{r} = 0$, i.e., $\Im{r} = -\frac{\theta_1}{\theta_2}\Re{r}$
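For the simpler case of a single linear neuron (not the two-hidden-layer network trained above), that decision line could be drawn directly from the weights; a sketch with assumed example values for theta:
import numpy as np
import matplotlib.pyplot as plt
theta = np.array([1.0, -0.5])                # assumed example weights [theta_1, theta_2]
re_part = np.linspace(-1, 1, 100)
im_part = -(theta[0] / theta[1]) * re_part   # points satisfying theta_1*Re{r} + theta_2*Im{r} = 0
plt.plot(re_part, im_part, 'k--', label='decision line')
plt.legend()
plt.show()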
Generate training, validation and testing data sets
End of explanation |
3,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jeopardy Questions
Jeopardy is a popular TV show in the US where participants answer questions to win money. It's been running for a few decades, and is a major force in popular culture.
Let's say you want to compete on Jeopardy, and you're looking for any edge you can get to win. In this project, you'll work with a dataset of Jeopardy questions to figure out some patterns in the questions that could help you win.
The dataset is named jeopardy.csv, and contains 20000 rows from the beginning of a full dataset of Jeopardy questions, which you can download here.
Each row in the dataset represents a single question on a single episode of Jeopardy. Here are explanations of each column
Step2: Normalizing Text
Before you can start doing analysis on the Jeopardy questions, you need to normalize all of the text columns (the Question and Answer columns). We covered normalization before, but the idea is to ensure that you lowercase words and remove punctuation so Don't and don't aren't considered to be different words when you compare them.
Step4: Normalizing columns
Now that you've normalized the text columns, there are also some other columns to normalize.
The Value column should also be numeric, to allow you to manipulate it more easily. You'll need to remove the dollar sign from the beginning of each value and convert the column from text to numeric.
The Air Date column should also be a datetime, not a string, to enable you to work with it more easily.
Step6: Answers In Questions
In order to figure out whether to study past questions, study general knowledge, or not study it all, it would be helpful to figure out two things
Step7: Answer terms in the question
The answer only appears in the question about 6% of the time. This isn't a huge number, and means that we probably can't just hope that hearing a question will enable us to figure out the answer. We'll probably have to study.
Recycled Questions
Let's say you want to investigate how often new questions are repeats of older ones. You can't completely answer this, because you only have about 10% of the full Jeopardy question dataset, but you can investigate it at least.
To do this, you can
Step10: Question overlap
There is about 69% overlap between terms in new questions and terms in old questions. This only looks at a small set of questions, and it doesn't look at phrases, it looks at single terms. This makes it relatively insignificant, but it does mean that it's worth looking more into the recycling of questions.
Low Value Vs High Value Questions
Let's say you only want to study questions that pertain to high value questions instead of low value questions. This will help you earn more money when you're on Jeopardy.
You can actually figure out which terms correspond to high-value questions using a chi-squared test. You'll first need to narrow down the questions into two categories
Step11: Applying the Chi-Squared Test
Now that you've found the observed counts for a few terms, you can compute the expected counts and the chi-squared value. | Python Code:
import pandas as pd
# Read the dataset into a Pandas DataFrame
jeopardy = pd.read_csv('../data/jeopardy.csv')
# Print out the first 5 rows
jeopardy.head(5)
# Print out the columns
jeopardy.columns
# Remove the spaces from column names
col_names = jeopardy.columns
col_names = [s.strip() for s in col_names]
jeopardy.columns = col_names
jeopardy.columns
Explanation: Jeopardy Questions
Jeopardy is a popular TV show in the US where participants answer questions to win money. It's been running for a few decades, and is a major force in popular culture.
Let's say you want to compete on Jeopardy, and you're looking for any edge you can get to win. In this project, you'll work with a dataset of Jeopardy questions to figure out some patterns in the questions that could help you win.
The dataset is named jeopardy.csv, and contains 20000 rows from the beginning of a full dataset of Jeopardy questions, which you can download here.
Each row in the dataset represents a single question on a single episode of Jeopardy. Here are explanations of each column:
* Show Number -- the Jeopardy episode number of the show this question was in.
* Air Date -- the date the episode aired.
* Round -- the round of Jeopardy that the question was asked in. Jeopardy has several rounds as each episode progresses.
* Category -- the category of the question.
* Value -- the number of dollars answering the question correctly is worth.
* Question -- the text of the question.
* Answer -- the text of the answer.
End of explanation
import re
def normalize_text(text):
    """
    Function to normalize questions and answers.
    @param text : str - input string
    @return str - normalized version of input string
    """
    # Convert the string to lowercase
text = text.lower()
# Remove all punctuation in the string
text = re.sub("[^A-Za-z0-9\s]", "", text)
# Return the string
return text
# Normalize the Question column
jeopardy["clean_question"] = jeopardy["Question"].apply(normalize_text)
# Normalize the Anser column
jeopardy["clean_answer"] = jeopardy["Answer"].apply(normalize_text)
Explanation: Normalizing Text
Before you can start doing analysis on the Jeopardy questions, you need to normalize all of the text columns (the Question and Answer columns). We covered normalization before, but the idea is to ensure that you lowercase words and remove punctuation so Don't and don't aren't considered to be different words when you compare them.
End of explanation
jeopardy.dtypes
def normalize_values(text):
    """
    Function to normalize numeric values.
    @param text : str - input value as a string
    @return int - integer
    """
# Remove any punctuation in the string
text = re.sub("[^A-Za-z0-9\s]", "", text)
# Convert the string to an integer and if there is an error assign 0
try:
text = int(text)
except Exception:
text = 0
return text
# Normalize the Value column
jeopardy["clean_value"] = jeopardy["Value"].apply(normalize_values)
jeopardy["Air Date"] = pd.to_datetime(jeopardy["Air Date"])
jeopardy.dtypes
Explanation: Normalizing columns
Now that you've normalized the text columns, there are also some other columns to normalize.
The Value column should also be numeric, to allow you to manipulate it more easily. You'll need to remove the dollar sign from the beginning of each value and convert the column from text to numeric.
The Air Date column should also be a datetime, not a string, to enable you to work with it more easily.
End of explanation
def count_matches(row):
    """
    Function to take in a row in jeopardy as a Series and count
    the number of terms in the answer which match the question.
    @param row : pd.Series - row from jeopardy DataFrame
    @return float - fraction of answer terms that also appear in the question
    """
# Split the clean_answer and clean_question columns on the space character
split_answer = row['clean_answer'].split(' ')
split_question = row['clean_question'].split(' ')
    # "The" doesn't have any meaningful use in finding the answer
if "the" in split_answer:
split_answer.remove("the")
# Prevent division by 0 error later
if len(split_answer) == 0:
return 0
    # Loop through each item in split_answer, and see if it occurs in split_question
match_count = 0
for item in split_answer:
if item in split_question:
match_count += 1
# Divide match_count by the length of split_answer, and return the result
return match_count / len(split_answer)
# Count how many times terms in clean_answer occur in clean_question
jeopardy["answer_in_question"] = jeopardy.apply(count_matches, axis=1)
# Find the mean of the answer_in_question column
jeopardy["answer_in_question"].mean()
Explanation: Answers In Questions
In order to figure out whether to study past questions, study general knowledge, or not study it all, it would be helpful to figure out two things:
How often the answer is deducible from the question.
How often new questions are repeats of older questions.
You can answer the second question by seeing how often complex words (> 6 characters) reoccur. You can answer the first question by seeing how many times words in the answer also occur in the question. We'll work on the first question now, and come back to the second.
End of explanation
# Create an empty list and an empty set
question_overlap = []
terms_used = set()
# Use the iterrows() DataFrame method to loop through each row of jeopardy
for i, row in jeopardy.iterrows():
split_question = row["clean_question"].split(" ")
split_question = [q for q in split_question if len(q) > 5]
match_count = 0
for word in split_question:
if word in terms_used:
match_count += 1
for word in split_question:
terms_used.add(word)
if len(split_question) > 0:
match_count /= len(split_question)
question_overlap.append(match_count)
jeopardy["question_overlap"] = question_overlap
jeopardy["question_overlap"].mean()
Explanation: Answer terms in the question
The answer only appears in the question about 6% of the time. This isn't a huge number, and means that we probably can't just hope that hearing a question will enable us to figure out the answer. We'll probably have to study.
Recycled Questions
Let's say you want to investigate how often new questions are repeats of older ones. You can't completely answer this, because you only have about 10% of the full Jeopardy question dataset, but you can investigate it at least.
To do this, you can:
* Sort jeopardy in order of ascending air date.
* Maintain a set called terms_used that will be empty initially.
* Iterate through each row of jeopardy.
* Split clean_question into words, remove any word shorter than 6 characters, and check if each word occurs in terms_used.
* If it does, increment a counter.
* Add each word to terms_used.
This will enable you to check if the terms in questions have been used previously or not. Only looking at words greater than 6 characters enables you to filter out words like the and than, which are commonly used, but don't tell you a lot about a question.
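Note that the corresponding code cell walks the rows in their existing order; if you want the chronological ordering described in the first bullet, a short sketch of that extra sort step (run it before the loop) would be:
# process questions in broadcast order before scanning for recycled terms (a sketch)
jeopardy = jeopardy.sort_values("Air Date").reset_index(drop=True)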
End of explanation
def determine_value(row):
    """
    Determine if this is a "Low" or "High" value question.
    @param row : pd.Series - row from jeopardy DataFrame
    @return int - 1 if High Value, 0 if Low Value
    """
value = 0
if row["clean_value"] > 800:
value = 1
return value
jeopardy["high_value"] = jeopardy.apply(determine_value, axis=1)
def count_usage(term):
    """
    Takes in a word and loops through each row in the jeopardy DataFrame and
    counts usage. Usage is counted separately for High Value vs Low Value
    rows.
    @param term : str - word to count usage for
    @return (int, int) - (high_count, low_count)
    """
low_count = 0
high_count = 0
for i, row in jeopardy.iterrows():
if term in row["clean_question"].split(" "):
if row["high_value"] == 1:
high_count += 1
else:
low_count += 1
return high_count, low_count
comparison_terms = list(terms_used)[:5]
observed_expected = []
for term in comparison_terms:
observed_expected.append(count_usage(term))
observed_expected
Explanation: Question overlap
There is about 69% overlap between terms in new questions and terms in old questions. This only looks at a small set of questions, and it doesn't look at phrases, it looks at single terms. This makes it relatively insignificant, but it does mean that it's worth looking more into the recycling of questions.
Low Value Vs High Value Questions
Let's say you only want to study questions that pertain to high value questions instead of low value questions. This will help you earn more money when you're on Jeopardy.
You can actually figure out which terms correspond to high-value questions using a chi-squared test. You'll first need to narrow down the questions into two categories:
* Low value -- Any row where Value is less than 800.
* High value -- Any row where Value is greater than 800.
You'll then be able to loop through each of the terms from the last screen, terms_used, and:
* Find the number of low value questions the word occurs in.
* Find the number of high value questions the word occurs in.
* Find the percentage of questions the word occurs in.
* Based on the percentage of questions the word occurs in, find expected counts.
* Compute the chi squared value based on the expected counts and the observed counts for high and low value questions.
You can then find the words with the biggest differences in usage between high and low value questions, by selecting the words with the highest associated chi-squared values. Doing this for all of the words would take a very long time, so we'll just do it for a small sample now.
End of explanation
from scipy.stats import chisquare
import numpy as np
high_value_count = jeopardy[jeopardy["high_value"] == 1].shape[0]
low_value_count = jeopardy[jeopardy["high_value"] == 0].shape[0]
chi_squared = []
for obs in observed_expected:
total = sum(obs)
total_prop = total / jeopardy.shape[0]
high_value_exp = total_prop * high_value_count
low_value_exp = total_prop * low_value_count
observed = np.array([obs[0], obs[1]])
expected = np.array([high_value_exp, low_value_exp])
chi_squared.append(chisquare(observed, expected))
chi_squared
Explanation: Applying the Chi-Squared Test
Now that you've found the observed counts for a few terms, you can compute the expected counts and the chi-squared value.
End of explanation |
3,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSAL4243
Step1: <br>
Task 1
Step4: Linear Regression with Gradient Descent code
Step5: Run Gradient Descent on training data
Step6: Plot trained line on data
Step7: <br>
Task 2
Step8: Upload .csv file to Kaggle.com
Create an account at https
Step9: <br>
Task 3
Step10: <br>
Task 4 | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn import linear_model
import matplotlib.pyplot as plt
import matplotlib as mpl
# read house_train.csv data in pandas dataframe df_train using pandas read_csv function
df_train = pd.read_csv('datasets/house_price/train.csv', encoding='utf-8')
# check data by printing first few rows
df_train.head()
# check columns in dataset
df_train.columns
# check correlation matrix, darker means more correlation
corrmat = df_train.corr()
f, aX_train= plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
# SalePrice correlation matrix with top k variables
k = 10 #number of variables for heatmap
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(df_train[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
#scatterplot with some important variables
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']
sns.set()
sns.pairplot(df_train[cols], size = 2.5)
plt.show();
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan ([email protected])
Assignment 1: Linear Regression
In this assignment you are going to learn how Linear Regression works by using the code for linear regression and gradient descent we have been looking at in the class. You are also going to use linear regression from scikit-learn library for machine learning. You are going to learn how to download data from kaggle (a website for datasets and machine learning) and upload submissions to kaggle competitions. And you will be able to compete with the world.
Overview
Pseudocode
Tasks
Load and analyze data
Task 1: Effect of Learning Rate $\alpha$
Load X and y
Linear Regression with Gradient Descent code
Run Gradient Descent on training data
Plot trained line on data
Task 2: Predict test data output and submit it to Kaggle
Upload .csv file to Kaggle.com
Task 3: Use scikit-learn for Linear Regression
Task 4: Multivariate Linear Regression
Resources
Credits
<br>
<br>
Pseudocode
Linear Regression with Gradient Descent
Load training data into X_train and y_train
[Optionally] normalize features X_train using $x^i = \frac{x^i - \mu^i}{\rho^i}$ where $\mu^i$ is mean and $\rho^i$ is standard deviation of feature $i$
Initialize hyperparameters
iterations
learning rate $\alpha$
Initialize $\theta_s$
At each iteration
Compute cost using $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$ where $h(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 .... + \theta_n x_n$
Update $\theta_s$ using $\begin{align} \; \; & \theta_j := \theta_j - \alpha \frac{1}{m} \sum\limits_{i=1}^{m} (h_\theta(x_{i}) - y_{i}) \cdot x^j_{i} \; & & \text{for j := 0...n} \end{align}$
[Optionally] Break if cost $J(\theta)$ does not change.
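For reference, the per-parameter update above can also be written as a single vectorized step; a sketch (not part of the assignment code; X is assumed to already contain the column of 1's and y to be an m-by-1 vector):
import numpy as np
def gd_step(theta, X, y, alpha):
    # one vectorized gradient-descent update
    m = X.shape[0]
    return theta - (alpha / m) * X.T.dot(X.dot(theta) - y)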
<br>
<br>
Download House Prices dataset
The dataset you are going to use in this assignment is called House Prices, available at kaggle. To download the dataset go to the dataset's Data tab. Download 'train.csv', 'test.csv', 'data_description.txt' and 'sample_submission.csv.gz' files. 'train.csv' is going to be used for training the model. 'test.csv' is used to test the model i.e. generalization. 'data_description.txt' contains the feature descriptions of the dataset. 'sample_submission.csv.gz' contains a sample submission file that you need to generate to be submitted to kaggle.
<br>
Tasks
Effect of Learning Rate $\alpha$
Predict test data output and submit it to Kaggle
Use scikit-learn for Linear Regression
Multivariate Linear Regression
Load and analyze data
End of explanation
# Load X and y variables from pandas dataframe df_train
cols = ['GrLivArea']
X_train = np.array(df_train[cols])
y_train = np.array(df_train[["SalePrice"]])
# Get m = number of samples and n = number of features
m = X_train.shape[0]
n = X_train.shape[1]
# append a column of 1's to X for theta_0
X_train = np.insert(X_train,0,1,axis=1)
Explanation: <br>
Task 1: Effect of Learning Rate $\alpha$
Use Linear Regression code below using X="GrLivArea" as input variable and y="SalePrice" as target variable. Use different values of $\alpha$ given in table below and comment on why they are useful or not and which one is a good choice.
$\alpha=0.000001$:
$\alpha=0.00000001$:
$\alpha=0.000000001$:
<br>
Load X and y
End of explanation
iterations = 1500
alpha = 0.000000001 # change it and find what happens
def h(X, theta): #Linear hypothesis function
hx = np.dot(X,theta)
return hx
def computeCost(theta,X,y): #Cost function
    """
    theta is an n-dimensional vector, X is a matrix with n columns and m rows,
    y is a matrix with m rows and 1 column
    """
#note to self: *.shape is (rows, columns)
return float((1./(2*m)) * np.dot((h(X,theta)-y).T,(h(X,theta)-y)))
#Actual gradient descent minimizing routine
def gradientDescent(X,y, theta_start = np.zeros((n+1,1))):
    """
    theta_start is an n-dimensional vector of initial theta guesses.
    X is the input variable matrix with n columns and m rows; y is a matrix with m rows and 1 column.
    """
theta = theta_start
j_history = [] #Used to plot cost as function of iteration
theta_history = [] #Used to visualize the minimization path later on
for meaninglessvariable in range(iterations):
        tmptheta = theta.copy()  # copy (not alias) so all theta values are updated simultaneously
# append for plotting
j_history.append(computeCost(theta,X,y))
theta_history.append(list(theta[:,0]))
#Simultaneously updating theta values
for j in range(len(tmptheta)):
tmptheta[j] = theta[j] - (alpha/m)*np.sum((h(X,theta) - y)*np.array(X[:,j]).reshape(m,1))
theta = tmptheta
return theta, theta_history, j_history
Explanation: Linear Regression with Gradient Descent code
End of explanation
#Actually run gradient descent to get the best-fit theta values
initial_theta = np.zeros((n+1,1));
theta, theta_history, j_history = gradientDescent(X_train,y_train,initial_theta)
plt.plot(j_history)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
Explanation: Run Gradient Descent on training data
End of explanation
# predict output for training data
hx_train= h(X_train, theta)
# plot it
plt.scatter(X_train[:,1],y_train)
plt.plot(X_train[:,1],hx_train[:,0], color='red')
plt.show()
Explanation: Plot trained line on data
End of explanation
# read data in pandas frame df_test and check first few rows
# write code here
df_test.head()
# check statistics of test data, make sure no data is missing.
print(df_test.shape)
df_test[cols].describe()
# Get X_test, no target variable (SalePrice) provided in test data. It is what we need to predict.
X_test = np.array(df_test[cols])
#Insert the usual column of 1's into the "X" matrix
X_test = np.insert(X_test,0,1,axis=1)
# predict test data labels i.e. y_test
predict = h(X_test, theta)
# save prediction as .csv file
pd.DataFrame({'Id': df_test.Id, 'SalePrice': predict[:,0]}).to_csv("predict1.csv", index=False)
Explanation: <br>
Task 2: Predict test data output and submit it to Kaggle
In this task we will use the model trained above to predict "SalePrice" on test data. Test data has all the input variables/features but no target variable. Out aim is to use the trained model to predict the target variable for test data. This is called generalization i.e. how good your model works on unseen data. The output in the form "Id","SalePrice" in a .csv file should be submitted to kaggle. Please provide your score on kaggle after this step as an image. It will be compared to the 5 feature Linear Regression later.
End of explanation
from IPython.display import Image
Image(filename='images/asgn_01.png', width=500)
Explanation: Upload .csv file to Kaggle.com
Create an account at https://www.kaggle.com
Go to https://www.kaggle.com/c/house-prices-advanced-regression-techniques/submit
Upload "predict1.csv" file created above.
Upload your score as an image below.
End of explanation
# import scikit-learn linear model
from sklearn import linear_model
# get X and y
# write code here
# Create linear regression object
# write code here check link above for example
# Train the model using the training sets. Use fit(X,y) command
# write code here
# The coefficients
print('Intercept: \n', regr.intercept_)
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% np.mean((regr.predict(X_train) - y_train) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_train, y_train))
# read test X without 1's
# write code here
# predict output for test data. Use predict(X) command.
predict2 = # write code here
# remove negative sales by replacing them with zeros
predict2[predict2<0] = 0
# save prediction as predict2.csv file
# write code here
Explanation: <br>
Task 3: Use scikit-learn for Linear Regression
In this task we are going to use the Linear Regression class from the scikit-learn library to train the same model. The aim is to move from understanding the algorithm to using an existing, well established library. There is a Linear Regression example available on the scikit-learn website as well.
Use the scikit-learn linear regression class to train the model on df_train
Compare the parameters from scikit-learn linear_model.LinearRegression.coef_ to the $\theta_s$ from earlier.
Use the linear_model.LinearRegression.predict on test data and upload it to kaggle. See if your score improves. Provide screenshot.
Note: no need to append 1's to X_train. Scikit-learn linear regression has a parameter called fit_intercept that is enabled by default.
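For orientation only, a minimal hedged sketch of the scikit-learn flow for this task (variable names are placeholders; fill in the assignment cells yourself):
from sklearn import linear_model
regr = linear_model.LinearRegression()   # fit_intercept=True by default
regr.fit(X_train, y_train)               # X_train here without the extra column of 1's
print(regr.intercept_, regr.coef_)
predictions = regr.predict(X_train)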
End of explanation
# define columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']
# write code here
# check features range and statistics. Training dataset looks fine as all features has same count.
df_train[cols].describe()
# Load X and y variables from pandas dataframe df_train
# write code here
# Get m = number of samples and n = number of features
# write code here
#Feature normalizing the columns (subtract mean, divide by standard deviation)
#Store the mean and std for later use
#Note don't modify the original X matrix, use a copy
stored_feature_means, stored_feature_stds = [], []
Xnorm = np.array(X_train).copy()
for icol in range(Xnorm.shape[1]):
stored_feature_means.append(np.mean(Xnorm[:,icol]))
stored_feature_stds.append(np.std(Xnorm[:,icol]))
#Skip the first column if 1's
# if not icol: continue
#Faster to not recompute the mean and std again, just used stored values
Xnorm[:,icol] = (Xnorm[:,icol] - stored_feature_means[-1])/stored_feature_stds[-1]
# check data after normalization
pd.DataFrame(data=Xnorm,columns=cols).describe()
# Run Linear Regression from scikit-learn or code given above.
# write code here. Repeat from above.
# To predict output using ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] as input features.
# Check features range and statistics to see if there is any missing data.
# As you can see from count, "GarageCars" and "TotalBsmtSF" have 1 missing value each.
df_test[cols].describe()
# Replace missing value with the mean of the feature
df_test['GarageCars'] = df_test['GarageCars'].fillna((df_test['GarageCars'].mean()))
df_test['TotalBsmtSF'] = df_test['TotalBsmtSF'].fillna((df_test['TotalBsmtSF'].mean()))
df_test[cols].describe()
# read test X without 1's
# write code here
# predict using trained model
predict3 = # write code here
# replace any negative predicted saleprice by zero
predict3[predict3<0] = 0
# predict target/output variable for test data using the trained model and upload to kaggle.
# write code to save output as predict3.csv here
Explanation: <br>
Task 4: Multivariate Linear Regression
Lastly use columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] and scikit-learn or the code given above to predict output on test data. Upload it to kaggle like earlier and see how much it improves your score.
Everything remains the same except the dimensions of X change.
There might be some data missing from the test or train data that you can check using pandas.DataFrame.describe() function. Below we provide some helping functions for removing that data.
End of explanation |
3,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NATURAL LANGUAGE PROCESSING APPLICATIONS
In this notebook we will take a look at some indicative applications of natural language processing. We will cover content from nlp.py and text.py, for chapters 22 and 23 of Stuart Russell's and Peter Norvig's book Artificial Intelligence
Step1: We can use this information to build a Naive Bayes Classifier that will be used to categorize sentences (you can read more on Naive Bayes on the learning notebook). The classifier will take as input the probability distribution of bigrams and given a list of bigrams (extracted from the sentence to be classified), it will calculate the probability of the example/sentence coming from each language and pick the maximum.
Let's build our classifier, with the assumption that English is as probable as German (the input is a dictionary with values the text models and keys the tuple language, probability)
Step2: Now we need to write a function that takes as input a sentence, breaks it into a list of bigrams and classifies it with the naive bayes classifier from above.
Once we get the text model for the sentence, we need to unravel it. The text models show the probability of each bigram, but the classifier can't handle that extra data. It requires a simple list of bigrams. So, if the text model shows that a bigram appears three times, we need to add it three times in the list. Since the text model stores the n-gram information in a dictionary (with the key being the n-gram and the value the number of times the n-gram appears) we need to iterate through the items of the dictionary and manually add them to the list of n-grams.
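A tiny self-contained sketch of that "unravel" step, using a toy dictionary in place of the real text model:
counts = {('t', 'h'): 3, ('h', 'e'): 1}   # toy n-gram counts
ngrams = [gram for gram, c in counts.items() for _ in range(c)]
print(ngrams)   # [('t', 'h'), ('t', 'h'), ('t', 'h'), ('h', 'e')]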
Step3: Now we can start categorizing sentences.
Step4: You can add more languages if you want, the algorithm works for as many as you like! Also, you can play around with n. Here we used 2, but other numbers work too (even though 2 suffices). The algorithm is not perfect, but it has high accuracy even for small samples like the ones we used. That is because English and German are very different languages. The closer together languages are (for example, Norwegian and Swedish share a lot of common ground) the lower the accuracy of the classifier.
AUTHOR RECOGNITION
Another similar application to language recognition is recognizing who is more likely to have written a sentence, given text written by them. Here we will try and predict text from Edwin Abbott and Jane Austen. They wrote Flatland and Pride and Prejudice respectively.
We are optimistic we can determine who wrote what based on the fact that Abbott wrote his novella at a much later date than Austen, which means there will be linguistic differences between the two works. Indeed, Flatland uses more modern and direct language while Pride and Prejudice is written in a more archaic tone containing more sophisticated wording.
Similarly with Language Recognition, we will first import the two datasets. This time though we are not looking for connections between characters, since that wouldn't give great results. Why? Because both authors use English and English follows a set of patterns, as we showed earlier. Trying to determine authorship based on these patterns would not be very efficient.
Instead, we will abstract our querying to a higher level. We will use words instead of characters. That way we can more accurately pick at the differences between their writing style and thus have a better chance at guessing the correct author.
Let's go right ahead and import our data
Step5: This time we set the default parameter of the model to 5, instead of 0. If we leave it at 0, then when we get a sentence containing a word we have not seen from that particular author, the chance of that sentence coming from that author is exactly 0 (since to get the probability, we multiply all the separate probabilities; if one is 0 then the result is also 0). To avoid that, we tell the model to add 5 to the count of all the words that appear.
Next we will build the Naive Bayes Classifier
Step6: Now that we have built our classifier, we will start classifying. First, we need to convert the given sentence to the format the classifier needs. That is, a list of words.
Step7: First we will input a sentence that is something Abbott would write. Note the use of square and the simpler language.
Step8: The classifier correctly guessed Abbott.
Next we will input a more sophisticated sentence, similar to the style of Austen.
Step9: The classifier guessed correctly again.
You can try more sentences on your own. Unfortunately though, since the datasets are pretty small, chances are the guesses will not always be correct.
THE FEDERALIST PAPERS
Let's now take a look at a harder problem, classifying the authors of the Federalist Papers. The Federalist Papers are a series of papers written by Alexander Hamilton, James Madison and John Jay towards establishing the United States Constitution.
What is interesting about these papers is that they were all written under a pseudonym, "Publius", to keep the identity of the authors a secret. Only after Hamilton's death, when a list was found written by him detailing the authorship of the papers, did the rest of the world learn what papers each of the authors wrote. After the list was published, Madison chimed in to make a couple of corrections
Step10: Let's see how the text looks. We will print the first 500 characters
Step11: It seems that the text file opens with a license agreement, hardly useful in our case. In fact, the license spans 113 words, while there is also a licensing agreement at the end of the file, which spans 3098 words. We need to remove them. To do so, we will first convert the text into words, to make our lives easier.
Step12: Let's now take a look at the first 100 words
Step13: Much better.
As with any Natural Language Processing problem, it is prudent to do some text pre-processing and clean our data before we start building our model. Remember that all the papers are signed as 'Publius', so we can safely remove that word, since it doesn't give us any information as to the real author.
NOTE
Step14: Now we have to separate the text from a block of words into papers and assign them to their authors. We can see that each paper starts with the word 'federalist', so we will split the text on that word.
The disputed papers are the papers from 49 to 58, from 18 to 20 and paper 64. We want to leave these papers unassigned. Also, note that there are two versions of paper 70; both from Hamilton.
Finally, to keep the implementation intuitive, we add a None object at the start of the papers list to make the list index match up with the paper numbering (for example, papers[5] now corresponds to paper no. 5 instead of the paper no.6 in the 0-indexed Python).
Step15: As we can see, from the undisputed papers Jay wrote 4, Madison 17 and Hamilton 51 (+1 duplicate). Let's now build our word models. The Unigram Word Model again will come in handy.
Step20: Now it is time to build our new Naive Bayes Learner. It is very similar to the one found in learning.py, but with an important difference
Step21: Next we will build our Learner. Note that even though Hamilton wrote the most papers, that doesn't make it more probable that he wrote the rest, so all the class probabilities will be equal. We can change them if we have some external knowledge, which for this tutorial we do not have.
Step22: As usual, the recognize function will take as input a string and after removing capitalization and splitting it into words, will feed it into the Naive Bayes Classifier.
Step23: Now we can start predicting the disputed papers | Python Code:
from utils import open_data
from text import *
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P_flatland = NgramCharModel(2, wordseq)
faust = open_data("GE-text/faust.txt").read()
wordseq = words(faust)
P_faust = NgramCharModel(2, wordseq)
Explanation: NATURAL LANGUAGE PROCESSING APPLICATIONS
In this notebook we will take a look at some indicative applications of natural language processing. We will cover content from nlp.py and text.py, for chapters 22 and 23 of Stuart Russel's and Peter Norvig's book Artificial Intelligence: A Modern Approach.
CONTENTS
Language Recognition
Author Recognition
The Federalist Papers
LANGUAGE RECOGNITION
A very useful application of text models (you can read more on them on the text notebook) is categorizing text into a language. In fact, with enough data we can categorize correctly mostly any text. That is because different languages have certain characteristics that set them apart. For example, in German it is very usual for 'c' to be followed by 'h' while in English we see 't' followed by 'h' a lot.
Here we will build an application to categorize sentences in either English or German.
First we need to build our dataset. We will take as input text in English and in German and we will extract n-gram character models (in this case, bigrams for n=2). For English, we will use Flatland by Edwin Abbott and for German Faust by Goethe.
Let's build our text models for each language, which will hold the probability of each bigram occurring in the text.
End of explanation
from learning import NaiveBayesLearner
dist = {('English', 1): P_flatland, ('German', 1): P_faust}
nBS = NaiveBayesLearner(dist, simple=True)
Explanation: We can use this information to build a Naive Bayes Classifier that will be used to categorize sentences (you can read more on Naive Bayes on the learning notebook). The classifier will take as input the probability distribution of bigrams and given a list of bigrams (extracted from the sentence to be classified), it will calculate the probability of the example/sentence coming from each language and pick the maximum.
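To make the arithmetic concrete, here is a minimal sketch of what the classifier does for one sentence, using made-up bigram probabilities rather than the real models above:
p_english = {('t', 'h'): 0.027, ('h', 'e'): 0.031}   # hypothetical values
p_german = {('t', 'h'): 0.004, ('h', 'e'): 0.022}    # hypothetical values
sentence_bigrams = [('t', 'h'), ('h', 'e')]
score_en, score_de = 1.0, 1.0
for bigram in sentence_bigrams:
    score_en *= p_english[bigram]
    score_de *= p_german[bigram]
# With equal priors, the language with the larger product wins
prediction = 'English' if score_en > score_de else 'German'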
Let's build our classifier, with the assumption that English is as probable as German (the input is a dictionary with values the text models and keys the tuple language, probability):
End of explanation
def recognize(sentence, nBS, n):
sentence = sentence.lower()
wordseq = words(sentence)
P_sentence = NgramCharModel(n, wordseq)
ngrams = []
for b, p in P_sentence.dictionary.items():
ngrams += [b]*p
print(ngrams)
return nBS(ngrams)
Explanation: Now we need to write a function that takes as input a sentence, breaks it into a list of bigrams and classifies it with the naive bayes classifier from above.
Once we get the text model for the sentence, we need to unravel it. The text models show the probability of each bigram, but the classifier can't handle that extra data. It requires a simple list of bigrams. So, if the text model shows that a bigram appears three times, we need to add it three times in the list. Since the text model stores the n-gram information in a dictionary (with the key being the n-gram and the value the number of times the n-gram appears) we need to iterate through the items of the dictionary and manually add them to the list of n-grams.
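For example, if the sentence's model counted the bigram ('t', 'h') three times and ('h', 'e') once, the unraveling step produces a flat list (a toy illustration, not real model output):
counts = {('t', 'h'): 3, ('h', 'e'): 1}
ngrams = []
for bigram, count in counts.items():
    ngrams += [bigram] * count
# ngrams is now [('t', 'h'), ('t', 'h'), ('t', 'h'), ('h', 'e')]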
End of explanation
recognize("Ich bin ein platz", nBS, 2)
recognize("Turtles fly high", nBS, 2)
recognize("Der pelikan ist hier", nBS, 2)
recognize("And thus the wizard spoke", nBS, 2)
Explanation: Now we can start categorizing sentences.
End of explanation
from utils import open_data
from text import *
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P_Abbott = UnigramWordModel(wordseq, 5)
pride = open_data("EN-text/pride.txt").read()
wordseq = words(pride)
P_Austen = UnigramWordModel(wordseq, 5)
Explanation: You can add more languages if you want, the algorithm works for as many as you like! Also, you can play around with n. Here we used 2, but other numbers work too (even though 2 suffices). The algorithm is not perfect, but it has high accuracy even for small samples like the ones we used. That is because English and German are very different languages. The closer together languages are (for example, Norwegian and Swedish share a lot of common ground) the lower the accuracy of the classifier.
AUTHOR RECOGNITION
Another similar application to language recognition is recognizing who is more likely to have written a sentence, given text written by them. Here we will try and predict text from Edwin Abbott and Jane Austen. They wrote Flatland and Pride and Prejudice respectively.
We are optimistic we can determine who wrote what based on the fact that Abbott wrote his novella at a much later date than Austen, which means there will be linguistic differences between the two works. Indeed, Flatland uses more modern and direct language, while Pride and Prejudice is written in a more archaic tone with more sophisticated wording.
Similarly to Language Recognition, we will first import the two datasets. This time, though, we are not looking for connections between characters, since that wouldn't give great results. Why? Because both authors use English, and English follows a set of patterns, as we showed earlier. Trying to determine authorship based on these patterns would not be very efficient.
Instead, we will abstract our querying to a higher level. We will use words instead of characters. That way we can more accurately pick up on the differences between their writing styles and thus have a better chance at guessing the correct author.
Let's go right ahead and import our data:
End of explanation
from learning import NaiveBayesLearner
dist = {('Abbott', 1): P_Abbott, ('Austen', 1): P_Austen}
nBS = NaiveBayesLearner(dist, simple=True)
Explanation: This time we set the default parameter of the model to 5, instead of 0. If we leave it at 0, then when we get a sentence containing a word we have not seen from that particular author, the chance of that sentence coming from that author is exactly 0 (since to get the probability, we multiply all the separate probabilities; if one is 0 then the result is also 0). To avoid that, we tell the model to add 5 to the count of all the words that appear.
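The problem with a zero count is easy to see with a toy list of per-word probabilities (made-up numbers, not taken from the models above):
probs = [0.12, 0.08, 0.0, 0.25]   # one unseen word contributes probability 0
product = 1.0
for p in probs:
    product *= p
# product == 0.0, so this author could never be chosen no matter how well the
# remaining words match; a small default count avoids this hard zero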
Next we will build the Naive Bayes Classifier:
End of explanation
def recognize(sentence, nBS):
sentence = sentence.lower()
sentence_words = words(sentence)
return nBS(sentence_words)
Explanation: Now that we have built our classifier, we will start classifying. First, we need to convert the given sentence to the format the classifier needs. That is, a list of words.
End of explanation
recognize("the square is mad", nBS)
Explanation: First we will input a sentence that is something Abbott would write. Note the use of square and the simpler language.
End of explanation
recognize("a most peculiar acquaintance", nBS)
Explanation: The classifier correctly guessed Abbott.
Next we will input a more sophisticated sentence, similar to the style of Austen.
End of explanation
from utils import open_data
from text import *
federalist = open_data("EN-text/federalist.txt").read()
Explanation: The classifier guessed correctly again.
You can try more sentences on your own. Unfortunately though, since the datasets are pretty small, chances are the guesses will not always be correct.
THE FEDERALIST PAPERS
Let's now take a look at a harder problem, classifying the authors of the Federalist Papers. The Federalist Papers are a series of papers written by Alexander Hamilton, James Madison and John Jay towards establishing the United States Constitution.
What is interesting about these papers is that they were all written under a pseudonym, "Publius", to keep the identity of the authors a secret. Only after Hamilton's death, when a list was found written by him detailing the authorship of the papers, did the rest of the world learn what papers each of the authors wrote. After the list was published, Madison chimed in to make a couple of corrections: Hamilton, Madison said, hastily wrote down the list and assigned some papers to the wrong author!
Here we will try and find out who really wrote these mysterious papers.
To solve this we will learn from the undisputed papers to predict the disputed ones. First, let's read the texts from the file:
End of explanation
federalist[:500]
Explanation: Let's see how the text looks. We will print the first 500 characters:
End of explanation
wordseq = words(federalist)
wordseq = wordseq[114:-3098]
Explanation: It seems that the text file opens with a license agreement, hardly useful in our case. In fact, the license spans 113 words, while there is also a licensing agreement at the end of the file, which spans 3098 words. We need to remove them. To do so, we will first convert the text into words, to make our lives easier.
End of explanation
' '.join(wordseq[:100])
Explanation: Let's now take a look at the first 100 words:
End of explanation
wordseq = [w for w in wordseq if w != 'publius']
Explanation: Much better.
As with any Natural Language Processing problem, it is prudent to do some text pre-processing and clean our data before we start building our model. Remember that all the papers are signed as 'Publius', so we can safely remove that word, since it doesn't give us any information as to the real author.
NOTE: Since we are only removing a single word from each paper, this step can be skipped. We add it here to show that processing the data in our hands is something we should always be considering. Oftentimes pre-processing the data in just the right way is the difference between a robust model and a flimsy one.
End of explanation
import re
papers = re.split(r'federalist\s', ' '.join(wordseq))
papers = [p for p in papers if p not in ['', ' ']]
papers = [None] + papers
disputed = list(range(49, 58+1)) + [18, 19, 20, 64]
jay, madison, hamilton = [], [], []
for i, p in enumerate(papers):
if i in disputed or i == 0:
continue
if 'jay' in p:
jay.append(p)
elif 'madison' in p:
madison.append(p)
else:
hamilton.append(p)
len(jay), len(madison), len(hamilton)
Explanation: Now we have to separate the text from a block of words into papers and assign them to their authors. We can see that each paper starts with the word 'federalist', so we will split the text on that word.
The disputed papers are the papers from 49 to 58, from 18 to 20 and paper 64. We want to leave these papers unassigned. Also, note that there are two versions of paper 70; both from Hamilton.
Finally, to keep the implementation intuitive, we add a None object at the start of the papers list to make the list index match up with the paper numbering (for example, papers[5] now corresponds to paper no. 5 instead of the paper no.6 in the 0-indexed Python).
End of explanation
hamilton = ''.join(hamilton)
hamilton_words = words(hamilton)
P_hamilton = UnigramWordModel(hamilton_words, default=1)
madison = ''.join(madison)
madison_words = words(madison)
P_madison = UnigramWordModel(madison_words, default=1)
jay = ''.join(jay)
jay_words = words(jay)
P_jay = UnigramWordModel(jay_words, default=1)
Explanation: As we can see, from the undisputed papers Jay wrote 4, Madison 17 and Hamilton 51 (+1 duplicate). Let's now build our word models. The Unigram Word Model again will come in handy.
End of explanation
import random
import decimal
import math
from decimal import Decimal
decimal.getcontext().prec = 100
def precise_product(numbers):
result = 1
for x in numbers:
result *= Decimal(x)
return result
def log_product(numbers):
result = 0.0
for x in numbers:
result += math.log(x)
return result
def NaiveBayesLearner(dist):
    """A simple naive bayes classifier that takes as input a dictionary of
    Counter distributions and can then be used to find the probability
    of a given item belonging to each class.
    The input dictionary is in the following form:
        ClassName: Counter"""
attr_dist = {c_name: count_prob for c_name, count_prob in dist.items()}
def predict(example):
        """Predict the probabilities for each class."""
def class_prob(target, e):
attr = attr_dist[target]
return precise_product([attr[a] for a in e])
pred = {t: class_prob(t, example) for t in dist.keys()}
total = sum(pred.values())
for k, v in pred.items():
pred[k] = v / total
return pred
return predict
def NaiveBayesLearnerLog(dist):
    """A simple naive bayes classifier that takes as input a dictionary of
    Counter distributions and can then be used to find the probability
    of a given item belonging to each class. It will compute the likelihood by adding the logarithms of probabilities.
    The input dictionary is in the following form:
        ClassName: Counter"""
attr_dist = {c_name: count_prob for c_name, count_prob in dist.items()}
def predict(example):
        """Predict the probabilities for each class."""
def class_prob(target, e):
attr = attr_dist[target]
return log_product([attr[a] for a in e])
pred = {t: class_prob(t, example) for t in dist.keys()}
total = -sum(pred.values())
for k, v in pred.items():
pred[k] = v/total
return pred
return predict
Explanation: Now it is time to build our new Naive Bayes Learner. It is very similar to the one found in learning.py, but with an important difference: it doesn't classify an example, but instead returns the probability of the example belonging to each class. This will allow us to see not only to whom a paper belongs, but also how probable that authorship is.
We will build two versions of the Learner: one will multiply the probabilities as-is, and the other will add their logarithms.
Finally, since we are dealing with long texts the string of probability multiplications gets long, and we would end up with the results being rounded to 0 due to floating point underflow. To work around this problem we will use the built-in Python library decimal, which allows us to set the decimal precision much higher than normal.
Note that the logarithmic learner will compute a negative likelihood since the logarithm of values less than 1 will be negative.
Thus, the author with the lesser magnitude of proportion is more likely to have written that paper.
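A quick, self-contained illustration of the underflow problem (made-up numbers, independent of the learners above):
import math
probs = [1e-4] * 100               # 100 small per-word probabilities
naive_product = 1.0
for p in probs:
    naive_product *= p             # 1e-400 underflows to 0.0 in a regular float
log_sum = sum(math.log(p) for p in probs)   # about -921.0, perfectly representable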
End of explanation
dist = {('Madison', 1): P_madison, ('Hamilton', 1): P_hamilton, ('Jay', 1): P_jay}
nBS = NaiveBayesLearner(dist)
nBSL = NaiveBayesLearnerLog(dist)
Explanation: Next we will build our Learner. Note that even though Hamilton wrote the most papers, that doesn't make it more probable that he wrote the rest, so all the class probabilities will be equal. We can change them if we have some external knowledge, which for this tutorial we do not have.
End of explanation
def recognize(sentence, nBS):
return nBS(words(sentence.lower()))
Explanation: As usual, the recognize function will take as input a string and after removing capitalization and splitting it into words, will feed it into the Naive Bayes Classifier.
End of explanation
print('\nStraightforward Naive Bayes Learner\n')
for d in disputed:
probs = recognize(papers[d], nBS)
results = ['{}: {:.4f}'.format(name, probs[(name, 1)]) for name in 'Hamilton Madison Jay'.split()]
print('Paper No. {}: {}'.format(d, ' '.join(results)))
print('\nLogarithmic Naive Bayes Learner\n')
for d in disputed:
probs = recognize(papers[d], nBSL)
results = ['{}: {:.6f}'.format(name, probs[(name, 1)]) for name in 'Hamilton Madison Jay'.split()]
print('Paper No. {}: {}'.format(d, ' '.join(results)))
Explanation: Now we can start predicting the disputed papers:
End of explanation |
3,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load Word2Vec Embeddings
Step1: Data preprocessing
Step2: Design the Graph
Step3: Build Category2Vec
Reference paper
Step4: Test the Category Vec
Step5: Start converting to vectors
Step6: Load TagVectors
Step7: Validate with random sampling | Python Code:
class PixWord2Vec:
# vocabulary indexing
index2word = None
word2indx = None
# embeddings vector
embeddings = None
    # Normalized embeddings vector
    final_embeddings = None
    # hidden layer's weight and bias
    softmax_weights = None
    softmax_biases = None
# NOTE: this model file must come from a Word2Vec model that has already been trained
import pickle
pixword = pickle.load(open("./pixword_cnn_word2vec.pk"))
Explanation: Load Word2Vec Embeddings
End of explanation
import os
import numpy as np
import random
import tensorflow as tf
import json
from pyspark import StorageLevel
vocabulary_size = len(pixword.index2word)
print "vocabulary_size" , vocabulary_size
Explanation: Data preprocessing
End of explanation
pixword.embeddings.shape
import math
append_size = 1000
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
num_sampled = 64 # Number of negative examples to sample (assumed value; the original cell uses num_sampled below without defining it)
graph = tf.Graph()
with graph.as_default():
np.random.seed(0)
# doc(tags or category) batch size , this is key !!! And this batch size cant be too large !!
append_size = 1000
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[None])
train_labels = tf.placeholder(tf.int32, shape=[None, 1])
# Variables.
embeddings = tf.Variable(np.append(pixword.embeddings,
np.random.randn(append_size,128)).reshape(vocabulary_size+append_size,128).astype('float32'))
softmax_weights = tf.Variable(np.append(pixword.embeddings,
np.random.randn(append_size,128)).reshape(vocabulary_size+append_size,128).astype('float32'))
softmax_biases = tf.Variable(np.append(pixword.softmax_biases,[0]*append_size).astype('float32'))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
init = tf.global_variables_initializer()
session = tf.Session(graph=graph)
session.run(init)
Explanation: Design the Graph
End of explanation
def train(batch_data,batch_labels):
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
return l
def searchByVec(vec,final_embeddings,scope=5):
sim = np.dot(final_embeddings,vec)
for index in sim.argsort()[-scope:][::-1][1:]:
print pixword.index2word[index],sim[index]
Explanation: Build Category2Vec
Reference paper: https://cs.stanford.edu/~quocle/paragraph_vector.pdf
Core concept of the algorithm:
<img width="50%" src="./doc2vec_concept.png">
Basic idea: a Document (or Category, or Tag Set) is also treated as an embedding vector, and that embedding vector represents the Document (or Category, or Tag Set) in the contexts where its keywords appear.
A small conclusion from the experiments:
When computing Tag2Vec, if we want to correctly express the relation between the Tag2Vec vectors and the original vocabulary, the original final_embeddings must be refreshed, as in the code below:
python
    return (final_embeddings[vocabulary_size:vocabulary_size+index+1],final_embeddings[:vocabulary_size])
Judging by eye from random samples, the AVG Vector seems to work better than Tag2Vec; the experimental results are shown in the blocks below.
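As a toy illustration of the training pairs fed to the graph above (hypothetical words and indices, not real data), one document/tag set is paired with every one of its known words:
# Suppose the vocabulary maps 'travel' -> 10 and 'beach' -> 42, and row
# vocabulary_size + 0 of the enlarged embedding matrix represents the document.
toy_words_set = [['travel', 'beach']]
toy_word2indx = {'travel': 10, 'beach': 42}
toy_cat_data = []    # inputs: the document's embedding row, repeated per word
toy_cat_label = []   # targets: the words that appear in the document
for doc_index, doc_words in enumerate(toy_words_set):
    for w in doc_words:
        toy_cat_data.append(vocabulary_size + doc_index)
        toy_cat_label.append([toy_word2indx[w]])
# toy_cat_data  == [vocabulary_size, vocabulary_size]
# toy_cat_label == [[10], [42]]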
End of explanation
cate_vec = []
count = 0
def tags2vec(words_set):
np.random.seed(0)
session.run(init)
if len(words_set)>append_size: raise
cat_data = []
cat_label = []
for index , words in enumerate(words_set):
for w in words :
if w not in pixword.word2indx :
continue
wi = pixword.word2indx[w]
cat_data.append(vocabulary_size+index)
cat_label.append([wi])
for _ in range(20):
train(cat_data,cat_label)
final_embeddings = session.run(normalized_embeddings)
return (final_embeddings[vocabulary_size:vocabulary_size+index+1],final_embeddings[:vocabulary_size])
words = [u'旅遊',u'台東']
avg_vec = np.average([pixword.final_embeddings[pixword.word2indx[w]] for w in words],0)
for w in words:
print "#{}#".format(w.encode('utf-8'))
searchByVec(pixword.final_embeddings[pixword.word2indx[w]] ,pixword.final_embeddings)
print
# Simply take the mean of these word vectors
print "AVG Vector"
searchByVec(avg_vec,pixword.final_embeddings,scope=20)
print
# Suppose a document contains these tag words; the keywords found by the newly produced vector are listed below
print "Tag Vector"
result = tags2vec([words])
searchByVec(result[0][0],result[1],scope=20)
# read raw data
def checkInVoc(tlist):
r = []
for t in tlist :
if t in pixword.word2indx:
r.append(t)
return r
def merge(x):
x[0]['tags'] = x[1]
return x[0]
test_set = sc.textFile("./data/cuted_test/").map(
json.loads).map(
lambda x : (x,x['tags']) ).mapValues(
checkInVoc).filter(
lambda x : len(x[1])>1)
test_set.persist(StorageLevel.DISK_ONLY)
!rm -rvf ./data/cuted_and_tags/
import json
test_set.map(merge).map(json.dumps).saveAsTextFile("./data/cuted_and_tags/")
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
if 'crc' in fname : continue
if fname.startswith('_'):continue
for line in open(os.path.join(self.dirname, fname)):
yield line
sc.textFile("./data/cuted_and_tags/").count()
Explanation: Test the Category Vec
End of explanation
def toVector(docs,tags_set,f):
res_vecs = tags2vec(tags_set)
if len(docs) != len(res_vecs[0]):
print len(docs) , len(res_vecs)
raise
for index,d in enumerate(docs):
d['tag_vec'] = [float(i) for i in list(res_vecs[0][index])]
for d in docs:
jstr = json.dumps(d)
f.write(jstr+'\n')
!rm ./data/cuted_and_vec.json
f = open('./data/cuted_and_vec.json','w')
docs = []
tags_set = []
for doc in MySentences("./data/cuted_and_tags/"):
js_objects = json.loads(doc)
docs.append(js_objects)
tags_set.append(js_objects['tags'])
if len(docs) == 1000:
toVector(docs,tags_set,f)
docs = []
tags_set = []
print '*',
toVector(docs,tags_set,f)
def loadjson(x):
try:
return json.loads(x)
except:
return None
jsondoc = sc.textFile(
"./data/cuted_and_vec.json").map(
loadjson).filter(
lambda x : x!=None)
from operator import add
Explanation: Start converting to vectors
End of explanation
import json
def loadjson(x):
try:
return json.loads(x)
except:
return None
url_vecs = np.array(jsondoc.map(
lambda x: np.array(x['tag_vec'])).collect())
url_vecs.shape
urls = jsondoc.collect()
def search(wvec,final_embeddings,cate):
# wvec = final_embeddings[windex]
sim = np.dot(final_embeddings,wvec)
result = []
for index in sim.argsort()[-1000:][::-1][1:]:
if urls[index]['category'] == cate and sim[index]>0.9 :
print urls[index]['url'],sim[index],
for tag in urls[index]['tags']:
print tag,
print
return sim
Explanation: Load TagVectors
End of explanation
index = np.random.randint(10000)
print urls[index]['url'],urls[index]['category'],
for tag in urls[index]['tags']:
print tag,
print
print
print "########以下是用 Tag Vecotr 所找出來的 URL #########"
sim = search(url_vecs[index],url_vecs,urls[index]['category'])
print
print
print "########以下是直接用第一個 Tag 直接作比對的結果,效果好非常多 #########"
count = 0
for _,u in enumerate(urls):
for t in u['tags']:
if t == urls[index]['tags'][0] :
count = count + 1
print u['url']
for tt in u['tags']:
print tt,
print
break
if count > 500 : break
Explanation: Validate with random sampling
End of explanation |
3,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load up the DECALS info tables
Step1: Alright, now let's just pick a few specific bricks that are both in SDSS and have fairly deep g and r data
Step2: And the joint distribution?
Step3: Looks like there isn't much with lots of r and lots of g... 🙁
So we pick one of each.
Step5: Now get the matched SDSS catalogs | Python Code:
# Imports below are assumed from earlier cells of the original notebook (not shown
# in this excerpt); the in_sdss() helper used further down is also defined elsewhere.
import os
import shutil
import time
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.table import Table
from astropy.coordinates import SkyCoord
from astropy.utils import data
from astropy.utils.console import ProgressBar

bricks = Table.read('decals_dr3/survey-bricks.fits.gz')
bricksdr3 = Table.read('decals_dr3/survey-bricks-dr3.fits.gz')
fn_in_sdss = 'decals_dr3/in_sdss.npy'
try:
bricksdr3['in_sdss'] = np.load(fn_in_sdss)
except:
bricksdr3['in_sdss'] = ['unknown']*len(bricksdr3)
bricksdr3
goodbricks = (bricksdr3['in_sdss'] == 'unknown') & (bricksdr3['nexp_r']>=10)
if np.sum(goodbricks) > 0:
for brick in ProgressBar(bricksdr3[goodbricks], ipython_widget=True):
sc = SkyCoord(brick['ra']*u.deg, brick['dec']*u.deg)
bricksdr3['in_sdss'][bricksdr3['brickname']==brick['brickname']] = 'yes' if in_sdss(sc) else 'no'
np.save('decals_dr3/in_sdss', bricksdr3['in_sdss'])
plt.scatter(bricksdr3['ra'], bricksdr3['dec'],
c=bricksdr3['nexp_r'], lw=0, s=3, vmin=0)
plt.colorbar()
yeses = bricksdr3['in_sdss'] == 'yes'
nos = bricksdr3['in_sdss'] == 'no'
plt.scatter(bricksdr3['ra'][yeses], bricksdr3['dec'][yeses], c='r',lw=0, s=1)
plt.scatter(bricksdr3['ra'][nos], bricksdr3['dec'][nos], c='w',lw=0, s=1)
plt.xlim(0, 360)
plt.ylim(-30, 40)
sdssbricks = bricksdr3[bricksdr3['in_sdss']=='yes']
plt.scatter(sdssbricks['ra'], sdssbricks['dec'],
c=sdssbricks['nexp_r'], lw=0, s=3, vmin=0)
plt.colorbar()
plt.xlim(0, 360)
plt.ylim(-30, 40)
Explanation: Load up the DECALS info tables
End of explanation
maxn = np.max(sdssbricks['nexp_r'])
bins = np.linspace(-1, maxn+1, maxn*3)
plt.hist(sdssbricks['nexp_r'], bins=bins, histtype='step', ec='r',log=True)
plt.hist(sdssbricks['nexp_g'], bins=bins+.1, histtype='step', ec='g',log=True)
plt.hist(sdssbricks['nexp_z'], bins=bins-.1, histtype='step', ec='k',log=True)
plt.xlim(bins[0], bins[-1])
Explanation: Alright, now let's just pick a few specific bricks that are both in SDSS and have fairly deep g and r data
End of explanation
plt.hexbin(sdssbricks['nexp_g'], sdssbricks['nexp_r'],bins='log')
plt.xlabel('g')
plt.ylabel('r')
Explanation: And the joint distribution?
End of explanation
deep_r = np.random.choice(sdssbricks['brickname'][(sdssbricks['nexp_r']>20)&(sdssbricks['nexp_g']>2)])
ra = bricks[bricks['BRICKNAME']==deep_r]['RA'][0]
dec = bricks[bricks['BRICKNAME']==deep_r]['DEC'][0]
print('http://skyserver.sdss.org/dr13/en/tools/chart/navi.aspx?ra={}&dec={}&scale=3.0&opt=P'.format(ra, dec))
deep_r
deep_g = np.random.choice(sdssbricks['brickname'][(sdssbricks['nexp_r']>15)&(sdssbricks['nexp_g']>20)])
ra = bricks[bricks['BRICKNAME']==deep_g]['RA'][0]
dec = bricks[bricks['BRICKNAME']==deep_g]['DEC'][0]
print('http://skyserver.sdss.org/dr13/en/tools/chart/navi.aspx?ra={}&dec={}&scale=3.0'.format(ra, dec))
deep_g
#bricknames = [deep_r, deep_g]
# hard code this from the result above for repeatability
bricknames = ['1193p057', '2208m005']
sdssbricks[np.in1d(sdssbricks['brickname'], bricknames)]
base_url = 'http://portal.nersc.gov/project/cosmo/data/legacysurvey/dr3/'
catalog_fns = []
for nm in bricknames:
url = base_url + 'tractor/{}/tractor-{}.fits'.format(nm[:3], nm)
outfn = 'decals_dr3/catalogs/' + os.path.split(url)[-1]
if os.path.isfile(outfn):
print(outfn, 'already exists')
else:
tmpfn = data.download_file(url)
shutil.move(tmpfn, outfn)
catalog_fns.append(outfn)
catalog_fns
Explanation: Looks like there isn't much with lots of r and lots of g... 🙁
So we pick one of each.
End of explanation
import casjobs
jobs = casjobs.CasJobs(base_url='http://skyserver.sdss.org/CasJobs/services/jobs.asmx', request_type='POST')
# this query template comes from Marla's download_host_sqlfile w/ modifications
query_template = """
SELECT p.objId as OBJID,
p.ra as RA, p.dec as DEC,
p.type as PHOTPTYPE, dbo.fPhotoTypeN(p.type) as PHOT_SG,
p.flags as FLAGS,
flags & dbo.fPhotoFlags('SATURATED') as SATURATED,
flags & dbo.fPhotoFlags('BAD_COUNTS_ERROR') as BAD_COUNTS_ERROR,
flags & dbo.fPhotoFlags('BINNED1') as BINNED1,
p.modelMag_u as u, p.modelMag_g as g, p.modelMag_r as r,p.modelMag_i as i,p.modelMag_z as z,
p.modelMagErr_u as u_err, p.modelMagErr_g as g_err,
p.modelMagErr_r as r_err,p.modelMagErr_i as i_err,p.modelMagErr_z as z_err,
p.MODELMAGERR_U,p.MODELMAGERR_G,p.MODELMAGERR_R,p.MODELMAGERR_I,p.MODELMAGERR_Z,
p.EXTINCTION_U, p.EXTINCTION_G, p.EXTINCTION_R, p.EXTINCTION_I, p.EXTINCTION_Z,
p.DERED_U,p.DERED_G,p.DERED_R,p.DERED_I,p.DERED_Z,
p.PETRORAD_U,p.PETRORAD_G,p.PETRORAD_R,p.PETRORAD_I,p.PETRORAD_Z,
p.PETRORADERR_U,p.PETRORADERR_G,p.PETRORADERR_R,p.PETRORADERR_I,p.PETRORADERR_Z,
p.DEVRAD_U,p.DEVRADERR_U,p.DEVRAD_G,p.DEVRADERR_G,p.DEVRAD_R,p.DEVRADERR_R,
p.DEVRAD_I,p.DEVRADERR_I,p.DEVRAD_Z,p.DEVRADERR_Z,
p.DEVAB_U,p.DEVAB_G,p.DEVAB_R,p.DEVAB_I,p.DEVAB_Z,
p.CMODELMAG_U, p.CMODELMAGERR_U, p.CMODELMAG_G,p.CMODELMAGERR_G,
p.CMODELMAG_R, p.CMODELMAGERR_R, p.CMODELMAG_I,p.CMODELMAGERR_I,
p.CMODELMAG_Z, p.CMODELMAGERR_Z,
p.PSFMAG_U, p.PSFMAGERR_U, p.PSFMAG_G, p.PSFMAGERR_G,
p.PSFMAG_R, p.PSFMAGERR_R, p.PSFMAG_I, p.PSFMAGERR_I,
p.PSFMAG_Z, p.PSFMAGERR_Z,
p.FIBERMAG_U, p.FIBERMAGERR_U, p.FIBERMAG_G, p.FIBERMAGERR_G,
p.FIBERMAG_R, p.FIBERMAGERR_R, p.FIBERMAG_I, p.FIBERMAGERR_I,
p.FIBERMAG_Z, p.FIBERMAGERR_Z,
p.FRACDEV_U, p.FRACDEV_G, p.FRACDEV_R, p.FRACDEV_I, p.FRACDEV_Z,
p.Q_U,p.U_U, p.Q_G,p.U_G, p.Q_R,p.U_R, p.Q_I,p.U_I, p.Q_Z,p.U_Z,
p.EXPAB_U, p.EXPRAD_U, p.EXPPHI_U, p.EXPAB_G, p.EXPRAD_G, p.EXPPHI_G,
p.EXPAB_R, p.EXPRAD_R, p.EXPPHI_R, p.EXPAB_I, p.EXPRAD_I, p.EXPPHI_I,
p.EXPAB_Z, p.EXPRAD_Z, p.EXPPHI_Z,
p.FIBER2MAG_R, p.FIBER2MAGERR_R,
p.EXPMAG_R, p.EXPMAGERR_R,
p.PETROR50_R, p.PETROR90_R, p.PETROMAG_R,
p.expMag_r + 2.5*log10(2*PI()*p.expRad_r*p.expRad_r + 1e-20) as SB_EXP_R,
p.petroMag_r + 2.5*log10(2*PI()*p.petroR50_r*p.petroR50_r) as SB_PETRO_R,
ISNULL(w.j_m_2mass,9999) as J, ISNULL(w.j_msig_2mass,9999) as JERR,
ISNULL(w.H_m_2mass,9999) as H, ISNULL(w.h_msig_2mass,9999) as HERR,
ISNULL(w.k_m_2mass,9999) as K, ISNULL(w.k_msig_2mass,9999) as KERR,
ISNULL(s.z, -1) as SPEC_Z, ISNULL(s.zErr, -1) as SPEC_Z_ERR, ISNULL(s.zWarning, -1) as SPEC_Z_WARN,
ISNULL(pz.z,-1) as PHOTOZ, ISNULL(pz.zerr,-1) as PHOTOZ_ERR
FROM dbo.fGetObjFromRectEq({ra1}, {dec1}, {ra2}, {dec2}) n, PhotoPrimary p
{into}
LEFT JOIN SpecObj s ON p.specObjID = s.specObjID
LEFT JOIN PHOTOZ pz ON p.ObjID = pz.ObjID
LEFT join WISE_XMATCH as wx on p.objid = wx.sdss_objid
LEFT join wise_ALLSKY as w on wx.wise_cntr = w.cntr
WHERE n.objID = p.objID
"""
casjobs_tables = jobs.list_tables()
job_ids = []
for bricknm in bricknames:
thisbrick = bricks[bricks['BRICKNAME']==bricknm]
assert len(thisbrick) == 1
thisbrick = thisbrick[0]
intostr = 'INTO mydb.decals_brick_' + bricknm
qry = query_template.format(ra1=thisbrick['RA1'], ra2=thisbrick['RA2'],
dec1=thisbrick['DEC1'], dec2=thisbrick['DEC2'],
into=intostr)
if intostr.split('.')[1] in casjobs_tables:
print(bricknm, 'already present')
continue
job_ids.append(jobs.submit(qry, 'DR13', bricknm))
# wait for the jobs to finish
finished = False
while not finished:
for i in job_ids:
stat = jobs.status(i)[-1]
if stat == 'failed':
raise ValueError('Job {} failed'.format(i))
if stat != 'finished':
time.sleep(1)
break
else:
finished = True
print('Finished jobs', job_ids)
jids = []
for bnm in bricknames:
table_name = 'decals_brick_' + bnm
ofn = 'decals_dr3/catalogs/sdss_comparison_{}.csv'.format(bnm)
if os.path.isfile(ofn):
print(table_name, 'already downloaded')
else:
jids.append(jobs.request_output(table_name,'CSV'))
done_jids = []
while len(done_jids)<len(jids):
time.sleep(1)
for i, bnm in zip(jids, bricknames):
if i in done_jids:
continue
if jobs.status(i)[-1] != 'finished':
continue
ofn = 'decals_dr3/catalogs/sdss_comparison_{}.csv'.format(bnm)
jobs.get_output(i, ofn)
done_jids.append(i)
print(ofn)
Explanation: Now get the matched SDSS catalogs
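Once both the tractor FITS catalogs and the SDSS CSVs are on disk, one way to pair sources between the two surveys is a positional cross-match. This is only a sketch of that step; the file names follow the patterns used above, and the 1-arcsecond tolerance is an assumption rather than anything fixed by this notebook:
from astropy.table import Table
from astropy.coordinates import SkyCoord
import astropy.units as u
for bnm in bricknames:
    decals_cat = Table.read('decals_dr3/catalogs/tractor-{}.fits'.format(bnm))
    sdss_cat = Table.read('decals_dr3/catalogs/sdss_comparison_{}.csv'.format(bnm),
                          format='ascii.csv')
    decals_sc = SkyCoord(decals_cat['ra'], decals_cat['dec'], unit=u.deg)
    sdss_sc = SkyCoord(sdss_cat['RA'], sdss_cat['DEC'], unit=u.deg)
    idx, sep2d, _ = decals_sc.match_to_catalog_sky(sdss_sc)
    matched = sep2d < 1 * u.arcsec
    print(bnm, 'matched', matched.sum(), 'of', len(decals_cat), 'DECaLS sources')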
End of explanation |
3,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of reproducing HHT analysis results in Su et al. 2015
Su et al. 2015
Step1: Running EEMD of the QPO signal and checking the orthogonality of the IMF components
Step2: Reproducing Figure 2 in Su et al. 2015
Step3: Hilbert spectral analysis | Python Code:
from astropy.io import ascii
data = ascii.read('./XTE_J1550_564_30191011500A_2_13kev_001s_0_2505s.txt')
time = data['col1']
rate = data['col2']
dt = time[1] - time[0]
Explanation: Example of reproducing HHT analysis results in Su et al. 2015
Su et al. 2015: "Characterizing Intermittency of 4-Hz Quasi-periodic Oscillation in XTE J1550-564 Using Hilbert-Huang Transform"
Reading the QPO light curve data
End of explanation
from hhtpywrapper.eemd import EEMD
eemd_post_processing = EEMD(rate, 6.0, 100, num_imf=10, seed_no=4, post_processing=True)
eemd_post_processing.get_oi()
Explanation: Running EEMD of the QPO signal and checking the orthogonality of the IMF components
End of explanation
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
tstart = int(np.fix(40 / dt))
tend = int(np.fix(50 / dt))
hi_noise = np.sum(eemd_post_processing.imfs[:,:2], axis=1)
c3 = eemd_post_processing.imfs[:,2]
c4 = eemd_post_processing.imfs[:,3]
c5 = eemd_post_processing.imfs[:,4]
low_noise = np.sum(eemd_post_processing.imfs[:,5:], axis=1)
plt.figure()
plt.subplot(611)
plt.plot(time[tstart:tend], rate[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([0, 20, 40])
plt.xlim([40, 50])
plt.ylabel('Data')
plt.subplot(612)
plt.plot(time[tstart:tend], hi_noise[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([-10, 0, 10])
plt.xlim([40, 50])
plt.ylabel(r'$c_{1} : c_{2}$')
plt.subplot(613)
plt.plot(time[tstart:tend], c3[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([-5, 0, 5])
plt.ylabel(r'$c_{3}$')
plt.xlim([40, 50])
plt.subplot(614)
plt.plot(time[tstart:tend], c4[tstart:tend]/1000, 'r')
plt.xticks([])
plt.yticks([-10, 0, 10])
plt.xlim([40, 50])
plt.ylabel(r'$c_{4}$')
plt.subplot(615)
plt.plot(time[tstart:tend], c5[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([-5, 0, 5])
plt.xlim([40, 50])
plt.ylabel(r'$c_{5}$')
plt.subplot(616)
plt.plot(time[tstart:tend], low_noise[tstart:tend]/1000, 'k')
plt.yticks([10, 15, 20, 25])
plt.xticks(np.arange(40,51))
plt.xlim([40, 50])
plt.xlabel('Time (s)')
plt.ylabel(r'$c_{6}$ : residual')
plt.show()
Explanation: Reproducing Figure 2 in Su et al. 2015
End of explanation
from hhtpywrapper.hsa import HSA
# Obtaining the instantaneous frequency and amplitude of the IMF c4 by Hilbert transform
ifa = HSA(c4, dt)
iamp = ifa.iamp
ifreq = ifa.ifreq
# Plot the IMF C4 and its instantaneous frequency and amplitude
plt.figure()
plt.subplot(411)
plt.plot(time[tstart:tend], rate[tstart:tend], 'k')
plt.xticks([])
plt.xlim([40, 50])
plt.ylabel('Data')
plt.subplot(412)
plt.plot(time[tstart:tend], c4[tstart:tend], 'k')
plt.xticks([])
plt.xlim([40, 50])
plt.ylabel(r'$c_{4}$')
plt.subplot(413)
plt.plot(time[tstart:tend], iamp[tstart:tend], 'b')
plt.xticks([])
plt.xlim([40, 50])
plt.ylabel('Amplitude')
plt.subplot(414)
plt.plot(time[tstart:tend], ifreq[tstart:tend], 'r')
plt.ylabel('Frequency (Hz)')
plt.xticks(np.arange(40,51))
plt.xlim([40, 50])
plt.xlabel('Time (s)')
plt.show()
# Plot the Hilbert spectrum
ifa.plot_hs(time, trange=[40, 50], frange=[1.0, 10], tres=1000, fres=1000, hsize=10, sigma=2, colorbar='amplitude')
Explanation: Hilbert spectral analysis
End of explanation |
3,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas querying and metadata with Epochs objects
Demonstrating pandas-style string querying with Epochs metadata.
For related uses of
Step1: We can use this metadata attribute to select subsets of Epochs. This
uses the Pandas
Step2: Next we'll choose a subset of words to keep.
Step3: Note that traditional epochs sub-selection still works. The traditional
MNE methods for selecting epochs will supersede the rich metadata querying.
Step4: Below we'll show a more involved example that leverages the metadata
of each epoch. We'll create a new column in our metadata object and use
it to generate averages for many subsets of trials.
Step5: Now we can quickly extract (and plot) subsets of the data. For example, to
look at words split by word length and concreteness
Step6: To compare words which are 4, 5, 6, 7 or 8 letters long
Step7: And finally, for the interaction between concreteness and continuous length
in letters
Step8: <div class="alert alert-info"><h4>Note</h4><p>Creating an | Python Code:
# Authors: Chris Holdgraf <[email protected]>
# Jona Sassenhagen <[email protected]>
# Eric Larson <[email protected]>
# License: BSD (3-clause)
import mne
import numpy as np
import matplotlib.pyplot as plt
# Load the data from the internet
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
# The metadata exists as a Pandas DataFrame
print(epochs.metadata.head(10))
Explanation: Pandas querying and metadata with Epochs objects
Demonstrating pandas-style string querying with Epochs metadata.
For related uses of :class:mne.Epochs, see the starting tutorial
sphx_glr_auto_tutorials_plot_object_epochs.py.
Sometimes you may have a complex trial structure that cannot be easily
summarized as a set of unique integers. In this case, it may be useful to use
the metadata attribute of :class:mne.Epochs objects. This must be a
:class:pandas.DataFrame where each row corresponds to an epoch, and each
column corresponds to a metadata attribute of each epoch. Columns must
contain either strings, ints, or floats.
In this dataset, subjects were presented with individual words
on a screen, and the EEG activity in response to each word was recorded.
We know which word was displayed in each epoch, as well as
extra information about the word (e.g., word frequency).
Loading the data
First we'll load the data. If metadata exists for an :class:mne.Epochs
fif file, it will automatically be loaded in the metadata attribute.
End of explanation
av1 = epochs['Concreteness < 5 and WordFrequency < 2'].average()
av2 = epochs['Concreteness > 5 and WordFrequency > 2'].average()
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
av1.plot_joint(show=False, **joint_kwargs)
av2.plot_joint(show=False, **joint_kwargs)
Explanation: We can use this metadata attribute to select subsets of Epochs. This
uses the Pandas :meth:pandas.DataFrame.query method under the hood.
Any valid query string will work. Below we'll make two plots to compare
between them:
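Because the selection is plain pandas underneath, the same condition can be checked directly on the metadata DataFrame; this is just an illustrative cross-check, not part of the original example:
n_match = len(epochs.metadata.query('Concreteness < 5 and WordFrequency < 2'))
print(n_match, 'epochs satisfy the first condition')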
End of explanation
words = ['film', 'cent', 'shot', 'cold', 'main']
epochs['WORD in {}'.format(words)].plot_image(show=False)
Explanation: Next we'll choose a subset of words to keep.
End of explanation
epochs['cent'].average().plot(show=False, time_unit='s')
Explanation: Note that traditional epochs sub-selection still works. The traditional
MNE methods for selecting epochs will supersede the rich metadata querying.
End of explanation
# Create two new metadata columns
metadata = epochs.metadata
is_concrete = metadata["Concreteness"] > metadata["Concreteness"].median()
metadata["is_concrete"] = np.where(is_concrete, 'Concrete', 'Abstract')
is_long = metadata["NumberOfLetters"] > 5
metadata["is_long"] = np.where(is_long, 'Long', 'Short')
epochs.metadata = metadata
Explanation: Below we'll show a more involved example that leverages the metadata
of each epoch. We'll create a new column in our metadata object and use
it to generate averages for many subsets of trials.
End of explanation
query = "is_long == '{0}' & is_concrete == '{1}'"
evokeds = dict()
for concreteness in ("Concrete", "Abstract"):
for length in ("Long", "Short"):
subset = epochs[query.format(length, concreteness)]
evokeds["/".join((concreteness, length))] = list(subset.iter_evoked())
# For the actual visualisation, we store a number of shared parameters.
style_plot = dict(
colors={"Long": "Crimson", "Short": "Cornflowerblue"},
linestyles={"Concrete": "-", "Abstract": ":"},
split_legend=True,
ci=.68,
show_sensors='lower right',
show_legend='lower left',
truncate_yaxis="max_ticks",
picks=epochs.ch_names.index("Pz"),
)
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
Explanation: Now we can quickly extract (and plot) subsets of the data. For example, to
look at words split by word length and concreteness:
End of explanation
letters = epochs.metadata["NumberOfLetters"].unique().astype(int).astype(str)
evokeds = dict()
for n_letters in letters:
evokeds[n_letters] = epochs["NumberOfLetters == " + n_letters].average()
style_plot["colors"] = {n_letters: int(n_letters)
for n_letters in letters}
style_plot["cmap"] = ("# of Letters", "viridis_r")
del style_plot['linestyles']
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
Explanation: To compare words which are 4, 5, 6, 7 or 8 letters long:
End of explanation
evokeds = dict()
query = "is_concrete == '{0}' & NumberOfLetters == {1}"
for concreteness in ("Concrete", "Abstract"):
for n_letters in letters:
subset = epochs[query.format(concreteness, n_letters)]
evokeds["/".join((concreteness, n_letters))] = subset.average()
style_plot["linestyles"] = {"Concrete": "-", "Abstract": ":"}
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
Explanation: And finally, for the interaction between concreteness and continuous length
in letters:
End of explanation
data = epochs.get_data()
metadata = epochs.metadata.copy()
epochs_new = mne.EpochsArray(data, epochs.info, metadata=metadata)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Creating an :class:`mne.Epochs` object with metadata is done by passing
a :class:`pandas.DataFrame` to the ``metadata`` kwarg as follows:</p></div>
End of explanation |
3,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow IO Authors.
Step1: Reading PostgreSQL database from TensorFlow IO
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Installing and setting up PostgreSQL (optional)
Warning
Step3: Setting the required environment variables
The environment variables below are based on the PostgreSQL setup from the last section. If you have a different setup or are using an existing database, they should be changed accordingly.
Step4: Preparing data on the PostgreSQL server
For demonstration purposes, this tutorial creates a database and populates it with some data. The data used in this guide comes from the Air Quality Data Set, available from the UCI Machine Learning Repository.
Below is a preview of a subset of the Air Quality Data Set:
Date | Time | CO (GT) | PT08.S1 (CO) | NMHC (GT) | C6H6 (GT) | PT08.S2 (NMHC) | NOx (GT) | PT08.S3 (NOx) | NO2 (GT) | PT08.S4 (NO2) | PT08.S5 (O3) | T | RH | AH
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
10/03/2004 | 18.00.00 | 2,6 | 1360 | 150 | 11,9 | 1046 | 166 | 1056 | 113 | 1692 | 1268 | 13,6 | 48,9 | 0,7578
10/03/2004 | 19.00.00 | 2 | 1292 | 112 | 9,4 | 955 | 103 | 1174 | 92 | 1559 | 972 | 13,3 | 47,7 | 0,7255
10/03/2004 | 20.00.00 | 2,2 | 1402 | 88 | 9,0 | 939 | 131 | 1140 | 114 | 1555 | 1074 | 11,9 | 54,0 | 0,7502
10/03/2004 | 21.00.00 | 2,2 | 1376 | 80 | 9,2 | 948 | 172 | 1092 | 122 | 1584 | 1203 | 11,0 | 60,0 | 0,7867
10/03/2004 | 22.00.00 | 1,6 | 1272 | 51 | 6,5 | 836 | 131 | 1205 | 116 | 1490 | 1110 | 11,2 | 59,6 | 0,7888
More information about the Air Quality Data Set and the UCI Machine Learning Repository is available in the References section.
To help simplify data preparation, a SQL version of the Air Quality Data Set has been prepared and is available as AirQualityUCI.sql.
The statement to create the table is:
CREATE TABLE AirQualityUCI (
Date DATE,
Time TIME,
CO REAL,
PT08S1 INT,
NMHC REAL,
C6H6 REAL,
PT08S2 INT,
NOx REAL,
PT08S3 INT,
NO2 REAL,
PT08S4 INT,
PT08S5 INT,
T REAL,
RH REAL,
AH REAL
);
The full command to create the table in the database and populate it with data is:
Step5: Creating a Dataset from the PostgreSQL server and using it in TensorFlow
To create a Dataset from the PostgreSQL server, simply call tfio.experimental.IODataset.from_sql with the query and endpoint arguments. query is the SQL query for selected columns in tables, and the endpoint argument is the address and database name.
Step6: As can be seen from the output of dataset.element_spec above, each element of the created Dataset is a python dict object whose keys are the column names of the database table. This makes it quite convenient to apply further operations. For example, you could select both the nox and no2 fields of the Dataset and calculate the difference. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow IO Authors.
End of explanation
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
Explanation: Reading PostgreSQL database from TensorFlow IO
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/postgresql"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/io/tutorials/postgresql.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
Overview
This tutorial shows how to create a tf.data.Dataset from a PostgreSQL database server, so that the created Dataset can be passed to tf.keras for training or inference purposes.
A SQL database is an important source of data for data scientists. As one of the most popular open source SQL databases, PostgreSQL is widely used in enterprises for storing critical and transactional data across the board. Creating a Dataset directly from a PostgreSQL database server and passing it to tf.keras for training or inference greatly simplifies the data pipeline and lets data scientists focus on building machine learning models.
Setup and usage
Install the required tensorflow-io package, and restart the runtime
End of explanation
# Install postgresql server
!sudo apt-get -y -qq update
!sudo apt-get -y -qq install postgresql
!sudo service postgresql start
# Setup a password `postgres` for username `postgres`
!sudo -u postgres psql -U postgres -c "ALTER USER postgres PASSWORD 'postgres';"
# Setup a database with name `tfio_demo` to be used
!sudo -u postgres psql -U postgres -c 'DROP DATABASE IF EXISTS tfio_demo;'
!sudo -u postgres psql -U postgres -c 'CREATE DATABASE tfio_demo;'
Explanation: Install and set up PostgreSQL (optional)
Warning: This notebook is designed to be run in Google Colab only. It installs packages on the system and requires sudo access. If you want to run it in a local Jupyter notebook, please proceed with caution.
To demo the usage on Google Colab, you will install a PostgreSQL server. A password and an empty database are also needed.
If you are not running this notebook on Google Colab, or you prefer to use an existing database, please skip the following setup and proceed to the next section.
End of explanation
%env TFIO_DEMO_DATABASE_NAME=tfio_demo
%env TFIO_DEMO_DATABASE_HOST=localhost
%env TFIO_DEMO_DATABASE_PORT=5432
%env TFIO_DEMO_DATABASE_USER=postgres
%env TFIO_DEMO_DATABASE_PASS=postgres
Explanation: Set the required environment variables
The environment variables below are based on the PostgreSQL setup from the last section. If you have a different setup or are using an existing database, they should be changed accordingly.
End of explanation
!curl -s -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/postgresql/AirQualityUCI.sql
!PGPASSWORD=$TFIO_DEMO_DATABASE_PASS psql -q -h $TFIO_DEMO_DATABASE_HOST -p $TFIO_DEMO_DATABASE_PORT -U $TFIO_DEMO_DATABASE_USER -d $TFIO_DEMO_DATABASE_NAME -f AirQualityUCI.sql
Explanation: Prepare data on the PostgreSQL server
For demonstration purposes, this tutorial creates a database and populates it with some data. The data used in this guide comes from the Air Quality Data Set, available from the UCI Machine Learning Repository.
Below is a preview of a subset of the Air Quality Data Set:
Date | Time | CO (GT) | PT08.S1 (CO) | NMHC (GT) | C6H6 (GT) | PT08.S2 (NMHC) | NOx (GT) | PT08.S3 (NOx) | NO2 (GT) | PT08.S4 (NO2) | PT08.S5 (O3) | T | RH | AH
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
10/03/2004 | 18.00.00 | 2,6 | 1360 | 150 | 11,9 | 1046 | 166 | 1056 | 113 | 1692 | 1268 | 13,6 | 48,9 | 0,7578
10/03/2004 | 19.00.00 | 2 | 1292 | 112 | 9,4 | 955 | 103 | 1174 | 92 | 1559 | 972 | 13,3 | 47,7 | 0,7255
10/03/2004 | 20.00.00 | 2,2 | 1402 | 88 | 9,0 | 939 | 131 | 1140 | 114 | 1555 | 1074 | 11,9 | 54,0 | 0,7502
10/03/2004 | 21.00.00 | 2,2 | 1376 | 80 | 9,2 | 948 | 172 | 1092 | 122 | 1584 | 1203 | 11,0 | 60,0 | 0,7867
10/03/2004 | 22.00.00 | 1,6 | 1272 | 51 | 6,5 | 836 | 131 | 1205 | 116 | 1490 | 1110 | 11,2 | 59,6 | 0,7888
More information about the Air Quality Data Set and the UCI Machine Learning Repository is available in the References section.
To help simplify data preparation, a SQL version of the Air Quality Data Set has been prepared and is available as AirQualityUCI.sql.
The statement to create the table is:
CREATE TABLE AirQualityUCI (
Date DATE,
Time TIME,
CO REAL,
PT08S1 INT,
NMHC REAL,
C6H6 REAL,
PT08S2 INT,
NOx REAL,
PT08S3 INT,
NO2 REAL,
PT08S4 INT,
PT08S5 INT,
T REAL,
RH REAL,
AH REAL
);
The full command to create the table in the database and populate it with data is:
End of explanation
import os
import tensorflow_io as tfio
endpoint="postgresql://{}:{}@{}?port={}&dbname={}".format(
os.environ['TFIO_DEMO_DATABASE_USER'],
os.environ['TFIO_DEMO_DATABASE_PASS'],
os.environ['TFIO_DEMO_DATABASE_HOST'],
os.environ['TFIO_DEMO_DATABASE_PORT'],
os.environ['TFIO_DEMO_DATABASE_NAME'],
)
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT co, pt08s1 FROM AirQualityUCI;",
endpoint=endpoint)
print(dataset.element_spec)
Explanation: Create a Dataset from the PostgreSQL server and use it in TensorFlow
To create a Dataset from the PostgreSQL server, simply call tfio.experimental.IODataset.from_sql with the query and endpoint arguments. query is the SQL query for selected columns in tables, and the endpoint argument is the address and database name.
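As a sketch of how the created Dataset could then be consumed by tf.keras (the tiny model below is a made-up toy, and the column names follow the query above):
import tensorflow as tf
train_ds = dataset.map(
    lambda item: (tf.reshape(tf.cast(item['co'], tf.float32), [1]),
                  tf.cast(item['pt08s1'], tf.float32))).batch(32)
toy_model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(1)])
toy_model.compile(optimizer='adam', loss='mse')
# toy_model.fit(train_ds, epochs=1)   # uncomment to actually run the toy fit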
End of explanation
dataset = tfio.experimental.IODataset.from_sql(
query="SELECT nox, no2 FROM AirQualityUCI;",
endpoint=endpoint)
dataset = dataset.map(lambda e: (e['nox'] - e['no2']))
# check only the first 20 record
dataset = dataset.take(20)
print("NOx - NO2:")
for difference in dataset:
print(difference.numpy())
Explanation: As can be seen from the output of dataset.element_spec above, each element of the created Dataset is a python dict object whose keys are the column names of the database table, which makes it quite convenient to apply further operations. For example, you could select both the nox and no2 fields of the Dataset and calculate the difference.
End of explanation |
3,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
vocab_to_int =
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints =
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
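One possible way to fill in the blanks above (just a sketch of a solution, not the official one) is to rank the words by frequency and start the integer mapping at 1, leaving 0 free for padding:
from collections import Counter

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
# start at 1 because integer 0 will be used for padding later
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
# encode every review as a list of integers
reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]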
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels =
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
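A possible fill-in (a sketch, assuming labels still holds the raw newline-delimited text read from labels.txt):
import numpy as np

# one label per line in the file; map 'positive' to 1, everything else to 0
labels = labels.split('\n')
labels = np.array([1 if label == 'positive' else 0 for label in labels])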
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
reviews_ints =
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
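One way to do it (a sketch; it also filters labels with the same indices, which is an assumption about keeping the two arrays aligned):
# keep only the reviews that have at least one word, and the matching labels
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])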
seq_len = 200
features =
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
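A compact sketch of one possible approach, relying on NumPy slice clipping to handle both the left padding and the truncation to seq_len words:
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    # right-align the review in the row; reviews longer than seq_len keep only their first seq_len words
    features[i, -len(row):] = np.array(row)[:seq_len]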
features[:10,:100]
Explanation: If you build features correctly, it should look like the cell output below.
End of explanation
split_frac = 0.8
train_x, val_x =
train_y, val_y =
val_x, test_x =
val_y, test_y =
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
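One straightforward fill-in (a sketch using a simple positional split with no shuffling):
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]

# split the remaining 20% in half for validation and test
half_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:half_idx], val_x[half_idx:]
val_y, test_y = val_y[:half_idx], val_y[half_idx:]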
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int) + 1  # +1 because integer 0 is reserved for padding; `vocab` itself is not defined in this notebook
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ =
labels_ =
keep_prob =
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
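A possible set of definitions (a sketch; keep_prob gets no shape because it is a scalar):
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')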
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding =
embed =
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
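One way to fill this in (a sketch; the uniform initialization range of -1 to 1 is an arbitrary choice):
with graph.as_default():
    # embedding matrix: one embed_size-dimensional vector per word id
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)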
with graph.as_default():
# Your basic LSTM cell
lstm =
# Add dropout to the cell
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
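A sketch of one way to define the three blanks above, following the calls quoted in the text:
with graph.as_default():
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
    # initial cell state of all zeros, as in the cell above
    initial_state = cell.zero_state(batch_size, tf.float32)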
with graph.as_default():
outputs, final_state =
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
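The blank above could be filled like this (a sketch; note that we feed the embedded vectors, embed, rather than the raw integer inputs):
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)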
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))  # restore from the directory the checkpoints were saved to
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
3,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Choropleth from the Brazil's northeast
<hr>
<div style="text-align
Step1: Data
Step2: We found some mismatches between the names of the municipalities in the IBGE data and in the GeoJSON data. Below we summarize the names for which there is no match
Step3: Removing Nazária from the municipalities of IBGE
Step4: Choropleth
<hr>
After all the procedures to make the population data and the GeoJSON data match on the municipality names, we can now proceed to create the choropleth itself.
We can use a threshold scale to differentiate the cities by color. One of the most common practices is to linearly split the range of the data with a NumPy function like
python
np.linspace(MIN,MAX, STEPS, TYPE).tolist()
The Branca library also has a function to create a threshold scale, but we did not use it because we did not want to split the population range linearly and assign the colors based on that. Linearly splitting the threshold only highlights the extremes: all the villages and towns on one end and the megacities on the other. So we made a manual split, putting the minimum population as the lower level and the max population as the upper end of the threshold. We set the intermediate cut points at 250K, 800K, 1.5M and 2M. Making the division this way we can still see the main cities, while the great majority of all other cities, under 150k people, are classified with the same color.
|Threshold Scale |Min | 2 | 3 | 4 | 5 | MAX |
|----------------|-----|------|-------|--------|--------|--------|
|np.linspace |1228 |591779|1182331| 1772882| 2363434| 2953986|
|our division |20000|100000|300000 | 1000000| 1500000| 2500000| | Python Code:
#System libraries
import os
import sys
#Basic libraries for data analysis
import numpy as np
from numpy import random
import pandas as pd
#Choropleth necessary libraries
##GeoJson data
import json
##Necessary to create shapes in folium
from shapely.geometry import Polygon
from shapely.geometry import Point
##Choropleth itself
import folium
##Colormap
from branca.colormap import linear
Explanation: Choropleth from the Brazil's northeast
<hr>
<div style="text-align: justify">In this notebook we utilize Folium libraries to create a choropleth of the northeast of Brazil. According to Wikpedia (https://en.wikipedia.org/wiki/Choropleth_map) a choropleth map (from Greek χῶρος ("area/region") + πλῆθος ("multitude")) is a thematic map in which areas are shaded or patterned in proportion to the measurement of the statistical variable being displayed on the map, such as population density or per-capita income. In this notebook we will make a choropleth with the numbers of population of Brazil's northeast according to Brazil's CENSUS 2010 - https://ww2.ibge.gov.br/english/estatistica/populacao/censo2010/</div>
<strong>Group components:</strong>
<ul>
<li>Marco Olimpio - marco.olimpio at gmail</li>
<li>Rebecca Betwel - bekbetwel at gmail</li>
</ul>
<strong>Short explanation video (PT-BR):</strong>https://youtu.be/2JaCGJ2HU40
<h2>The beginning</h2>
<hr>
Below we have the very beginning of the kernel itself. First we load all the necessary libraries and the collected data, and then start analysing it.
End of explanation
# dataset name
dataset_pop_2017 = os.path.join('data', 'population_2017.csv')
# read the data to a dataframe
data2017 = pd.read_csv(dataset_pop_2017)
# eliminate spaces in name of columns
data2017.columns = [cols.replace(' ', '_') for cols in data2017.columns]
data2017.head()
# Filtering data about northeast of Brazil
dataStateNames = data2017[(data2017['UF'] == 'RN') | (data2017['UF'] == 'PB') | (data2017['UF'] == 'PE') | (data2017['UF'] == 'MA') | (data2017['UF'] == 'CE') | (data2017['UF'] == 'BA') | (data2017['UF'] == 'AL') | (data2017['UF'] == 'PI') | (data2017['UF'] == 'SE')]
# Used to diff municipalities
#dataStateNames.to_csv('nomesIBGE_CidadesOrdenado.csv')
# Sort dataset by city name
dataStateNames = dataStateNames.sort_values('NOME_DO_MUNICÍPIO')
dataStateNames
# searching the files in geojson/geojs-xx-mun.json
ma_states = os.path.join('geojson', 'geojs-21-mun.json')
pi_states = os.path.join('geojson', 'geojs-22-mun.json')
ce_states = os.path.join('geojson', 'geojs-23-mun.json')
rn_states = os.path.join('geojson', 'geojs-24-mun.json')
pb_states = os.path.join('geojson', 'geojs-25-mun.json')
pe_states = os.path.join('geojson', 'geojs-26-mun.json')
al_states = os.path.join('geojson', 'geojs-27-mun.json')
se_states = os.path.join('geojson', 'geojs-28-mun.json')
ba_states = os.path.join('geojson', 'geojs-29-mun.json')
# load the data and use 'latin-1'encoding because the accent
geo_json_data_ma = json.load(open(ma_states,encoding='latin-1'))
geo_json_data_pi = json.load(open(pi_states,encoding='latin-1'))
geo_json_data_ce = json.load(open(ce_states,encoding='latin-1'))
geo_json_data_rn = json.load(open(rn_states,encoding='latin-1'))
geo_json_data_pb = json.load(open(pb_states,encoding='latin-1'))
geo_json_data_pe = json.load(open(pe_states,encoding='latin-1'))
geo_json_data_al = json.load(open(al_states,encoding='latin-1'))
geo_json_data_se = json.load(open(se_states,encoding='latin-1'))
geo_json_data_ba = json.load(open(ba_states,encoding='latin-1'))
#Merging all files in a single json structure
geo_json_data_northeast = geo_json_data_ma
geo_json_data_northeast['features'].extend(geo_json_data_pi['features'])
geo_json_data_northeast['features'].extend(geo_json_data_ce['features'])
geo_json_data_northeast['features'].extend(geo_json_data_rn['features'])
geo_json_data_northeast['features'].extend(geo_json_data_pb['features'])
geo_json_data_northeast['features'].extend(geo_json_data_pe['features'])
geo_json_data_northeast['features'].extend(geo_json_data_al['features'])
geo_json_data_northeast['features'].extend(geo_json_data_se['features'])
geo_json_data_northeast['features'].extend(geo_json_data_ba['features'])
# Used to diff municipalities
i=0
for cities in geo_json_data_northeast['features'][:]:
#print(str(i)+' '+cities['properties']['name'])
print(cities['properties']['name'])
i = i+1
Explanation: Data: Importing, arranging and putting all together
<hr>
End of explanation
#Belém de São Francisco -> Belém do São Francisco
geo_json_data_northeast['features'][1031]['properties']['description'] = 'Belém do São Francisco'
geo_json_data_northeast['features'][1031]['properties']['name'] = 'Belém do São Francisco'
print(geo_json_data_northeast['features'][1031]['properties']['name'])
#Campo de Santana -> Tacima
geo_json_data_northeast['features'][1003]['properties']['description'] = 'Tacima'
geo_json_data_northeast['features'][1003]['properties']['name'] = 'Tacima'
print(geo_json_data_northeast['features'][1003]['properties']['name'])
#Gracho Cardoso -> Graccho Cardoso
geo_json_data_northeast['features'][1324]['properties']['description'] = 'Graccho Cardoso'
geo_json_data_northeast['features'][1324]['properties']['name'] = 'Graccho Cardoso'
print(geo_json_data_northeast['features'][1324]['properties']['name'])
#Iguaraci -> Iguaracy
geo_json_data_northeast['features'][1089]['properties']['description'] = 'Iguaracy'
geo_json_data_northeast['features'][1089]['properties']['name'] = 'Iguaracy'
print(geo_json_data_northeast['features'][1089]['properties']['name'])
# Itapagé -> Itapajé
geo_json_data_northeast['features'][526]['properties']['description'] = 'Itapajé'
geo_json_data_northeast['features'][526]['properties']['name'] = 'Itapajé'
print(geo_json_data_northeast['features'][526]['properties']['name'])
# Santarém -> Joca Claudino
geo_json_data_northeast['features'][964]['properties']['description'] = 'Joca Claudino'
geo_json_data_northeast['features'][964]['properties']['name'] = 'Joca Claudino'
print(geo_json_data_northeast['features'][964]['properties']['name'])
# Lagoa do Itaenga -> Lagoa de Itaenga
geo_json_data_northeast['features'][1111]['properties']['description'] = 'Lagoa de Itaenga'
geo_json_data_northeast['features'][1111]['properties']['name'] = 'Lagoa de Itaenga'
print(geo_json_data_northeast['features'][1111]['properties']['name'])
# Quixabá -> Quixaba
geo_json_data_northeast['features'][1144]['properties']['description'] = 'Quixaba'
geo_json_data_northeast['features'][1144]['properties']['name'] = 'Quixaba'
print(geo_json_data_northeast['features'][1144]['properties']['name'])
# Quixabá -> Quixaba
geo_json_data_northeast['features'][946]['properties']['description'] = 'Quixaba'
geo_json_data_northeast['features'][946]['properties']['name'] = 'Quixaba'
print(geo_json_data_northeast['features'][946]['properties']['name'])
# Presidente Juscelino->Serra Caiada
geo_json_data_northeast['features'][736]['properties']['description'] = 'Serra Caiada'
geo_json_data_northeast['features'][736]['properties']['name'] = 'Serra Caiada'
print(geo_json_data_northeast['features'][736]['properties']['name'])
# Seridó->São Vicente do Seridó
geo_json_data_northeast['features'][990]['properties']['description'] = 'São Vicente do Seridó'
geo_json_data_northeast['features'][990]['properties']['name'] = 'São Vicente do Seridó'
print(geo_json_data_northeast['features'][990]['properties']['name'])
dataStateNames[(dataStateNames['NOME_DO_MUNICÍPIO']=='Nazária')]
Explanation: We found some mismatches between the names of the municipalities in the IBGE data and in the GeoJSON data. Below we summarize the names for which there is no match:
|State | IBGE | GEOJSON |Current name | Reference |
|------|------------------------|-----------------------|------------------------|---------------------------------|
|PE| Belém do São Francisco | Belém de São Francisco| Belém do São Francisco | https://pt.wikipedia.org/wiki/Bel%C3%A9m_do_S%C3%A3o_Francisco |
|PB| Tacima | Campo de Santana | Tacima | https://en.wikipedia.org/wiki/Tacima |
|SE| Graccho Cardoso | Gracho Cardoso | Graccho Cardoso | https://pt.wikipedia.org/wiki/Graccho_Cardoso |
|PE| Iguaracy | Iguaraci | Iguaracy | https://pt.wikipedia.org/wiki/Iguaracy |
|CE| Itapajé | Itapagé | Itapajé | https://pt.wikipedia.org/wiki/Itapajé |
|PB| Joca Claudino | Santarém | Joca Claudino | https://pt.wikipedia.org/wiki/Joca_Claudino |
|PE| Lagoa de Itaenga | Lagoa do Itaenga | Lagoa de Itaenga |https://pt.wikipedia.org/wiki/Lagoa_de_Itaenga |
|PI| Nazária | <NO INFO> | Nazária | https://pt.wikipedia.org/wiki/Naz%C3%A1ria |
|PE|Quixaba | Quixabá | Quixaba | https://pt.wikipedia.org/wiki/Quixaba_(Pernambuco) |
|PB|Quixaba | Quixabá | Quixaba | https://pt.wikipedia.org/wiki/Quixaba_(Para%C3%ADba) |
|RN| Serra Caiada | Presidente Juscelino | Serra Caiada | https://pt.wikipedia.org/wiki/Serra_Caiada
|PB| São Vicente do Seridó | Seridó | São Vicente do Seridó | https://pt.wikipedia.org/wiki/S%C3%A3o_Vicente_do_Serid%C3%B3
Other references:
https://ww2.ibge.gov.br/home/estatistica/populacao/estimativa2011/tab_Municipios_TCU.pdf
https://biblioteca.ibge.gov.br/visualizacao/dtbs/pernambuco/quixaba.pdf
We did not find any GeoJSON information about the municipality <strong>Nazária - PI</strong>, so we decided to eliminate Nazária from the IBGE data: Nazária is a municipality emancipated from Teresina, the capital of Piauí, and the data about its territory is still attached to Teresina.
End of explanation
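A rough way to locate mismatches like the ones in the table above is to compare the two name sets directly (a sketch; it ignores the fact that the same name can occur in more than one state):
geojson_names = {c['properties']['name'] for c in geo_json_data_northeast['features']}
ibge_names = set(dataStateNames['NOME_DO_MUNICÍPIO'])
print(sorted(ibge_names - geojson_names))    # present in IBGE, missing from the GeoJSON
print(sorted(geojson_names - ibge_names))    # present in the GeoJSON, missing from IBGE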
# Removing Nazária from the municipalities of IBGE
dataStateNames = dataStateNames[dataStateNames['NOME_DO_MUNICÍPIO']!='Nazária']
len(dataStateNames)
dataStateNames[dataStateNames['NOME_DO_MUNICÍPIO']=='Nazária']
cities_ne = []
# list all cities in the state
for city in geo_json_data_northeast['features']:
cities_ne.append(city['properties']['description'])
len(cities_ne)
# The per-state frames (dataRN, dataPB, ...) referenced here were never defined above,
# so build the northeast dataframe directly from the already filtered IBGE data
dataNordeste = dataStateNames.copy()
print(len(dataNordeste))
#adjusting to the correct data type
dataNordeste['COD._UF'] = dataNordeste['COD._UF'].astype(int)
dataNordeste['COD._MUNIC'] = dataNordeste['COD._MUNIC'].astype(int)
dataNordeste['POPULAÇÃO_ESTIMADA'] = dataNordeste['POPULAÇÃO_ESTIMADA'].astype(int)
dataNordeste.dtypes
Explanation: Removing Nazária from the municipalities of IBGE
End of explanation
dataNordeste.head()
dataNordeste_dictionary = dataNordeste.set_index('NOME_DO_MUNICÍPIO')['POPULAÇÃO_ESTIMADA']
print(len(dataNordeste))
dataNordeste['id'] = dataNordeste['UF']+dataNordeste['NOME_DO_MUNICÍPIO']
dataNordeste_dict = dataNordeste.set_index('id')['POPULAÇÃO_ESTIMADA']
print(len(dataNordeste_dictionary))
print(len(dataNordeste_dict))
colorscale = linear.YlGnBu.scale(dataNordeste['POPULAÇÃO_ESTIMADA'].min(), dataNordeste['POPULAÇÃO_ESTIMADA'].max())
colorscale
# Create a map object
#Centering at Brazil's northeast
m8 = folium.Map(
location = [-10.116657, -42.542580],
zoom_start=6,
tiles='cartodbpositron'
)
Explanation: Choropleth
<hr>
After all the procedures to make the population data and the GeoJSON data match on the municipality names, we can now proceed to create the choropleth itself.
End of explanation
m8.add_child(folium.LatLngPopup())
# create a threshold of legend
threshold_scale = np.linspace(dataNordeste['POPULAÇÃO_ESTIMADA'].min(),
dataNordeste['POPULAÇÃO_ESTIMADA'].max(), 6, dtype=int).tolist()
print(threshold_scale)
#threshold_scale = [dataNordeste['POPULAÇÃO_ESTIMADA'].min(), 250000, 800000, 150000, 200000, dataNordeste['POPULAÇÃO_ESTIMADA'].max()]
threshold_scale = [20000,100000,300000,1000000,1500000,2500000]
print(threshold_scale)
m8.choropleth(
geo_data=geo_json_data_northeast,
data=dataNordeste,
columns=['NOME_DO_MUNICÍPIO', 'POPULAÇÃO_ESTIMADA'],
key_on='feature.properties.name',
fill_color='YlGnBu',
legend_name='Population estimation (2017)',
highlight=True,
threshold_scale = threshold_scale,
line_color='green',
line_weight=0.2,
line_opacity=0.6
)
m8.save('outputFolium.html')
Explanation: We can use a threshold scale to differentiate the cities by color. One of the most common practices is to linearly split the range of the data with a NumPy function like
python
np.linspace(MIN,MAX, STEPS, TYPE).tolist()
The Branca library also has a function to create a threshold scale, but we did not use it because we did not want to split the population range linearly and assign the colors based on that. Linearly splitting the threshold only highlights the extremes: all the villages and towns on one end and the megacities on the other. So we made a manual split, putting the minimum population as the lower level and the max population as the upper end of the threshold. We set the intermediate cut points at 250K, 800K, 1.5M and 2M. Making the division this way we can still see the main cities, while the great majority of all other cities, under 150k people, are classified with the same color.
|Threshold Scale |Min | 2 | 3 | 4 | 5 | MAX |
|----------------|-----|------|-------|--------|--------|--------|
|np.linspace |1228 |591779|1182331| 1772882| 2363434| 2953986|
|our division |20000|100000|300000 | 1000000| 1500000| 2500000|
End of explanation |
3,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = len(arr) // n_seqs
n_batches = batch_size // n_steps
# Keep only enough characters to make full batches
arr = arr[:n_batches * n_seqs * n_steps]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n: n + n_steps]
# The targets, shifted by one
y = np.concatenate((arr[:, n +1 : n + n_steps], arr[:, n:n+1]), axis=1)
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps])
targets = tf.placeholder(tf.int32, [batch_size, num_steps])
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32)
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell outputs
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
L = in_size
N = out_size
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, L])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((L, N), stddev = 0.1))
softmax_b = tf.Variable(tf.zeros([out_size]))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits)
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = y_reshaped))
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state = self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training and validation losses are about equal then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently give about 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
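# Illustrative helper (not part of the original notebook): Karpathy's advice above compares
# the number of trainable parameters to the dataset size. After building the model below,
# you could count parameters like this (np and tf are imported earlier in the notebook).
def count_parameters():
    return int(sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()))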
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
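# Illustrative only (not part of the original notebook): one way to restore the most recent
# checkpoint into a session, e.g. to resume training or sampling with the model built above.
def restore_latest(sess, saver, checkpoint_dir='checkpoints'):
    latest = tf.train.latest_checkpoint(checkpoint_dir)
    if latest is not None:
        saver.restore(sess, latest)
    return latest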
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then use that new character to predict the one after it, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
3,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
3,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EventVestor
Step1: Let's go over the columns
Step2: We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all fifty-cent dividends.
Step3: Finally, suppose we want a DataFrame of that data, but we only want the sid, timestamp, and div_type
Step4: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows
Step5: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
Step6: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread
Step7: Taking what we've seen from above, let's see how we'd move that into the backtester. | Python Code:
# import the dataset
# from quantopian.interactive.data.eventvestor import dividends as dataset
# or if you want to import the free dataset, use:
from quantopian.interactive.data.eventvestor import dividends_free as dataset
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
Explanation: EventVestor: Dividend Announcements
In this notebook, we'll take a look at EventVestor's Cash Dividend Announcement dataset, available on the Quantopian Store. This dataset spans January 01, 2007 through the current day, and documents cash dividend announcements, including special dividends.
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Free samples and limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.
To access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase acess to the full set.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a pandas DataFrame using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
End of explanation
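# Illustrative only (not in the original notebook): the general workflow described above is to
# reduce the expression server-side with Blaze, then bring the small result into pandas with odo.
# For example, dividends larger than $1 (field names taken from the column list below):
big_divs = dataset[dataset.div_amount > 1.0][['sid', 'div_amount', 'timestamp']]
odo(big_divs, pd.DataFrame).head()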
fiftyc = dataset[(dataset.div_amount==0.5) & (dataset['div_currency']=='$')]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
fiftyc.sort('timestamp')
Explanation: Let's go over the columns:
- event_id: the unique identifier for this event.
- asof_date: EventVestor's timestamp of event capture.
- trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
- symbol: stock ticker symbol of the affected company.
- event_type: this should always be Dividend.
- event_headline: a brief description of the event
- event_phase: the inclusion of this field is likely an error on the part of the data vendor. We're currently attempting to resolve this.
- div_type: dividend type. Values include no change, increase, decrease, initiation, defer, suspend, omission, stock, special.
Note QoQ = quarter-on-quarter.
- div_amount: dividend payment amount in local currency
- div_currency: dividend payment currency code. Values include $, BRL, CAD, CHF, EUR, GBP, JPY.
- div_ex_date: ex-dividend date
- div_record_date: dividend payment record date
- div_pay_date: dividend payment date
- event_rating: this is always 1. The meaning of this is uncertain.
- timestamp: this is our timestamp on when we registered the data.
- sid: the equity's unique identifier. Use this instead of the symbol.
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all fifty-cent dividends.
End of explanation
fifty_df = odo(fiftyc, pd.DataFrame)
reduced = fifty_df[['sid','div_type','timestamp']]
# When printed: pandas DataFrames display the head(30) and tail(30) rows, and truncate the middle.
reduced
Explanation: Finally, suppose we want a DataFrame of that data, but we only want the sid, timestamp, and div_type:
End of explanation
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
from quantopian.pipeline.data.eventvestor import (
DividendsByExDate,
DividendsByPayDate,
DividendsByAnnouncementDate,
)
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysSincePreviousExDate,
BusinessDaysUntilNextExDate,
BusinessDaysSincePreviousPayDate,
BusinessDaysUntilNextPayDate,
BusinessDaysSinceDividendAnnouncement,
)
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.eventvestor import (
DividendsByExDate,
DividendsByPayDate,
DividendsByAnnouncement
)
Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(DividendsByExDate.next_date.latest, 'next_dividends')
End of explanation
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
for data in (DividendsByExDate, DividendsByPayDate, DividendsByAnnouncementDate):
_print_fields(data)
print "---------------------------------------------------\n"
Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
End of explanation
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(DividendsByExDate.next_date.latest, 'next_ex_date')
pipe.add(DividendsByExDate.previous_date.latest, 'prev_ex_date')
pipe.add(DividendsByExDate.next_amount.latest, 'next_amount')
pipe.add(DividendsByExDate.previous_amount.latest, 'prev_amount')
pipe.add(DividendsByExDate.next_currency.latest, 'next_currency')
pipe.add(DividendsByExDate.previous_currency.latest, 'prev_currency')
pipe.add(DividendsByExDate.next_type.latest, 'next_type')
pipe.add(DividendsByExDate.previous_type.latest, 'prev_type')
# Setting some basic liquidity strings (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid & DividendsByExDate.previous_amount.latest.notnan())
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
End of explanation
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
from quantopian.pipeline.data.eventvestor import (
DividendsByExDate,
DividendsByPayDate,
DividendsByAnnouncementDate,
)
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysSincePreviousExDate,
BusinessDaysUntilNextExDate,
BusinessDaysSinceDividendAnnouncement,
)
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Screen out penny stocks and low liquidity securities.
dollar_volume = AverageDollarVolume(window_length=20)
is_liquid = dollar_volume.rank(ascending=False) < 1000
# Create the mask that we will use for our percentile methods.
base_universe = (is_liquid)
# Add pipeline factors
pipe.add(DividendsByExDate.next_date.latest, 'next_ex_date')
pipe.add(DividendsByExDate.previous_date.latest, 'prev_ex_date')
pipe.add(DividendsByExDate.next_amount.latest, 'next_amount')
pipe.add(DividendsByExDate.previous_amount.latest, 'prev_amount')
pipe.add(DividendsByExDate.next_currency.latest, 'next_currency')
pipe.add(DividendsByExDate.previous_currency.latest, 'prev_currency')
pipe.add(DividendsByExDate.next_type.latest, 'next_type')
pipe.add(DividendsByExDate.previous_type.latest, 'prev_type')
pipe.add(BusinessDaysUntilNextExDate(), 'business_days')
# Set our pipeline screens
pipe.set_screen(is_liquid)
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
Explanation: Taking what we've seen from above, let's see how we'd move that into the backtester.
End of explanation |
3,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
List Comprehension (리스트 조건제시법)
Main topics
When we want to build a new list whose items satisfy a particular property, starting from a given list or range,
list comprehensions let us write very efficient code.
List comprehensions work much like the set-builder notation used to define sets.
For example, there are two ways to define the set of odd numbers between 0 and 100 million.
Roster notation (원소나열법)
{1, 3, 5, 7, 9, 11, ..., 99999999}
The ellipsis (...) in the middle is used because it is impossible to write out
all 50 million odd numbers between 0 and 100 million.
Even writing one number per second would take 50 million seconds, roughly 1 year and 8 months.
Set-builder notation (조건제시법)
{ x | 0 <= x <= 100000000, where x is odd }
Here we look at how to create new lists using this builder-style notation.
Today's main example
Let's draw the graph of the function $y = x^2$ as shown below,
where $x$ ranges from -10 to 10.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/pyplot_exp.png" style="width
Step1: Alternatively, we can use a loop.
A while loop
Step2: A for loop
Step3: Example
Can we now use roster notation to build a list holding every odd number between 0 and 100 million in order?
The answer is no. We can write an ellipsis the way we do for sets, but it does not work as intended.
For example, try declaring the list of all odd numbers between 0 and 100 million as below.
Step4: At first glance it seems to behave like the notation we learned in school.
Step5: Caution
Step6: It behaves this way because the Python interpreter has no idea what rule the omitted part should follow.
Using a loop, on the other hand, always works.
For example, the function below builds and returns a list of all odd numbers between 0 and a given number, in order.
Step7: The list of odd numbers between 0 and 20 is as follows.
Step8: Now let's build the list of odd numbers between 0 and 100 million.
Caution
Step9: It takes a while.
The exact time depends on your machine, but generating the 50 million odd numbers
below 100 million takes around ten seconds even on a recent laptop.
To check that the list was built correctly, let's look at its first 20 odd numbers.
Step10: Appendix
Step11: Now let's ask the question a little differently.
Can we define the odd_number function more concisely?
For this, Python provides a technique called list comprehension.
Not every language supports this technique.
For example, C# offers from ... where ... select ..., which plays a similar but somewhat different role,
and in Java a comparable feature can be implemented using functional interfaces.
Understanding list comprehensions
A list comprehension works very much like the set-builder notation used to define sets.
As an illustration, let's walk through building the list of the odd numbers
between 0 and 100 million, in order.
First, as in the overview above, express the set of odd numbers between 0 and 100 million
in set-builder notation.
{x | 0 <= x <= 100000000, where x is odd}
Now replace the set braces with list brackets.
[x | 0 <= x <= 100000000, where x is odd]
Replace the set's bar symbol (|) with for.
[x for 0 <= x <= 100000000, where x is odd]
Rewrite the part to the right of the bar, the inequality 0 <= x <= 100000000 describing
the range over which x varies, as a Python expression.
Usually the range is specified in the form x in ..., using an existing list
or the range() function.
[x for x in range(100000000+1), where x is odd]
Finally, turn the remaining condition on x, "x is odd", into a Python if clause.
For example, "x is odd" can be written in Python as x % 2 == 1.
[x for x in range(100000001) if x % 2 == 1]
Step12: Example
With a comprehension we can build the list whose items are the squares of the odd numbers between 0 and 100 million.
Step13: Of course, we can reuse the odd_100M list created earlier.
Step14: Example
Let's build the list of odd numbers between 0 and 100 million with a different comprehension.
First, note that every odd number has the form 2*x + 1.
So the odd numbers below 100 million can be generated as follows.
Step15: This approach looks a little simpler because there is no if clause.
The comprehension above can also be written with a for loop, as shown below.
Step16: Solving today's main example
We want to draw the graph of $y = x^2$.
To draw the graph we use the matplotlib.pyplot module.
Lines starting with a percent sign (%), like the one below, are Jupyter-notebook-only commands;
this one is used to display plots directly inside the notebook.
It is not needed when using a Python editor such as Spyder.
Step17: Because the name matplotlib.pyplot is long, it is usually abbreviated to plt.
Step18: To draw a graph, we first have to plot as many points as needed.
Remember that a point on a two-dimensional graph is a pair of an x-coordinate and a y-coordinate.
In Python, to plot a set of points we must supply the list of their x-coordinates
and the list of their y-coordinates.
In general, the more points we plot, the more accurate the graph,
but even a few points give a reasonable picture.
For example, to draw the graph connecting the five points
(-10, 100), (-5, 25), (0, 0), (5, 25), (10, 100), we use
xs = [-10, -5, 0, 5, 10]
and
ys = [100, 25, 0, 25, 100]
as the lists of x-coordinates and y-coordinates of those points.
Note that each item of ys is the square of the item at the same position in xs.
Step19: Plotting more points gives a smoother graph.
Step20: Exercises
Exercise
The exponential function $f(x) = e^x$, one of the most common functions in mathematics, is available as exp() in the math module.
Implement the list below with a comprehension.
$$[e^1, e^3, e^5, e^7, e^9]$$
Note
Step21: Exercise
Implement the list below with a comprehension.
$$[e^3, e^6, e^9, e^{12}, e^{15}]$$
Hint
Step22: Exercise
Comprehensions are very effective for processing data.
For example, we can analyse the lengths of the words used in an English sentence.
Below is a sentence introducing Python.
Step23: To analyse the lengths of the words used in the sentence above, we first split it into words.
For this we use the string method split().
Step24: We now want to build a list of tuples, each containing an item of the words list converted to upper case together with the length of that string.
[('PYTHON', 6), ('IS', 2), ....]
Using a loop, it can be written as follows.
Step25: With a list comprehension, the same thing can be written more concisely, as below.
Step26: To handle only the first five words, we can do the following.
Step27: It is also possible to restrict the indices instead, i.e. by using an extra if clause.
Step28: Question
Step29: Exercise
Let's plot the ReLU (Rectified Linear Unit) function, widely used as an activation function in the artificial neural network branch of machine learning. The ReLU function is defined as follows.
$$
f(x) = \begin{cases} 0 & \text{if } x < 0, \\ x & \text{if } x \ge 0.\end{cases}
$$
Reference | Python Code:
odd_20 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
Explanation: List Comprehension (리스트 조건제시법)
Main topics
When we want to build a new list whose items satisfy a particular property from a given list,
list comprehensions let us write such code very efficiently.
List comprehensions closely resemble the set-builder notation used to define sets.
For example, to define the set of odd numbers between 0 and 100 million,
we can use either of two notations.
Listing the elements (roster notation)
{1, 3, 5, 7, 9, 11, ..., 99999999}
The ellipsis (...) in the middle is used because actually writing down all
50 million odd numbers between 0 and 100 million is impossible.
Even writing one number per second would take 50 million seconds, roughly 1 year and 8 months.
Set-builder notation
{ x | 0 <= x <= 100000000, where x is odd }
Here we look at how to create new lists using this builder-style notation.
Today's main example
Let's draw the graph of the function $y = x^2$ as shown below,
where $x$ takes values between -10 and 10.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/pyplot_exp.png" style="width:350">
</td>
</tr>
</table>
</p>
Creating a list with a particular property
Example
How can we build a list containing, in order, every odd number between 0 and 20?
As with sets, we can either list the elements or use the builder-style (comprehension) notation.
End of explanation
i = 0
odd_20 = []
while i <= 20:
if i % 2 == 1:
odd_20.append(i)
i += 1
print(odd_20)
Explanation: Alternatively, we can use a loop.
The while loop: use the list's append() method.
End of explanation
odd_20 = []
for i in range(21):
if i % 2 == 1:
odd_20.append(i)
print(odd_20)
Explanation: The for loop: use the range() function.
End of explanation
odd_nums = [1, 3, 5, 7, 9, 11, ..., 99999999]
Explanation: Example
Can we now build, by listing its elements, a list that contains every odd number between 0 and 100 million in order?
The answer is no. We can write an ellipsis as when defining a set, but it does not work properly.
For example, try declaring the list of all odd numbers between 0 and 100 million as follows.
End of explanation
print(odd_nums)
Explanation: If we check it, it looks as if it behaves just like what we learned at school.
End of explanation
odd_nums[:10]
Explanation: Caution: Ellipsis stands for an omission.
However, if we slice the list to get the first 10 odd numbers, the result comes out absurdly wrong, as shown below.
End of explanation
def odd_number(num):
L=[]
for i in range(num):
if i%2 == 1:
L.append(i)
return L
Explanation: It behaves this way because the Python interpreter has no way of knowing by what rule the omitted part should be filled in.
On the other hand, using a loop always works.
For example, the function below builds and returns a list of all odd numbers between 0 and a given number, in order.
End of explanation
odd_number(20)
Explanation: The list of odd numbers between 0 and 20 is as follows.
End of explanation
odd_100M = odd_number(100000000)
Explanation: Now let's create the list of odd numbers between 0 and 100 million.
Caution: do not run a command like the one below; printing 50 million numbers is a silly thing to do.
print(odd_number(100000000))
End of explanation
print(odd_100M[:20])
Explanation: It takes a while.
The time depends on the machine, but generating the 50 million odd numbers
below 100 million takes about ten seconds on a recent laptop.
To check that the list of odd numbers was built correctly, let's look at its first 20 entries.
End of explanation
import time
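# Note: time.clock() was deprecated in Python 3.3 and removed in Python 3.8;
# on modern Python, time.perf_counter() plays the same role in this measurement.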
start_time = time.clock()
odd_100M = odd_number(100000000)
end_time = time.clock()
print(end_time - start_time, "seconds")
Explanation: Appendix: measuring how long a program takes to run
To check how long a program takes, we can use the clock() function from the time module.
The return value of clock() is the process time elapsed up to the moment it is called.
It does not matter if the meaning of "process time" is unclear for now.
It is enough to have seen, at least once, how the time module's clock() function is used.
End of explanation
odd_100M = [x for x in range(100000001) if x % 2 == 1]
odd_100M[:10]
Explanation: Now let's ask the question a little differently.
Can we define the odd_number function more concisely?
For this, Python provides a technique called list comprehension.
Not every language supports this technique.
For example, C# offers from ... where ... select ..., which plays a similar but slightly different role,
and in Java a similar feature can be built with functional interfaces.
Understanding list comprehensions
List comprehensions work very much like the set-builder notation used to define sets.
To help understand the notation, we walk through building the list whose items are,
in order, the odd numbers between 0 and 100 million.
First, as explained in the overview above, express the set of odd numbers
between 0 and 100 million in set-builder notation.
{x | 0 <= x <= 100000000, where x is odd}
Now replace the set braces with list brackets.
[x | 0 <= x <= 100000000, where x is odd]
Replace the set's bar symbol (|) with for.
[x for 0 <= x <= 100000000, where x is odd]
Turn the inequality 0 <= x <= 100000000 on the right of the bar, which describes
the range over which the variable x runs, into a Python expression.
Typically the range is written in the form x in ... using an existing list
or the range() function.
[x for x in range(100000000+1), where x is odd]
Finally, turn the constraint on x ("where x is odd")
into a Python if clause.
For example, "x is odd" can be written in Python as x % 2 == 1.
[x for x in range(100000001) if x % 2 == 1]
End of explanation
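As a side note (not part of the original lesson), swapping the square brackets for round ones gives a generator expression, which produces the items lazily instead of storing all 50 million of them in memory at once; a small sketch:
odd_100M_gen = (x for x in range(100000001) if x % 2 == 1)   # nothing is materialised yet
first_ten = [next(odd_100M_gen) for _ in range(10)]          # pull only the first 10 odd numbers
print(first_ten)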
odd_100M_square = [x**2 for x in range(100000000) if x % 2== 1]
odd_100M_square[:10]
Explanation: Example
With a comprehension we can build the list whose items are the squares of the odd numbers between 0 and 100 million.
End of explanation
odd_100M_square = [x**2 for x in odd_100M]
odd_100M_square[:10]
Explanation: Of course, we can reuse the odd_100M list created earlier.
End of explanation
odd_100M2 = [2 * x + 1 for x in range(50000000)]
odd_100M2[:10]
Explanation: Example
Let's build the list of odd numbers between 0 and 100 million with a different comprehension.
First, note that every odd number has the form 2*x + 1.
So the odd numbers below 100 million can be generated as follows.
End of explanation
odd_100M2 = []
for x in range(50000000):
odd_100M2.append(2*x+1)
odd_100M2[:10]
Explanation: This approach looks a little simpler because there is no if clause.
The comprehension above can also be written with a for loop, as shown below.
End of explanation
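Yet another equivalent way (an additional sketch, not from the original notebook) is to let range() do the stepping itself via its third argument:
odd_100M3 = list(range(1, 100000000, 2))   # start at 1, step by 2 -> the same 50 million odd numbers
odd_100M3[:10]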
%matplotlib inline
Explanation: Solving today's main example
We want to draw the graph of $y = x^2$.
To draw the graph we use the matplotlib.pyplot module.
Lines starting with a percent sign (%), like the one below, are Jupyter-notebook-only commands;
this one is used to display plots directly inside the notebook.
It is not needed when using a Python editor such as Spyder.
End of explanation
import matplotlib.pyplot as plt
Explanation: Because the name matplotlib.pyplot is long, it is usually abbreviated to plt.
End of explanation
### Plot setup: begin ###
# Everything from here down to the three-hash marker below just prepares the figure.
# Don't try to understand it for now; simply remember that it is needed.
# It sets up the "canvas" on which the plot is drawn.
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# Put the x-axis at the bottom and the y-axis through the centre of the figure.
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('zero')
# Remove the box surrounding the plot.
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
### Plot setup: end ###
# Provide the lists of x-coordinates and y-coordinates.
# Here we use list comprehensions.
xs = [x for x in range(-10, 11, 5)]
ys = [x**2 for x in xs]
# Now call the plot() function to draw the graph.
plt.plot(xs, ys)
plt.show()
Explanation: To draw a graph, we first have to plot as many points as needed.
Remember that a point on a two-dimensional graph is a pair of an x-coordinate and a y-coordinate.
In Python, to plot a set of points we must supply the list of their x-coordinates
and the list of their y-coordinates.
In general, the more points we plot, the more accurate the graph,
but even a few points give a reasonable picture.
For example, to draw the graph connecting the five points
(-10, 100), (-5, 25), (0, 0), (5, 25), (10, 100), we use
xs = [-10, -5, 0, 5, 10]
and
ys = [100, 25, 0, 25, 100]
as the lists of x-coordinates and y-coordinates of those points.
Note that each item of ys is the square of the item at the same position in xs.
End of explanation
### Plot setup: begin ###
# Everything from here down to the three-hash marker below just prepares the figure.
# Don't try to understand it for now; simply remember that it is needed.
# It sets up the "canvas" on which the plot is drawn.
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# Put the x-axis at the bottom and the y-axis through the centre of the figure.
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('zero')
# Remove the box surrounding the plot.
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
### Plot setup: end ###
# Provide the lists of x-coordinates and y-coordinates.
# Here we use list comprehensions.
xs = [x for x in range(-10, 11)]
ys = [x**2 for x in xs]
# Now call the plot() function to draw the graph.
plt.plot(xs, ys)
plt.show()
Explanation: Plotting more points gives a smoother graph.
End of explanation
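If an even smoother curve is wanted, the x-values do not have to be integers; a small sketch (not in the original notebook) that samples every 0.1 between -10 and 10 with a comprehension:
xs = [i / 10 for i in range(-100, 101)]
ys = [x**2 for x in xs]
plt.plot(xs, ys)
plt.show()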
from math import exp
[exp(n) for n in range(10) if n % 2 == 1]
Explanation: Exercises
Exercise
The exponential function $f(x) = e^x$, one of the most common functions in mathematics, is available as exp() in the math module.
Implement the list below with a comprehension.
$$[e^1, e^3, e^5, e^7, e^9]$$
Note: the value of $e$ is roughly 2.718.
Sample answer:
End of explanation
[exp(3*n) for n in range(1,6)]
Explanation: Exercise
Implement the list below with a comprehension.
$$[e^3, e^6, e^9, e^{12}, e^{15}]$$
Hint: range(1, 6) can be used.
Sample answer:
End of explanation
about_python = 'Python is a general-purpose programming language. \
It is becoming more and more popular \
for doing data science.'
Explanation: Exercise
Comprehensions are very effective for processing data.
For example, we can analyse the lengths of the words used in an English sentence.
Below is a sentence introducing Python.
End of explanation
words = about_python.split()
words
Explanation: To analyse the lengths of the words used in the sentence above, we first split it into words.
For this we use the string method split().
End of explanation
L =[]
for x in words:
L.append((x.upper(), len(x)))
L
Explanation: We now want to build a list of tuples, each containing an item of the words list converted to upper case together with the length of that string.
[('PYTHON', 6), ('IS', 2), ....]
Using a loop, it can be written as follows.
End of explanation
[(x.upper(), len(x)) for x in words]
Explanation: With a list comprehension, the same thing can be written more concisely, as below.
End of explanation
[(x.upper(), len(x)) for x in words[:5]]
Explanation: To handle only the first five words, we can do the following.
End of explanation
[(words[n].upper(), len(words[n])) for n in range(len(words)) if n < 5]
Explanation: It is also possible to restrict the indices instead, i.e. by using an extra if clause.
End of explanation
[(x.strip('.').upper(), len(x.strip('.'))) for x in words]
Explanation: Question:
Among the words above, a full stop appears in the two cases 'language.' and 'science.'.
Modify the code above so that the word lengths are reported with the full stop excluded.
Hint: use the strip() string method.
Sample answer:
End of explanation
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
xs = [x for x in range(-10, 11)]
ys = [max(0, x) for x in xs]
plt.plot(xs, ys)
plt.show()
Explanation: Exercise
Let's plot the ReLU (Rectified Linear Unit) function, widely used as an activation function in the artificial neural network branch of machine learning. The ReLU function is defined as follows.
$$
f(x) = \begin{cases} 0 & \text{if } x < 0, \\ x & \text{if } x \ge 0.\end{cases}
$$
Reference: a short description of the ReLU function can be found here.
Sample answer:
End of explanation |
3,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step6: We need to define some parameters
Step7: We should check if everything is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurement info
All the measurement data is in the d variable. We can print it
Step10: Or check the measurements duration
Step11: Compute background
Compute the background using automatic threshold
Step12: Burst search and selection
Step13: FRET fit
Max position of the Kernel Density Estimation (KDE)
Step14: Weighted mean of $E$ of each burst
Step15: Gaussian fit (no weights)
Step16: Gaussian fit (using burst size as weights)
Step17: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step18: The Maximum likelihood fit for a Gaussian population is the mean
Step19: Computing the weighted mean and weighted standard deviation we get
Step20: Save data to file
Step21: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step22: This is just a trick to format the different variables | Python Code:
ph_sel_name = "None"
data_id = "17d"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:36:36 2017
Duration: 9 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurement info
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
d_orig = d
d = bext.burst_search_and_gate(d, m=10, F=7)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
bandwidth = 0.03
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_fret
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst search and selection
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
# ds_fret.add(E_fitter = E_fitter)
# dplot(ds_fret, hist_fret_kde, weights='size', bins=np.r_[-0.2:1.2:bandwidth], bandwidth=bandwidth);
# plt.axvline(E_pr_fret_kde, ls='--', color='r')
# print(ds_fret.ph_sel, E_pr_fret_kde)
Explanation: FRET fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
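For reference, the NumPy code above implements the standard weighted estimators (shown here only as a reading aid, with $w_i$ the per-burst weights):
$$\bar{S}_w = \frac{\sum_i w_i S_i}{\sum_i w_i}, \qquad \sigma_{S,w} = \sqrt{\frac{\sum_i w_i\,(S_i - \bar{S}_w)^2}{\sum_i w_i}}$$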
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-AND-gate.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
3,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Вопросы по прошлому занятию
* Почему файлы лучше всего открывать через with?
* Зачем нужен Git?
* Как переместить файл из папки "/some/folder" в папку "/another/dir"?
* Зачем нужен subprocess.PIPE?
* Как вывести JSON-строку "красиво" c отступом в два пробела?
* Как сделать POST-запрос через requests?
Разбор задачи 1
https
Step1: Разбор задачи 2
https
Step2: Разбор задачи 3
https
Step3: Регулярные выражения
Зачем они нужны?
Step4: Хороший сайт для практики
https
Step5: "Итератор" - это объект с интерфейсом перебора
"Генератор" - это итератор, в котором при переборе запускается дополнительный код
Step6: Безымянные функции
Step7: Функция вызывается ТОЛЬКО на итерации цикла
Внутри Python на каждой итерации вызывает next(our_gen)
Как такое написать?
Step8: Числа Фибоначчи!
Step9: "Генератор" - это функция с запоминанием последнего результата и несколькими точками входа-выхода
Step10: Пример пайплайна с генераторами
Step11: Еще почитать про генераторы
Step12: Передаем аргументы пачкой
Step13: Декораторы
Декоратор - это функция, которая принимает функцию и возвращает функцию
Step14: Упражнение на декораторы
Напишите декоратор, который принимает функцию с одним аргументом и кэширует результат ее выполнения, то есть хранит словарь "аргумент->результат".
https | Python Code:
from collections import Counter
def checkio(arr):
counts = Counter(arr)
return [
w for w in arr if counts[w] > 1
]
Explanation: Вопросы по прошлому занятию
* Почему файлы лучше всего открывать через with?
* Зачем нужен Git?
* Как переместить файл из папки "/some/folder" в папку "/another/dir"?
* Зачем нужен subprocess.PIPE?
* Как вывести JSON-строку "красиво" c отступом в два пробела?
* Как сделать POST-запрос через requests?
Разбор задачи 1
https://py.checkio.org/mission/non-unique-elements/
Как посчитать, сколько раз элемент встречается в списке?
Как отфильтровать список?
End of explanation
def checkio(nums):
return sorted(nums, key=abs)
Explanation: Разбор задачи 2
https://py.checkio.org/mission/absolute-sorting/
Как вообще отсортировать список?
Какой аргумент поможет брать при сортировке абсолютное значение?
End of explanation
def checkio(board):
for i in range(3):
if board[i][0] == board[i][1] == board[i][2] != ".":
return board[i][0]
if board[0][i] == board[1][i] == board[2][i] != ".":
return board[0][i]
if board[0][0] == board[1][1] == board[2][2] != ".":
return board[0][0]
if board[0][2] == board[1][1] == board[2][0] != ".":
return board[0][2]
return "D"
Explanation: Разбор задачи 3
https://py.checkio.org/mission/x-o-referee/
Можно сравнивать более двух элементов за раз:
if a == b == c != d
End of explanation
import re
re.match(r"^\+7\d{10}$", "+78005553535").group(0) # проверить, соответствует ли строка целиком
a = "Пишите мне на адрес админ@суперхакер.рф или [email protected]! Чмоки!"
re.search(r"\w+@\w+\.\w{2,5}", a).group(0) # найти первое вхождение
re.findall(r"\w+@\w+\.\w{2,5}", a) # найти все вхождения
my_re = re.compile(r"ya_regulyarko") # позволит делать my_re.match(s), my_re.search(s), etc.
# Можно задавать диапазон: r"[A-Za-z]"
Explanation: Регулярные выражения
Зачем они нужны?
End of explanation
b = [
a * 10 for a in range(10)
if a % 2 == 0
]
b
c = (
a * 10 for a in range(10)
if a % 2 == 0
)
c
for num in c:
print(num)
Explanation: Хороший сайт для практики
https://regexr.com/
Опять парсинг
Загрузите в переменную содержимое файла test.html и найдите в нем все уникальные имена, не используя bs4. Именем считаются любые два слова, оба начинающиеся с большой буквы и разделенные пробелом.
Как в регулярном выражении обозначается пробел?
Итераторы и генераторы
End of explanation
a = [5, 6, 7, 8, 9, 10]
b = [3, 2, 1, 0, -1, -2]
def pow2(num):
return num ** 2
def divisible_by_3(num):
return num % 3 == 0
map(pow2, a) # “лениво” применить funс ко всем элементам списка
zip(a, b) # брать попарно элементы из двух списков
# а что вернет zip(a[::2], a[1::2])?
filter(divisible_by_3, a) # брать только те элементы, для которых func вернет True
Explanation: "Итератор" - это объект с интерфейсом перебора
"Генератор" - это итератор, в котором при переборе запускается дополнительный код
End of explanation
# короткие безымянные функции
a = [5, 6, 7]
map(lambda num: num ** 2, a)
filter(lambda num: num % 3 == 0, a)
Explanation: Безымянные функции
End of explanation
# то же самое, что
# gen_squares = (i * i for i in range(n))
def gen_squares(n):
for i in range(n):
yield i * i
mygen = gen_squares(5)
next(mygen)
Explanation: Функция вызывается ТОЛЬКО на итерации цикла
Внутри Python на каждой итерации вызывает next(our_gen)
Как такое написать?
End of explanation
def gen(n):
if n <= 1:
return 0
elif n == 2:
return 1
a = 0
b = 1
for _ in range(2, n):
a, b = b, a + b
return b
# Минутка "вопросов на собеседовании" - а если через рекурсию?
# Ответ - красиво, но в этом случае так делать нельзя!
def gen2(n):
if n <= 1:
return 0
elif n == 2:
return 1
return gen2(n - 2) + gen2(n - 1)
def gen():
a = 0
b = 1
yield a
yield b
while True:
a, b = b, a + b
yield b
Explanation: Числа Фибоначчи!
End of explanation
# единственный способ получить значение из генератора - это проитерироваться по нему!
# "[1]" не поможет, но поможет next()
# можно явно привести к листу, но так делать не стоит - все окажется в памяти
a = gen()
for i, num in enumerate(a):
if i == 5:
print(num)
break
Explanation: "Генератор" - это функция с запоминанием последнего результата и несколькими точками входа-выхода
End of explanation
from collections import Counter
with open("USlocalopendataportals.csv", "r") as testfile:
recs = (l.split(",") for l in testfile)
next(recs) # пропускаем заголовок
owners = (rec[3] for rec in recs)
print(Counter(owners)["Government"])
Explanation: Пример пайплайна с генераторами
End of explanation
def my_function(*args, **kwargs):
print(args)
print(kwargs)
my_function(1, "foo", nick="Mushtandoid", arg=123)
Explanation: Еще почитать про генераторы:
http://dabeaz.com/generators-uk/index.html
http://dabeaz.com/coroutines/index.html
Еще об аргументах функций
End of explanation
def summator(a, b):
return a + b
a = [45, 78]
print(summator(*a))
print(summator(**{"a": 11, "b": 34}))
Explanation: Передаем аргументы пачкой
End of explanation
import time
def my_function(a, b):
time.sleep(2)
return a + b
def timer(func):
def decoy_func(*args, **kwargs):
t = time.time()
res = func(*args, **kwargs)
print("Execution time: {0}".format(time.time() - t))
return res
return decoy_func
@timer
def my_function(a, b):
time.sleep(2)
return a + b
my_function(5, 6)
Explanation: Декораторы
Декоратор - это функция, которая принимает функцию и возвращает функцию
End of explanation
import telepot
bot = telepot.Bot('422088359:AAHgC2o92CHWrMfP8pRnsFAFcqy6epY5wuk')
bot.getMe()
Explanation: Упражнение на декораторы
Напишите декоратор, который принимает функцию с одним аргументом и кэширует результат ее выполнения, то есть хранит словарь "аргумент->результат".
https://docs.python.org/3/library/functools.html#functools.lru_cache
Telegram!
Качаем, регистрируемся: https://telegram.org/
Главный бот для всех ботов: https://telegram.me/botfather
Любой бот требует токен для работы. Токен получается через главного бота.
/start # начало диалога
/newbot # создаем нового бота
Набираем имя бота (латиницей)
Набираем логин для бота (можно такой же, как имя, но должен заканчиваться на bot)
Копируем себе токен
pip install telepot
End of explanation |
3,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial #01
Simple Linear Model
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the so-called MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow. The results are then plotted and discussed.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. It also helps if you have a basic understanding of Machine Learning and classification.
Imports
Step1: This was developed using Python 3.6.1 (Anaconda) and TensorFlow version
Step2: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step3: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step4: One-Hot Encoding
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are
Step5: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
Step6: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
Step7: Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
Step8: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step9: Plot a few images to see if data is correct
Step10: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
Step13: Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called weights and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.
Step14: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
Step15: Model
This simple mathematical model multiplies the images in the placeholder variable x with the weights and then adds the biases.
The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix.
Note that the name logits is typical TensorFlow terminology, but other people may call the variable something else.
Step16: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the logits matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in y_pred.
Step17: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
Step18: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for weights and biases. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the logits because it also calculates the softmax internally.
Step19: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step20: Optimization method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step21: Performance measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
Step22: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step23: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step24: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
Step25: Helper-function to perform optimization iterations
There are 50.000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
Step26: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
Step27: Helper-functions to show performance
Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
Step28: Function for printing the classification accuracy on the test-set.
Step29: Function for printing and plotting the confusion matrix using scikit-learn.
Step30: Function for plotting examples of images from the test-set that have been mis-classified.
Step31: Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
Step32: Performance before any optimization
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happens to be zero digits.
Step33: Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
Step34: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
Step35: Performance after 10 optimization iterations
Step36: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
Step37: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
Step38: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
Step39: We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
Explanation: TensorFlow Tutorial #01
Simple Linear Model
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the so-called MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow. The results are then plotted and discussed.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. It also helps if you have a basic understanding of Machine Learning and classification.
Imports
End of explanation
tf.__version__
Explanation: This was developed using Python 3.6.1 (Anaconda) and TensorFlow version:
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
data.test.labels[0:5, :]
Explanation: One-Hot Encoding
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are:
End of explanation
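As an illustration of the encoding itself (not part of the original tutorial), a One-Hot vector can be produced with NumPy by indexing into an identity matrix; the class indices 3 and 7 below are made-up example values:
import numpy as np
np.eye(10)[np.array([3, 7])]   # two rows, all zeros except a 1 at index 3 and at index 7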
data.test.cls = np.array([label.argmax() for label in data.test.labels])
Explanation: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
End of explanation
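A vectorized equivalent of the list comprehension above, assuming the labels are a 2-dimensional NumPy array as loaded here, would be:
data.test.cls = np.argmax(data.test.labels, axis=1)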
data.test.cls[0:5]
Explanation: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
End of explanation
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
End of explanation
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
x = tf.placeholder(tf.float32, [None, img_size_flat])
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used to change the input to the graph.
Model variables that are going to be optimized so as to make the model perform better.
The model which is essentially just a mathematical function that calculates some output given the input in the placeholder variables and the model variables.
A cost measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables of the model.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
y_true = tf.placeholder(tf.float32, [None, num_classes])
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
y_true_cls = tf.placeholder(tf.int64, [None])
Explanation: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
End of explanation
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
Explanation: Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called weights and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.
End of explanation
biases = tf.Variable(tf.zeros([num_classes]))
Explanation: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
End of explanation
logits = tf.matmul(x, weights) + biases
Explanation: Model
This simple mathematical model multiplies the images in the placeholder variable x with the weights and then adds the biases.
The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix.
Note that the name logits is typical TensorFlow terminology, but other people may call the variable something else.
End of explanation
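A quick NumPy sketch (illustrative only, using dummy zero arrays) of how the shapes combine and how the bias vector is broadcast across rows:
import numpy as np
x_demo = np.zeros((5, 784))    # 5 images, img_size_flat = 784
w_demo = np.zeros((784, 10))   # img_size_flat x num_classes
b_demo = np.zeros(10)          # one bias per class
print((x_demo.dot(w_demo) + b_demo).shape)   # (5, 10)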
y_pred = tf.nn.softmax(logits)
Explanation: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the logits matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in y_pred.
End of explanation
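For intuition, a minimal NumPy version of the row-wise softmax (a sketch, not how TensorFlow implements it internally; subtracting the row maximum is only for numerical stability):
import numpy as np
def softmax_rows(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilise before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)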
y_pred_cls = tf.argmax(y_pred, axis=1)
Explanation: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
End of explanation
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=y_true)
Explanation: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for weights and biases. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the logits because it also calculates the softmax internally.
End of explanation
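In formula form, the value computed for a single image is (with $y$ the One-Hot true label and $\hat{y}$ the softmax output):
$$H(y, \hat{y}) = -\sum_{j=1}^{10} y_j \log \hat{y}_j$$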
cost = tf.reduce_mean(cross_entropy)
Explanation: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
End of explanation
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
Explanation: Optimization method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
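Other TensorFlow 1.x optimizers can be dropped in here without changing anything else; for example (a possible variation, not used in the rest of this tutorial):
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)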
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Performance measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
End of explanation
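A tiny NumPy illustration of the cast-then-average idea (made-up boolean values):
import numpy as np
np.mean(np.array([True, False, True, True]).astype(np.float32))   # 0.75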
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
session.run(tf.global_variables_initializer())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
End of explanation
batch_size = 100
Explanation: Helper-function to perform optimization iterations
There are 50.000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
End of explanation
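A rough back-of-the-envelope sketch: with 50,000 training images and this batch size, covering the training set once (one epoch) takes about
iterations_per_epoch = 50000 // batch_size   # = 500 optimization iterations per epoch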
def optimize(num_iterations):
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
# Note that the placeholder for y_true_cls is not set
# because it is not used during training.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
Explanation: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
End of explanation
feed_dict_test = {x: data.test.images,
y_true: data.test.labels,
y_true_cls: data.test.cls}
Explanation: Helper-functions to show performance
Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
End of explanation
def print_accuracy():
# Use TensorFlow to compute the accuracy.
acc = session.run(accuracy, feed_dict=feed_dict_test)
# Print the accuracy.
print("Accuracy on test-set: {0:.1%}".format(acc))
Explanation: Function for printing the classification accuracy on the test-set.
End of explanation
def print_confusion_matrix():
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the predicted classifications for the test-set.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
# Make various adjustments to the plot.
plt.tight_layout()
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Function for printing and plotting the confusion matrix using scikit-learn.
End of explanation
def plot_example_errors():
# Use TensorFlow to get a list of boolean values
# whether each test-image has been correctly classified,
# and a list for the predicted class of each image.
correct, cls_pred = session.run([correct_prediction, y_pred_cls],
feed_dict=feed_dict_test)
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_weights():
# Get the values for the weights from the TensorFlow variable.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Create figure with 3x4 sub-plots,
# where the last 2 sub-plots are unused.
fig, axes = plt.subplots(3, 4)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Only use the weights for the first 10 sub-plots.
if i<10:
# Get the weights for the i'th digit and reshape it.
# Note that w.shape == (img_size_flat, 10)
image = w[:, i].reshape(img_shape)
# Set the label for the sub-plot.
ax.set_xlabel("Weights: {0}".format(i))
# Plot the image.
ax.imshow(image, vmin=w_min, vmax=w_max, cmap='seismic')
# Remove ticks from each sub-plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
End of explanation
print_accuracy()
plot_example_errors()
Explanation: Performance before any optimization
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happen to be zero digits.
End of explanation
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
Explanation: Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
End of explanation
plot_weights()
Explanation: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
End of explanation
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_accuracy()
plot_example_errors()
plot_weights()
Explanation: Performance after 10 optimization iterations
End of explanation
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_accuracy()
plot_example_errors()
Explanation: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
End of explanation
plot_weights()
Explanation: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
End of explanation
print_confusion_matrix()
Explanation: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
3,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Magnetic Inversion
Objective
Step1: Now that we have all our spatial components, we can create our linear system. For a single location and single component of the data, the system would look like this
Step2: Once we have our problem, we can use the inversion tools in SimPEG to run our inversion
Step3: Inversion has converged. We can plot sections through the model.
Step4: Great, we have a 3D model of susceptibility, but the job is not done yet.
A VERY important step of the inversion workflow is to look at how well the model can predict the observed data.
The figure below compares the observed, predicted and normalized residual. | Python Code:
from SimPEG import Mesh
from SimPEG.Utils import mkvc, surface2ind_topo
from SimPEG import Maps
from SimPEG import Regularization
from SimPEG import DataMisfit
from SimPEG import Optimization
from SimPEG import InvProblem
from SimPEG import Directives
from SimPEG import Inversion
from SimPEG import PF
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# First we need to define the direction of the inducing field
# As a simple case, we pick a vertical inducing field of magnitude 50,000nT.
# From old convention, field orientation is given as an azimuth from North
# (positive clockwise) and dip from the horizontal (positive downward).
H0 = (60000.,90.,0.)
# Create a mesh
dx = 5.
hxind = [(dx,5,-1.3), (dx, 10), (dx,5,1.3)]
hyind = [(dx,5,-1.3), (dx, 10), (dx,5,1.3)]
hzind = [(dx,5,-1.3),(dx, 10)]
mesh = Mesh.TensorMesh([hxind, hyind, hzind], 'CCC')
# Get index of the center
midx = int(mesh.nCx/2)
midy = int(mesh.nCy/2)
# Lets create a simple Gaussian topo and set the active cells
[xx,yy] = np.meshgrid(mesh.vectorNx,mesh.vectorNy)
zz = -np.exp( ( xx**2 + yy**2 )/ 75**2 ) + mesh.vectorNz[-1]
topo = np.c_[mkvc(xx),mkvc(yy),mkvc(zz)] # We would usually load a topofile
actv = surface2ind_topo(mesh,topo,'N') # Go from topo to actv cells
actv = np.asarray([inds for inds, elem in enumerate(actv, 1) if elem], dtype = int) - 1
#nC = mesh.nC
#actv = np.asarray(range(mesh.nC))
# Create active map to go from reduce space to full
actvMap = Maps.InjectActiveCells(mesh, actv, -100)
nC = len(actv)
# Create and array of observation points
xr = np.linspace(-20., 20., 20)
yr = np.linspace(-20., 20., 20)
X, Y = np.meshgrid(xr, yr)
# Let just put the observation above the topo
Z = -np.exp( ( X**2 + Y**2 )/ 75**2 ) + mesh.vectorNz[-1] + 5.
#Z = np.ones(shape(X)) * mesh.vectorCCz[-1]
# Create a MAGsurvey
rxLoc = np.c_[mkvc(X.T), mkvc(Y.T), mkvc(Z.T)]
rxLoc = PF.BaseMag.RxObs(rxLoc)
srcField = PF.BaseMag.SrcField([rxLoc],param = H0)
survey = PF.BaseMag.LinearSurvey(srcField)
Explanation: Linear Magnetic Inversion
Objective:
In this tutorial we will create a simple magnetic problem from scratch using the SimPEG framework.
We are using the integral form of the magnetostatic problem. In the absence of free-currents or changing magnetic field, magnetic material can give rise to a secondary magnetic field according to:
$$\vec b = \frac{\mu_0}{4\pi} \int_{V} \vec M \cdot \nabla \nabla \left(\frac{1}{r}\right) \; dV $$
Where $\mu_0$ is the magnetic permeability of free-space, $\vec M$ is the magnetization per unit volume and $r$ defines the distance between the observed field $\vec b$ and the magnetized object. Assuming a purely induced response, the strength of magnetization can be written as:
$$ \vec M = \mu_0 \kappa \vec H_0 $$
where $\vec H$ is an external inducing magnetic field, and $\kappa$ the magnetic susceptibility of matter.
As derived by Sharma 1966, the integral can be evaluated for rectangular prisms such that:
$$ \vec b(P) = \mathbf{T} \cdot \vec H_0 \; \kappa $$
Where the tensor matrix $\bf{T}$ relates the three components of magnetization $\vec M$ to the components of the field $\vec b$:
$$\mathbf{T} =
\begin{pmatrix}
T_{xx} & T_{xy} & T_{xz} \\
T_{yx} & T_{yy} & T_{yz} \\
T_{zx} & T_{zy} & T_{zz}
\end{pmatrix} $$
In general, we discretize the earth into a collection of cells, each contributing to the magnetic data such that:
$$\vec b(P) = \sum_{j=1}^{nc} \mathbf{T}_j \cdot \vec H_0 \; \kappa_j$$
giving rise to a linear problem.
The remainder of this notebook goes through all the important components of a 3D magnetic experiment, from mesh creation and topography to data and the inverse problem.
Enjoy.
End of explanation
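As a small aside, the (intensity, dip, azimuth) triplet used for H0 in the cell above can be converted to Cartesian components with a few lines of numpy. This sketch is not part of the original notebook, and the north/east/down ordering and sign convention are assumptions for illustration only:
import numpy as np
def inducing_field_xyz(intensity, dip_deg, azimuth_deg):
    # dip measured down from horizontal, azimuth clockwise from North (assumed convention)
    dip, azm = np.radians(dip_deg), np.radians(azimuth_deg)
    return np.r_[intensity * np.cos(dip) * np.cos(azm),   # north component
                 intensity * np.cos(dip) * np.sin(azm),   # east component
                 intensity * np.sin(dip)]                  # down component
print(inducing_field_xyz(60000., 90., 0.))  # ~[0, 0, 60000] for the vertical field used here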
# We can now create a susceptibility model and generate data
# Lets start with a simple block in half-space
model = np.zeros((mesh.nCx,mesh.nCy,mesh.nCz))
model[(midx-2):(midx+2),(midy-2):(midy+2),-6:-2] = 0.02
model = mkvc(model)
model = model[actv]
# Create active map to go from reduce set to full
actvMap = Maps.InjectActiveCells(mesh, actv, -100)
# Create reduced identity map
idenMap = Maps.IdentityMap(nP = nC)
# Create the forward model operator
prob = PF.Magnetics.MagneticIntegral(mesh, chiMap=idenMap, actInd=actv)
# Pair the survey and problem
survey.pair(prob)
# Compute linear forward operator and compute some data
d = prob.fields(model)
# Plot the model
m_true = actvMap * model
m_true[m_true==-100] = np.nan
plt.figure()
ax = plt.subplot(212)
mesh.plotSlice(m_true, ax = ax, normal = 'Y', ind=midy, grid=True, clim = (0., model.max()), pcolorOpts={'cmap':'viridis'})
plt.title('A simple block model.')
plt.xlabel('x'); plt.ylabel('z')
plt.gca().set_aspect('equal', adjustable='box')
# We can now generate data
data = d + np.random.randn(len(d)) # We add some random Gaussian noise (1nT)
wd = np.ones(len(data))*1. # Assign flat uncertainties
plt.subplot(221)
plt.imshow(d.reshape(X.shape), extent=[xr.min(), xr.max(), yr.min(), yr.max()])
plt.title('True data.')
plt.gca().set_aspect('equal', adjustable='box')
plt.colorbar()
plt.subplot(222)
plt.imshow(data.reshape(X.shape), extent=[xr.min(), xr.max(), yr.min(), yr.max()])
plt.title('Data + Noise')
plt.gca().set_aspect('equal', adjustable='box')
plt.colorbar()
plt.tight_layout()
# Create distance weights from our linear forward operator
wr = np.sum(prob.G**2.,axis=0)**0.5
wr = ( wr/np.max(wr) )
wr_FULL = actvMap * wr
wr_FULL[wr_FULL==-100] = np.nan
plt.figure()
ax = plt.subplot()
mesh.plotSlice(wr_FULL, ax = ax, normal = 'Y', ind=midx, grid=True, clim = (0, wr.max()),pcolorOpts={'cmap':'viridis'})
plt.title('Distance weighting')
plt.xlabel('x');plt.ylabel('z')
plt.gca().set_aspect('equal', adjustable='box')
Explanation: Now that we have all our spatial components, we can create our linear system. For a single location and single component of the data, the system would look like this:
$$ b_x =
\begin{bmatrix}
T_{xx}^1 & \dots & T_{xx}^{nc} & T_{xy}^1 & \dots & T_{xy}^{nc} & T_{xz}^1 & \dots & T_{xz}^{nc}
\end{bmatrix}
\begin{bmatrix}
\mathbf{M}_x \\ \mathbf{M}_y \\ \mathbf{M}_z
\end{bmatrix}
$$
where each of $T_{xx},\;T_{xy},\;T_{xz}$ are [nc x 1] long. For the $y$ and $z$ component, we need the two other rows of the tensor $\mathbf{T}$.
In our simple induced case, the magnetization components $\mathbf{M_x,\;M_y,\;M_z}$ are known and assumed to be constant everywhere, so we can reduce the size of the system such that:
$$ \vec{\mathbf{d}}_{\text{pred}} = (\mathbf{T\cdot M})\; \kappa$$
In most geophysical surveys, we are not collecting all three components, but rather the magnitude of the field, or $Total\;Magnetic\;Intensity$ (TMI) data.
Because the inducing field is really large, we will assume that the anomalous fields are parallel to $H_0$:
$$ d^{TMI} = \hat H_0 \cdot \vec d$$
We then end up with a much smaller system:
$$ d^{TMI} = \mathbf{F\; \kappa}$$
where $\mathbf{F} \in \mathbb{R}^{nd \times nc}$ is our $forward$ operator.
End of explanation
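To make the TMI projection concrete, here is a small numpy sketch (not taken from SimPEG) that projects three-component anomalous field vectors onto the unit vector of the inducing field; the array names and sizes are assumptions for illustration:
import numpy as np
h0_hat = np.r_[0., 0., 1.]          # unit vector of a vertical inducing field (assumed)
b_xyz = np.random.randn(400, 3)     # stand-in for nd anomalous field vectors (bx, by, bz)
d_tmi = b_xyz.dot(h0_hat)           # d_TMI = b . H0_hat, one value per observation
print(d_tmi.shape)                  # (400,)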
#survey.makeSyntheticData(data, std=0.01)
survey.dobs= data
survey.std = wd
survey.mtrue = model
# Create a regularization
reg = Regularization.Sparse(mesh, indActive=actv, mapping=idenMap)
reg.cell_weights = wr
reg.norms = [0, 1, 1, 1]
reg.eps_p = 1e-3
reg.eps_1 = 1e-3
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.W = 1/wd
# Add directives to the inversion
opt = Optimization.ProjectedGNCG(maxIter=100 ,lower=0.,upper=1., maxIterLS = 20, maxIterCG= 10, tolCG = 1e-3)
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
betaest = Directives.BetaEstimate_ByEig()
# Here is where the norms are applied
# Use pick a treshold parameter empirically based on the distribution of model
# parameters (run last cell to see the histogram before and after IRLS)
IRLS = Directives.Update_IRLS(f_min_change = 1e-3, minGNiter=3)
update_Jacobi = Directives.Update_lin_PreCond()
inv = Inversion.BaseInversion(invProb, directiveList=[betaest, IRLS, update_Jacobi])
m0 = np.ones(nC)*1e-4
mrec = inv.run(m0)
Explanation: Once we have our problem, we can use the inversion tools in SimPEG to run our inversion:
End of explanation
# Here is the recovered susceptibility model
ypanel = midx
zpanel = -4
m_l2 = actvMap * IRLS.l2model
m_l2[m_l2==-100] = np.nan
m_lp = actvMap * mrec
m_lp[m_lp==-100] = np.nan
m_true = actvMap * model
m_true[m_true==-100] = np.nan
plt.figure()
#Plot L2 model
ax = plt.subplot(231)
mesh.plotSlice(m_l2, ax = ax, normal = 'Z', ind=zpanel, grid=True, clim = (0., model.max()),pcolorOpts={'cmap':'viridis'})
plt.plot(([mesh.vectorCCx[0],mesh.vectorCCx[-1]]),([mesh.vectorCCy[ypanel],mesh.vectorCCy[ypanel]]),color='w')
plt.title('Plan l2-model.')
plt.gca().set_aspect('equal')
plt.ylabel('y')
ax.xaxis.set_visible(False)
plt.gca().set_aspect('equal', adjustable='box')
# Vertica section
ax = plt.subplot(234)
mesh.plotSlice(m_l2, ax = ax, normal = 'Y', ind=midx, grid=True, clim = (0., model.max()),pcolorOpts={'cmap':'viridis'})
plt.plot(([mesh.vectorCCx[0],mesh.vectorCCx[-1]]),([mesh.vectorCCz[zpanel],mesh.vectorCCz[zpanel]]),color='w')
plt.plot(([mesh.vectorCCx[0],mesh.vectorCCx[-1]]),([Z.min(),Z.max()]),color='k')
plt.title('E-W l2-model.')
plt.gca().set_aspect('equal')
plt.xlabel('x')
plt.ylabel('z')
plt.gca().set_aspect('equal', adjustable='box')
#Plot Lp model
ax = plt.subplot(232)
mesh.plotSlice(m_lp, ax = ax, normal = 'Z', ind=zpanel, grid=True, clim = (0., model.max()),pcolorOpts={'cmap':'viridis'})
plt.plot(([mesh.vectorCCx[0],mesh.vectorCCx[-1]]),([mesh.vectorCCy[ypanel],mesh.vectorCCy[ypanel]]),color='w')
plt.title('Plan lp-model.')
plt.gca().set_aspect('equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.gca().set_aspect('equal', adjustable='box')
# Vertical section
ax = plt.subplot(235)
mesh.plotSlice(m_lp, ax = ax, normal = 'Y', ind=midx, grid=True, clim = (0., model.max()),pcolorOpts={'cmap':'viridis'})
plt.plot(([mesh.vectorCCx[0],mesh.vectorCCx[-1]]),([mesh.vectorCCz[zpanel],mesh.vectorCCz[zpanel]]),color='w')
plt.title('E-W lp-model.')
plt.gca().set_aspect('equal')
ax.yaxis.set_visible(False)
plt.xlabel('x')
plt.gca().set_aspect('equal', adjustable='box')
#Plot True model
ax = plt.subplot(233)
mesh.plotSlice(m_true, ax = ax, normal = 'Z', ind=zpanel, grid=True, clim = (0., model.max()),pcolorOpts={'cmap':'viridis'})
plt.plot(([mesh.vectorCCx[0],mesh.vectorCCx[-1]]),([mesh.vectorCCy[ypanel],mesh.vectorCCy[ypanel]]),color='w')
plt.title('Plan true model.')
plt.gca().set_aspect('equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.gca().set_aspect('equal', adjustable='box')
# Vertical section
ax = plt.subplot(236)
mesh.plotSlice(m_true, ax = ax, normal = 'Y', ind=midx, grid=True, clim = (0., model.max()),pcolorOpts={'cmap':'viridis'})
plt.plot(([mesh.vectorCCx[0],mesh.vectorCCx[-1]]),([mesh.vectorCCz[zpanel],mesh.vectorCCz[zpanel]]),color='w')
plt.title('E-W true model.')
plt.gca().set_aspect('equal')
plt.xlabel('x')
ax.yaxis.set_visible(False)
plt.gca().set_aspect('equal', adjustable='box')
Explanation: Inversion has converged. We can plot sections through the model.
End of explanation
# Plot predicted data and residual
plt.figure()
pred = prob.fields(mrec) #: this is matrix multiplication!!
plt.subplot(221)
plt.imshow(data.reshape(X.shape))
plt.title('Observed data.')
plt.gca().set_aspect('equal', adjustable='box')
plt.colorbar()
plt.subplot(222)
plt.imshow(pred.reshape(X.shape))
plt.title('Predicted data.')
plt.gca().set_aspect('equal', adjustable='box')
plt.colorbar()
plt.subplot(223)
plt.imshow(data.reshape(X.shape) - pred.reshape(X.shape))
plt.title('Residual data.')
plt.gca().set_aspect('equal', adjustable='box')
plt.colorbar()
plt.subplot(224)
plt.imshow( (data.reshape(X.shape) - pred.reshape(X.shape)) / wd.reshape(X.shape) )
plt.title('Normalized Residual')
plt.gca().set_aspect('equal', adjustable='box')
plt.colorbar()
plt.tight_layout()
Explanation: Great, we have a 3D model of susceptibility, but the job is not done yet.
A VERY important step of the inversion workflow is to look at how well the model can predict the observed data.
The figure below compares the observed, predicted and normalized residual.
End of explanation |
3,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step15: Table of Contents
<p><div class="lev1"><a href="#Bayesian-Networks-Essays"><span class="toc-item-num">1 </span>Bayesian Networks Essays</a></div><div class="lev2"><a href="#Cancer-test"><span class="toc-item-num">1.1 </span>Cancer test</a></div><div class="lev3"><a href="#Definition"><span class="toc-item-num">1.1.1 </span>Definition</a></div><div class="lev3"><a href="#Two-Cancer-Test"><span class="toc-item-num">1.1.2 </span>Two Cancer Test</a></div><div class="lev3"><a href="#Conditional-Independent"><span class="toc-item-num">1.1.3 </span>Conditional Independent</a></div><div class="lev2"><a href="#Happinnes-Hipothesys"><span class="toc-item-num">1.2 </span>Happinnes Hipothesys</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.3 </span>References</a></div>
# Bayesian Networks Essays
Step16: Cancer test
Step17: Definition
$
P(C) = 0.01\\
P(\neg C) = 0.99\\
P(+|C) = 0.9\\
P(-|C) = 0.1\\
P(+|\neg C) = 0.2\\
P(-|\neg C) = 0.8
$
Step18: Two Cancer Test
Step19: Conditional Independent
Step21: Happinnes Hipothesys | Python Code:
from IPython.display import HTML, display
from nxpd import draw
from functools import wraps
from itertools import permutations
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import re
%matplotlib inline
Σ = sum
def auto_display(f):
@wraps(f)
def _f(self, *args, **kwargs):
verbose = self.verbose
self.verbose = False
term = f(self, *args, **kwargs)
self.verbose = verbose
self.display(term)
return self.values[term]
return _f
def draw_graph(
graph, labels=None
):
# create networkx graph
G = nx.DiGraph()
G.graph['dpi'] = 120
G.add_nodes_from(set([
graph[k1][k2]
for k1 in range(len(graph))
for k2 in range(len(graph[k1]))
]))
G.add_edges_from(graph)
return draw(G, show='ipynb')
def EX(ex, P):
return eval(ex)
class BayesianNetework():
    """Some useful rules derived from [Kolmogorov Axioms]
    (https://en.wikipedia.org/wiki/Probability_axioms)
    for random variables."""
prior_list = set([])
values = {}
verbose = False
display_formula = False
def __init__(self, verbose: bool=False, display_formula: bool=False):
self.verbose = verbose
self.display_formula = display_formula
self.values = {}
self.prior_list = set([])
def display(self, of: str):
if self.verbose:
print('P(%s)=%s' % (of, self.values[of]))
def check(self, of: str):
return of in self.values
@auto_display
def P(self, of: str):
return of
def prior(
self, of: str=None, value: float=None
):
        """P(C): the prior probability (what we know before the evidence)."""
if not value is None:
self.values[of] = value
self.prior_list |= {of}
return self.P(of)
def likelihood(
self, of: [str]=None, given: [str]=None, value: float=None
):
        """P(+|C): the likelihood of the data given the hypothesis."""
if isinstance(of, str):
of = [of]
if isinstance(given, str):
given = [given]
_of = '.'.join(of)
for g in permutations(given, len(given)):
_given = ','.join(g)
term = '%s|%s' % (_of, _given)
self.values[term] = value
return self.P(term)
def chain_rule():
        """Chain Rule:
        P(A1,A2…An)=P(A1)P(A2|A1)…P(An|A1,A2…An−1)
        """
pass
@auto_display
def p_joint(self, A: str, B: str):
term = '%s,%s' % (A, B)
self.values[term] = self.P(A) * self.P(B)
return term
@auto_display
def p_total(self, of: [str], given: [str]):
        """Total Probability:
        P(A|C) = ∑i P(A|C,Bi) * P(Bi|C)
        """
# P(+1|+2)=P(+1|+2,C)P(C|+2)+P(+1|+2,¬C)P(¬C|+2)=…=0.2304
P = self.P
term = '%s|%s' % (of, given)
exprs = []
for prior in self.prior_list:
if given.replace('!', '') in prior.replace('!', ''):
continue
if not self.check('%s|%s,%s' % (of, given, prior)):
continue
exprs.append(
"P('%s|%s,%s') * P('%s|%s')" % (
of, given, prior, prior, given
)
)
if self.display_formula:
print('\nΣ', exprs)
self.values[term] = Σ(map(lambda ex: EX(ex, P), exprs))
return term
@auto_display
def p_marginal(self, of: str):
        """For Bi∩Bj=∅, ∑iBi=Ω:
        P(A) = ∑i P(A,Bi)
        """
P = self.p_joint
self.values[of] = sum([
P('%s|%s' % (of, b), b)
for b in self.prior_list
])
return of
@auto_display
def bayes_rules(self, of: [str], given: [str]):
        """P(A|B,C) = (P(B|A,C)*P(A|C))/P(B|C)
        Example:
        P(C|+) = (P(+|C)*P(C))/P(+)
        """
P = self.P
_of = '.'.join(of)
_given = ','.join(given)
_likelihood = '%s|%s' % (_given, _of)
_prior = _of
_evidence = _given
term = ('%s|%s' % (_of, _given))
self.values[term] = (
P(_likelihood) * P(_prior)
)/P(_evidence)
return term
@auto_display
def conditional(self, of: str, given: str):
        """Conditional Probability:
        P(A,B)=P(B|A)P(A)=P(A|B)P(B)
        """
P = self.P
term = '%s|%s' % (of, given)
self.values[term] = None
return term
@auto_display
def conditional_independent(self, p1: str, p2: str):
self.values[p1] = self.P(p2)
return p1
@auto_display
def evidence(self, of: str):
        """P(+): e.g. the evidence,
        the marginal probability that the test is positive.
        """
self.values[of] = self.p_marginal(of)
return of
@auto_display
def proportional_posterior(self, of: [str], given: [str]):
        """Posterior probability ∝ Likelihood × Prior probability"""
P = self.P
p = {}
_of = '.'.join(of)
_given = ','.join(given)
term = '%s|%s' % (_of, _given)
for i, _prior in enumerate(self.prior_list):
_p_likelihood = []
for _given in given:
_likelihood = '%s|%s' % (_given, _prior)
_p_likelihood.append(P(_likelihood))
p[_prior] = np.prod(np.array(_p_likelihood)) * P(_prior)
self.values[term] = p[_of]/sum(p.values())
return term
@auto_display
def posterior(self, of: [str], given: [str]):
        """P(C|+): the posterior probability, the new belief after the evidence
        is processed, using Bayes' rule.
        The posterior probability can be written in the memorable form as
        Posterior probability ∝ Likelihood × Prior probability.
        """
if isinstance(of, str):
of = [of]
if isinstance(given, str):
given = [given]
_of = '.'.join(of)
_given = ','.join(given)
term = '%s|%s' % (_of, _given)
if _given in self.values:
self.values[term] = self.bayes_rules(of=of, given=given)
else:
self.values[term] = self.proportional_posterior(
of=of, given=given
)
return term
Explanation: Table of Contents
<p><div class="lev1"><a href="#Bayesian-Networks-Essays"><span class="toc-item-num">1 </span>Bayesian Networks Essays</a></div><div class="lev2"><a href="#Cancer-test"><span class="toc-item-num">1.1 </span>Cancer test</a></div><div class="lev3"><a href="#Definition"><span class="toc-item-num">1.1.1 </span>Definition</a></div><div class="lev3"><a href="#Two-Cancer-Test"><span class="toc-item-num">1.1.2 </span>Two Cancer Test</a></div><div class="lev3"><a href="#Conditional-Independent"><span class="toc-item-num">1.1.3 </span>Conditional Independent</a></div><div class="lev2"><a href="#Happinnes-Hipothesys"><span class="toc-item-num">1.2 </span>Happinnes Hipothesys</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.3 </span>References</a></div>
# Bayesian Networks Essays
End of explanation
# Graphs
graph = [('C', '+1')]
draw_graph(graph)
Explanation: Cancer test
End of explanation
bnet = BayesianNetework(verbose=True)
# Prior Probability
print('\nPrior Probability')
P = bnet.prior
P('C', 0.01)
P('!C', 1-P('C'))
# likelihood of the data given the hypothesis
print('\nlikelihood of the data given the hypothesis')
P = bnet.likelihood
P(of='+', given='C', value=0.9)
P(of='-', given='C', value=0.1)
P(of='+', given='!C', value=0.2)
P(of='-', given='!C', value=0.8)
print('\nEvidence')
P = bnet.evidence
P('+')
P('-')
print('\nThe posterior probability')
P = bnet.posterior
P(of='C', given='+')
P(of='C', given='-')
P(of='!C', given='+')
P(of='!C', given='-')
Explanation: Definition
$
P(C) = 0.01\\
P(\neg C) = 0.99\\
P(+|C) = 0.9\\
P(-|C) = 0.1\\
P(+|\neg C) = 0.2\\
P(-|\neg C) = 0.8
$
End of explanation
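As a quick sanity check of these numbers, independent of the BayesianNetework class, Bayes' rule can be evaluated by hand for a single positive test using plain arithmetic on the values defined above:
p_c, p_not_c = 0.01, 0.99
p_pos_c, p_pos_not_c = 0.9, 0.2
p_pos = p_pos_c * p_c + p_pos_not_c * p_not_c   # total probability of a positive test: 0.207
p_c_pos = p_pos_c * p_c / p_pos                 # posterior P(C|+), roughly 0.0435
print(p_pos, p_c_pos)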
# Graphs
graph = [('C', '+1'), ('C', '+2')]
draw_graph(graph)
print('\nThe posterior probability')
P = bnet.posterior
P(of='C', given=['+', '+'])
P(of='!C', given=['+', '+'])
P(of='C', given=['+', '-'])
# P(of=['C', '+'], given=['+'])
Explanation: Two Cancer Test
End of explanation
P = bnet.conditional_independent
P('+|+,C', '+|C')
P('+|+,!C', '+|!C')
P = bnet.p_total
P(of='+', given='+')
Explanation: Conditional Independent
End of explanation
# Graphs
graph = [('S', 'H'), ('R', 'H')]
draw_graph(graph)
bnet = BayesianNetework(verbose=True, display_formula=True)
display(HTML('<h3>PRIOR PROBABILITY</h3>'))
P = bnet.prior
P('S', 0.7)
P('!S', 0.3)
P('R', 0.01)
P('!R', 0.99)
display(HTML('<h3>JOINT PROBABILITY</h3>'))
P = bnet.p_joint
P('S', 'R')
P('S', '!R')
P('!S', 'R')
P('!S', '!R')
display(HTML('<h3>LIKELIHOOD PROBABILITY</h3>'))
P = bnet.likelihood
P(of='H', given=['S', 'R'], value=1)
P(of='H', given=['!S', 'R'], value=0.9)
P(of='H', given=['S', '!R'], value=0.7)
P(of='H', given=['!S', '!R'], value=0.1)
display(HTML('<h3>CONDITIONAL INDEPENDENCE</h3>'))
P = bnet.conditional_independent
P('R|S', 'R')
P('R|!S', 'R')
P('!R|S', '!R')
P('!R|!S', '!R')
P('S|R', 'S')
P('S|!R', 'S')
P('!S|R', '!S')
P('!S|!R', '!S')
display(HTML('<h3>EVIDENCE</h3>'))
P = bnet.p_total
P(of='H', given='S')
P(of='H', given='R')
P(of='H', given='!S')
P(of='H', given='!R')
#P(of='R', given=['H', 'S'])
P = bnet.evidence
#P('H')
display(HTML('<h3>POSTERIOR PROBABILITY</h3>'))
P = bnet.posterior
P(of='R', given=['H', 'S'])
None
Explanation: Happinnes Hipothesys
End of explanation |
3,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Db2 OData Tutorial
This tutorial will explain some of the features that are available in the IBM Data Server Gateway for OData Version 1.0.0. IBM Data Server Gateway for OData enables you to quickly create OData RESTful services to query and update data in IBM Db2 LUW.
An introduction to the OData gateway is found in the following developerWorks article
Step1: Db2 Extensions
Since we are connecting to a Db2 database, the following command will load the Db2 Jupyter notebook extension (%sql). The Db2 extension allows you to fully interact with the Db2 database, including the ability to drop and create objects. The OData gateway provides INSERT, UPDATE, DELETE, and SELECT capability to the database, but it doesn't have the ability to create or drop actual objects. The other option would be to use Db2 directly on the database server using utilities like CLP (Command Line Processor) or DSM (Data Server Manager).
Step2: <a id='top'></a>
Table of Contents
A Brief Introduction to Odata
<p>
* [Db2 and OData Connection Requirements](#connect)
* [Connecting through OData](#connectodata)
* [Set Command Syntax](#set)
<p>
* [A Quick Introduction](#quick)
* [Selecting Data from a Table](#sampleselect)
* [Displaying the OData Syntax](#viewodata)
* [Limiting Output Results](#limit)
* [Persistent Connection Information](#persistant)
* [Variables in OData Statements](#variables)
* [Retrieving URL, OData Command, and Parameters](#retrieveurl)
* [JSON Display in Firefox](#firefox)
<p>
* [SQL Command Syntax](#sqlsyntax)
* [SELECT Statements](#select)
* [Selecting Columns to Display](#columns)
* [FROM Clause](#from)
* [Describing the Table Structure](#describe)
* [WHERE Clause](#where)
* [LIMIT Clause](#limitclause)
* [INSERT Statement](#insert)
* [DELETE Statement](#delete)
* [UPDATE Statement](#update)
* [VIEWS](#views)
<p>
* [Summary](#summary)
[Back to Top](#top)
<a id='intro'></a>
## A Brief Introduction to OData
Rather than paraphrase what OData does, here is the official statement from the OData home page
Step3: If you connected to the SAMPLE database, you will have the EMPLOYEE and DEPARTMENT tables available to you. However, if you are connecting to a different database, you will need to execute the next command to populate the tables for you. Note, if you run this command and the two tables already exist, the tables will not be replaced. So don't worry if you execute this command by mistake.
Step4: Requesting data from Db2 using the standard %sql (ibm_db) interface is relatively straight-forward. We just need to place the SQL in the command and execute it to get the results.
Step5: Now that we have a working Db2 connection, we will need to set up an OData service to talk to Db2.
Back to Top
<a id='connectodata'></a>
Connecting through OData
Connecting through OData requires a different approach than a Db2 client. We still need to ask a bunch of questions on how we connect to the database, but this doesn't create a connection from the client. Instead what we end up creating is a service URL. This URL gives us access to Db2 through the OData gateway server.
The OData Server take the URL request and maps it to a Db2 resource, which could be one or more tables. The RESTful API needs this URL to communicate with Db2 but nothing else (userids, passwords, etc...) are sent with the request.
The following %odata command will prompt you for the connection parameters, similar to what happened with the Db2 connect. There are a few differences however. The connection requires the userid and password of the user connecting to the database, and the userid and password of a user with administration (DBABM) privileges.
The administrative user creates the service connection that will be used to communicate through the OData gateway and Db2. The regular userid and password is for the actual user that will connect to the database to manipulate the tables. Finally we need to have the schema (or owner) of the tables that will be accessed. From a Db2 perspective, this is similar to connecting to a DATABASE (SAMPLE) as userid FRED. The EMPLOYEE table was created under the userid DB2INST1, so to access the table we need to use DB2INST1.EMPLOYEE. If we didn't include the schema (DB2INST1), the query would fail since FRED was not the owner of the table.
The %odata PROMPT command will request all of the connection parameters and explain what the various fields are. Note
Step6: Back to Top
<a id='quick'></a>
A Quick Introduction
The following section will give you a quick introduction to using OData with Db2. More details on syntax and examples are found later on in the notebook.
Selecting Data from a Table
So far all we have done is set up the connection parameters, but no actual connection has been made to Db2, nor has an OData service been created. The creation of a service is done when the first SQL request is issued. The next statement will retrieve the values from our favorite EMPLOYEE table, but use OData to accomplish it.
Step7: Back to Top
<a id='viewodata'></a>
Displaying the OData Syntax
Under the covers a number of things happened when running this command. The SELECT * FROM EMPLOYEE is not what is sent to OData. The syntax is converted to something that the RESTful API understands. To view the actual OData syntax, the -e option is used to echo back the commands.
Step8: The results will show the URL service command used (http
Step9: One drawback of OData is that we don't get the actual error text returned. We know that the error code is, but the message isn't that descriptive. Using the %sql (Db2) command, we can find out that the table doesn't exist.
Step10: Back to Top
<a id='limit'></a>
Limiting Output Results
The results contain 43 rows. If you want to reduce the amount of rows being returned we can use the LIMIT clause on the SELECT statement. In addition, we can use the -j flag to return the data as JSON records.
Step11: To limit the results from a OData request, you must add the \$top=x modifier at the end of the service request. The format then becomes
Step12: The last example illustrates two additional features of the %odata command. First, you can span statements over multiple lines by using the backslash character ('\'). You could also use the %%odata command to do this without backslashes, but it unfortunately will not allow for variable substitution. The current settings being used by OData can be found by issuing the SETTINGS command.
You can specify the command with only the TABLE option and it will take the current DATABASE and SCHEMA names from any prior settings.
Step13: You can also refer to these values by using the settings['name'] variable. So the DROP statement just took the current DATABASE and SCHEMA settings and deleted the definition for the EMPLOYEE table. You could have done this directly with
Step14: And this command will show the connection service being created for us.
Step15: Back to Top
<a id='retrieveurl'></a>
Retrieving URL, OData Command, and Parameters
The %odata command will return the URL command for a select statement as part of the command
Step16: You can use this URL to directly access the results through a browser, or any application that can read the results returned by the OData gateway. The print statement below will display the URL as an active link. Click on that to see the results in another browser window.
Step17: When a URL is generated, we need to append the \$format=json tag at the end to tell the OData service and the browser how to handle the results. When we run OData and RESTful calls from a programming language (like Python), we are able to send information in the header which tells the API how to handle the results and parameters. All of the RESTful calls to the OData gateway use the following header information
Step18: Back to Top
<a id='select'></a>
SELECT Statements
The SELECT statement is the most complicated of the four statements that are allowed in OData. There are generally two forms that can be used when accessing a record. The first method uses the primary key of the table and it requires no arguments. Note that the examples will not show the URL that points to the OData service.
<pre>
/EMPLOYEES('000010')
</pre>
The second method is to use the \$filter query option. \$filter allows us to compare any column against a value. The equivalent OData statement for retrieving an individual employee is
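A hypothetical illustration of such a filter (the column name EMPNO is assumed for illustration and is not taken from the output below):
<pre>
/EMPLOYEES?$filter=EMPNO eq '000010'
</pre>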
Step19: You will notice that not all of the rows have been displayed. The output has been limited to 10 lines. 5 lines from the start of the answer set and 5 lines from the bottom of the answer set are displayed. If you want to change the maximum number of rows to be displayed, use the MAXROWS setting.
Step20: If you want an unlimited number of rows returned, set maxrows to -1.
Step21: It is better to limit the results from the answer set by using the LIMIT clause in the SELECT statement. LIMIT will force Db2 to stop retrieving rows after "x" number have been read, while the MAXROWS setting will retrieve all rows and then only display a portion of them. The one advantage of MAXROWS is that you see the bottom 5 rows while you would only be able to do that with Db2 if you could reverse sort the output. The current OData implementation does not have the ability to $orderby, so sorting to reverse the output is not possible.
Step22: Example
Step23: Back to Top
<a id='columns'></a>
Selecting Columns to Display
OData allows you to select which columns to display as part of the output. The $select query option requires a list of columns to be passed to it. For instance, the following SQL will only display the first name and last name of the top five employees.
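In OData terms this maps to the $select (and $top) query options; a hypothetical fragment, with column names assumed from the Db2 SAMPLE EMPLOYEE table, could look like this:
<pre>
/EMPLOYEES?$select=FIRSTNME,LASTNAME&$top=5
</pre>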
Example
Step24: The COUNT(*) function is available as part of a SELECT list and it cannot include any other column names. If you do include other column names they will be ignored.
Step25: One of the unusual behaviors of the COUNT(*) function is that will actually return the entire answer set under the covers. The %odata command strips the count out from the results and doesn't display the rows returned. That is probably not would you expect from this syntax! The COUNT function is better described as the count of physical rows returned. Here is the same example with 5 rows returned and the JSON records.
Step26: One of the recommendations would be not to use the COUNT(*) function to determine the amount of rows that will be retrieved, especially if you expect there to a large of number rows. To minimize the data returned, you can use the form COUNT(column) which will modify the OData request to return the count and ONLY that column in the result set. This is a compromise in terms of the amount of data returned. This example using the -r (raw) flag which results in all of the JSON headers and data to be displayed. The JSON flag (-j) will not display any records.
Step27: Back to Top
<a id='from'></a>
FROM Clause
The FROM clause is mandatory in any SELECT statement. If an OData service has already been established, there will be no service request sent to OData. Instead, the URL information stored on disk will be used to establish the connection.
If a service has not been established, the %odata command will create the service and then build the OData select statement. If you want to see the command to establish the service as well as the SELECT command, use the -e flag to echo the results.
If the table does not exist in the database you will receive an error message.
Step28: This actually can cause some issues if you try to reuse the connection information that was created with the UNKNOWN_TBL. Since the service could not determine the structure of the table, the service will not return any column information with a select statement. The next SQL statement will create the UNKNOWN_TBL.
Step29: Retrying the SELECT statement will result in 43 rows with no columns returned!
Step30: To correct this situation, you need to DROP the connect that the %odata program is using and reissue the SELECT statement.
Step31: Now you can try the SQL statement again.
Step32: Back to Top
<a id='describe'></a>
Describing the Table Structure
The SELECT statement needs to know what columns are going to be returned as part of the answer set. The asterix (*) returns all of the columns, but perhaps you only want a few of the columns. To determine what the columns are in the table along with the data types, you can use the DESCRIBE command. The following statement will show the structure of the EMPLOYEE table.
Step33: The datatypes are not the same as what one expect from a relational database. You get generic information on the character columns (String), and the numbers (Int16, Decimal). The Decimal specification actually contains the number of digits and decimal places but that isn't returned when using the table display.
| Data Type | Contents
|
Step34: Example
Step35: Example
Step36: Example
Step37: Example
Step38: Example
Step39: Converting to OData will mean that the search will look across the entire string, not just the beginning.
Step40: Back to Top
<a id='limitclause'></a>
Limit Clause
The LIMIT clause was discussed earlier in this notebook. LIMIT allows you to reduce the amount of rows that are returned in the answer set. The LIMIT clause is similar to FETCH FIRST x ROWS ONLY in Db2. The rows are always taken from the beginning of the answer set so there is no way to skip "x" rows before getting results. The facility does exist in the OData spaecification but has not been implemented in this release.
The LIMIT clause also works in conjunction with the %odata command. The default number of rows that are displayed in a table (result set) is set to 10 by default. So if you have 50 rows in your answer set, the first 5 are displayed and then the last 5 with the rows inbetween are hidden from view. If you want to see the entire answer set, you need to change the MAXROWS value to -1
Step41: Back to Top
<a id='insert'></a>
INSERT Command
OData allows you to insert data into a table through the use of the RESTful POST command and a JSON document that contains the field names and contents of those fields.
The format of the INSERT command is
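As a hedged sketch only, such a call can be issued from Python with the requests package; the service URL, column names and values below are placeholders and not the ones used in the cells that follow:
import requests, json
service_url = "http://odatahost:port/path/to/service"   # placeholder for the generated service URL
payload = {"EMPNO": "999999", "LASTNAME": "SMITH", "SALARY": 50000}   # hypothetical columns and values
resp = requests.post(service_url + "/EMPLOYEES",
                     data=json.dumps(payload),
                     headers={"Content-Type": "application/json", "Accept": "application/json"})
print(resp.status_code)   # a created entity is normally reported with HTTP 201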
Step42: We also need to remove the connection information from the system in the event we've run this example before.
Step43: A couple of things about the table design. The salary is NOT NULL, while the BONUS allows for nulls. Unfortunately, the DESCRIBE command only tells us about the columns in the table and their OData data type, and no indication of whether table.
Step44: The initial INSERT will populate the table with valid data. The echo option will show the json document that is sent via the POST command to OData to insert the row.
Step45: Just to make sure things were inserted properly, we retrieve the contents of the table.
Step46: OData (and Db2) will return an error message about our missing SALARY column which requires a value.
Step47: We can try this on the Db2 side as well to get the details of the error.
Step48: Back to Top
<a id='delete'></a>
DELETE Command
The DELETE command only takes one parameter and that is the key value for the record that we want to delete from the table. The format of the command is
Step49: A primary key is required to issue a DELETE command. You also need to make sure that the primary key column does not contain NULLs because a primary key must always contain a value. The following SQL tries to fix the primary key issue.
Step50: Check to see if we can delete the row yet.
Step51: Adding a primary key after the fact won't help because the service URL would have already recorded the information about the table (and the fact it didn't have a primary key at the time). We need to drop our SERVICE URL and generate another one.
Step52: We do a describe on the table and this will force another service URL to be generated for us.
Step53: Trying the DELETE this time will work.
Step54: Deleting the record again still gives you a successful return code. The call always returns a successful status even if the record doesn't exist.
Step55: Back to Top
<a id='update'></a>
UPDATE Command
The update command requires both a primary key to update and the name of the field that you want changed. Note that you can only change one field at a time. There is no ability to specify multiple fields at this time.
The format of the UDPATE command is
Step56: At this point we can update their salary.
Step57: We doublecheck the results to make sure we got it right!
Step58: Back to Top
<a id='views'></a>
Views
The OData implemented with Db2 doesn't allow for JOINS between tables. Sometimes you need to be able to look up information from another table in order to get the final result. One option you have to do this is to create a VIEW on the Db2 system.
The VIEW can contain almost any type of SQL so it allows for very complex queries to be created. For instance, the following view joins the EMPLOYEE table and the DEPARTMENT table to generate a row with the employee name and the name of the department that they work for.
Step59: We also need to drop any service connection you may have created in the past with this table name.
Step60: Now that we have created the view, we can retrieve rows from it just like a standard table.
Step61: You can also create sophisticated VIEWS that can take parameters to adjust the results returned. For instance, consider the following SQL statement which gives me count of employees that work in SYSTEMS departments.
Step62: There are two departments with the name SYSTEMS in them, but there is no easy way to create a view for every possible combination of searches that you may want. Instead what we do is create a table that contains the pattern we want to look for and create the view so that it references this table.
The first step is to create our PATTERN table. Note we make sure it has a primary key so our OData update calls can change it!
Step63: Now we create a view that access this PATTERN table to do the actual search. Note that values that are inserted into the PATTERN table must have the SQL special characters like % to make sure patterns can be anywhere in the string.
Step64: In order for our view to work properly, we must populate our PATTERN table with a value. To test the view we will use %SYSTEMS% as our first example.
Step65: And now we can test our view by selecting from it.
Step66: Now that we have it working, we can try exactly the same thing but with OData. Our first transaction will update the search key to SERVICE.
Step67: The next OData statement should select the count of employees working in service departments. | Python Code:
%run db2odata.ipynb
Explanation: Db2 OData Tutorial
This tutorial will explain some of the features that are available in the IBM Data Server Gateway for OData Version 1.0.0. IBM Data Server Gateway for OData enables you to quickly create OData RESTful services to query and update data in IBM Db2 LUW.
An introduction to the OData gateway is found in the following developerWorks article:
https://www.ibm.com/developerworks/community/blogs/96960515-2ea1-4391-8170-b0515d08e4da/entry/IBM_Data_Server_Gateway_for_OData_Version_1_0_0?lang=en
The code can be obtained through the following link:
https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Information%2BManagement&product=ibm/Information+Management/IBM+Data+Server+Client+Packages&release=11.1.&platform=Linux&function=fixId&fixids=odataFP001&includeSupersedes=0&source=fc
OData Extensions for Db2
In order to help explain some of the features of the OData Gateway, a Jupyter notebook has been created that includes an %odata command that maps Db2 SQL into the equivalent OData syntax. The next command will load the extension and make the command available to this tutorial.
End of explanation
%run db2.ipynb
Explanation: Db2 Extensions
Since we are connecting to a Db2 database, the following command will load the Db2 Jupyter notebook extension (%sql). The Db2 extension allows you to fully interact with the Db2 database, including the ability to drop and create objects. The OData gateway provides INSERT, UPDATE, DELETE, and SELECT capability to the database, but it doesn't have the ability to create or drop actual objects. The other option would be to use Db2 directly on the database server using utilities like CLP (Command Line Processor) or DSM (Data Server Manager).
End of explanation
%sql connect reset
%sql connect
Explanation: <a id='top'></a>
Table of Contents
A Brief Introduction to Odata
<p>
* [Db2 and OData Connection Requirements](#connect)
* [Connecting through OData](#connectodata)
* [Set Command Syntax](#set)
<p>
* [A Quick Introduction](#quick)
* [Selecting Data from a Table](#sampleselect)
* [Displaying the OData Syntax](#viewodata)
* [Limiting Output Results](#limit)
* [Persistent Connection Information](#persistant)
* [Variables in OData Statements](#variables)
* [Retrieving URL, OData Command, and Parameters](#retrieveurl)
* [JSON Display in Firefox](#firefox)
<p>
* [SQL Command Syntax](#sqlsyntax)
* [SELECT Statements](#select)
* [Selecting Columns to Display](#columns)
* [FROM Clause](#from)
* [Describing the Table Structure](#describe)
* [WHERE Clause](#where)
* [LIMIT Clause](#limitclause)
* [INSERT Statement](#insert)
* [DELETE Statement](#delete)
* [UPDATE Statement](#update)
* [VIEWS](#views)
<p>
* [Summary](#summary)
[Back to Top](#top)
<a id='intro'></a>
## An Brief Introduction to OData
Rather than paraphrase what OData does, here is the official statement from the OData home page:
http://www.odata.org/
>OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides
guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests.
### Why OData for Db2?
Customers have a wealth of data in their databases (not just Db2) and publishing it to different devices is often fraught with many challenges. Db2 requires the use of client code to communicate between the application and the database itself. Many of APIs that are used are well known: JDBC, .Net, ODBC, OLE-DB, CLI and so on. Most programming languages have some sort of connector that maps from the language syntax to the database driver. When a new language gets developed it always needs this driver code to talk to the database. For instance, this Python notebook is communicating to Db2 natively using the ibm_db package. Without some specialized coding, there would be no way to communicate with Db2.
OData tries to remove much of the complexity of communicating with the database. There are no drivers required, no configuration file, nor any administration required on the client that is communicating with the database. All communication is done using RESTful API calls, which are available on all browsers and all operating systems. The calls to the database are replaced with standard POST, GET, DELETE, PUT and PATCH requests.
OData goes one step further and removes the syntactical differences between SQL vendors. The INSERT, DELETE, UPDATE and SELECT statements are coverted to a canonical form that should be interpreted by all vendors. Of course, interoperability depends on how much of the standard a vendor has implemented.
The end result is that an Iphone, Andriod phone, tablet, browser or any application will be able to access the database without having any code installed locally. This simplifies the ability to access the database and makes development considerably easier.
The downside to this approach is that the richness of a particular SQL dialect will not be available through OData. Complex SQL with aggregation functions and moving result windows are not a good candidate to use with OData. However, OData covers much of the query spectrum that traditional applications will use, so it makes it a good choice for agile development.
### OData to Db2 Extension
Writing OData calls to Db2 requires a knowledge of the OData syntax, the RESTful calling sequence, and an understanding of the level of support of OData that Db2 provides. This tutorial will take you through all of the functions that the OData gateway currently provides and show how these calls are implemented. Feel free to use the code and extensions in your own applications.
[Back to Top](#top)
<a id='connect'></a>
## Db2 and OData Connection Requirements
Both the Db2 client and OData calls need connection information. The way that you go about connecting to the database is completely different between these two protocols. Let first start with the Db2 connection.
Db2 requires a userid and password to connect to a database (along with the client code that talks to Db2 over the network). Assuming you have a Db2 database somewhere, the next command will ask you for the following information:
* DATABASE name - The name of the Db2 database you want to connect to
* HOST ipaddress - The IP address (or localhost) where the Db2 instance can be found
* PORT portno - The PORT number that Db2 is listening to (usually 50000)
* USER userid - The user that will be connecting to Db2
* PASSWORD pwd - The password for the USER (use a "?" to prompt for the value)
You need to have this information available or the program won't be able to connect. For demonstration purposes, the standard SAMPLE database should be used but in the event you don't have that created, the %sql command will generate the necessary tables for you. It is also good to be a DBADM (database administrator) on the system you are connecting to. This will allow you to create the services requires by the OData gateway. If you don't, someone with that authority will be needed to give you access through OData.
When the next set of commands is issued, the system will prompt you for the information required as well as give you the details for each of the fields.
End of explanation
%sql -sampledata
Explanation: If you connected to the SAMPLE database, you will have the EMPLOYEE and DEPARTMENT tables available to you. However, if you are connecting to a different database, you will need to execute the next command to populate the tables for you. Note, if you run this command and the two tables already exist, the tables will not be replaced. So don't worry if you execute this command by mistake.
End of explanation
%sql SELECT * FROM EMPLOYEE
Explanation: Requesting data from Db2 using the standard %sql (ibm_db) interface is relatively straight-forward. We just need to place the SQL in the command and execute it to get the results.
End of explanation
%odata register
Explanation: Now that we have a working Db2 connection, we will need to set up an OData service to talk to Db2.
Back to Top
<a id='connectodata'></a>
Connecting through OData
Connecting through OData requires a different approach than a Db2 client. We still need to ask a bunch of questions on how we connect to the database, but this doesn't create a connection from the client. Instead what we end up creating is a service URL. This URL gives us access to Db2 through the OData gateway server.
The OData Server take the URL request and maps it to a Db2 resource, which could be one or more tables. The RESTful API needs this URL to communicate with Db2 but nothing else (userids, passwords, etc...) are sent with the request.
The following %odata command will prompt you for the connection parameters, similar to what happened with the Db2 connect. There are a few differences however. The connection requires the userid and password of the user connecting to the database, and the userid and password of a user with administration (DBABM) privileges.
The administrative user creates the service connection that will be used to communicate through the OData gateway and Db2. The regular userid and password is for the actual user that will connect to the database to manipulate the tables. Finally we need to have the schema (or owner) of the tables that will be accessed. From a Db2 perspective, this is similar to connecting to a DATABASE (SAMPLE) as userid FRED. The EMPLOYEE table was created under the userid DB2INST1, so to access the table we need to use DB2INST1.EMPLOYEE. If we didn't include the schema (DB2INST1), the query would fail since FRED was not the owner of the table.
The %odata PROMPT command will request all of the connection parameters and explain what the various fields are. Note: If you have DBADM privileges (and you created the sample tables yourself), you can leave the USERID/PASSWORD/SCHEMA values blank and they will default to the administrative user values.
Back to Top
<a id='set'></a>
End of explanation
%odata RESET TABLE EMPLOYEE
s = %odata -e SELECT lastname, salary from employee where salary > 50000
Explanation: Back to Top
<a id='quick'></a>
A Quick Introduction
The following section will give you a quick introduction to using OData with Db2. More details on syntax and examples are found later on in the notebook.
Selecting Data from a Table
So far all we have done is set up the connection parameters, but no actual connection has been made to Db2, nor has an OData service been created. The creation of a service is done when the first SQL request is issued. The next statement will retrieve the values from our favorite EMPLOYEE table, but use OData to accomplish it.
End of explanation
s = %odata -e SELECT * FROM EMPLOYEE
Explanation: Back to Top
<a id='viewodata'></a>
Displaying the OData Syntax
Under the covers a number of things happened when running this command. The SELECT * FROM EMPLOYEE is not what is sent to OData. The syntax is converted to something that the RESTful API understands. To view the actual OData syntax, the -e option is used to echo back the commands.
End of explanation
%odata select * from unknown_table
Explanation: The results will show the URL service command used (http:// followed by details of the host location and service ID) and the OData command. In this case the command should be /EMPLOYEES. This may seem like a spelling mistake, but the OData service creates a mapping from the database table (EMPLOYEE) to a service request. To give the service request a unique name, the letter "S" is appended to the table name. Do not confuse the service
name with the table name. That can sometimes lead to coding errors!
If we tried to request a table that didn't exist in the database, we would get an error message instead.
End of explanation
%sql select * from unknown_table
Explanation: One drawback of OData is that we don't get the actual error text returned. We know what the error code is, but the message isn't that descriptive. Using the %sql (Db2) command, we can find out that the table doesn't exist.
End of explanation
s = %odata -e -j SELECT * FROM EMPLOYEE LIMIT 1
Explanation: Back to Top
<a id='limit'></a>
Limiting Output Results
The results contain 43 rows. If you want to reduce the number of rows being returned, you can use the LIMIT clause on the SELECT statement. In addition, you can use the -j flag to return the data as JSON records.
End of explanation
%odata \
RESET \
DATABASE {odata_settings['database']} \
SCHEMA {odata_settings['schema']} \
TABLE EMPLOYEE
Explanation: To limit the results from a OData request, you must add the \$top=x modifier at the end of the service request. The format then becomes:
<pre>
\[service url\]/\[service name\]?$top=value
</pre>
You will notice that the OData syntax requires that a "?" be placed after the name of the service. In our example, EMPLOYEES is the name of the service that accesses the EMPLOYEE table. We add the ? after the end of the service name and then add the $top modifier. If there were multiple modifiers, each one must be separated with an ampersand (&) symbol.
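For example, a request that returns at most 5 rows and only two columns would combine the $top modifier with the $select query option (described later in this notebook), separated by an ampersand. This is only an illustration of the URL syntax; substitute the service URL generated for your own connection:
<pre>
\[service url\]/EMPLOYEES?$top=5&$select=LASTNAME,SALARY
</pre>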
Back to Top
<a id='persistant'></a>
Persistent Connection Information
What you should have received when running the previous command was a single JSON record, the service URL and the OData command. The URL will be identical to the one in the previous %odata request. There is no need to recreate a service if you are using the same table. The program created a new service when you did a SELECT for the first time. After that it keeps the service information in a file called TABLE@[email protected] in the directory where the Jupyter notebook is running. If you try this statement at another time, this service URL will be retrieved from this file rather than creating another service.
Dropping a Connection
If you want to delete the connection information, use the RESET command with the database, schema, and table name in it. This doesn't drop the object or anything associated with the table. All this does is remove the service information from your system. It also does not remove the service from the OData gateway.
End of explanation
%odata settings
Explanation: The last example illustrates two additional features of the %odata command. First, you can span statements over multiple lines by using the backslash character ('\'). Second, you can substitute local Python variables into the command by surrounding them with braces {}, which is covered in more detail below. You could also use the %%odata command to span lines without backslashes, but it unfortunately will not allow for variable substitution. The current settings being used by OData can be found by issuing the SETTINGS command.
You can specify the command with only the TABLE option and it will take the current DATABASE and SCHEMA names from any prior settings.
End of explanation
%odata set DATABASE {odata_settings['database']} SCHEMA {odata_settings['schema']}
Explanation: You can also refer to these values by using the odata_settings['name'] variables. So the RESET statement just took the current DATABASE and SCHEMA settings and deleted the definition for the EMPLOYEE table. You could have done this directly with:
<pre>
RESET DATABASE SAMPLE SCHEMA DB2INST1 TABLE EMPLOYEE
</pre>
The list of settings and their variable names are listed below.
| Setting | Variable name
|:--------------|:---------------------
| DATABASE | odata_settings['database']
| SCHEMA | odata_settings['schema']
| ADMIN | odata_settings['admin']
| A_PWD | odata_settings['a_pwd']
| USER | odata_settings['userid']
| U_PWD | odata_settings['u_pwd']
| HOST | odata_settings['host']
| PORT | odata_settings['port']
| MAXROWS | odata_settings['maxrows']
Back to Top
<a id='variables'></a>
Variables in %OData Statements
To use local Jupyter/Python variables in a notebook, all you need to do is place braces {} around the name of the variable. Before we illustrate this, we need to create another connection (since we just dropped it in the last example). Fortunately, none of the settings have been removed, so we still have the connection information (DATABASE, SCHEMA, ...) available.
In the event you have closed the notebook and started up from scratch, there is no need to do a full connect command (or prompt). The settings are automatically written to disk and then restored when you start up another session. If you want to connect to another database then you will need to use the following SET statement.
End of explanation
u = %odata -e select * from employee limit 1
Explanation: And this command will show the connection service being created for us.
End of explanation
url = %odata -e select * from employee limit 1
Explanation: Back to Top
<a id='retrieveurl'></a>
Retrieving URL, OData Command, and Parameters
The %odata command will return the URL command for a select statement as part of the command:
<pre>
<url> = %odata -e select * from employee limit 1
</pre>
The variable "url" will contain the full URL required to retrieve data from the OData service. The next command illustrates how this works. You must use the echo (-e) option to get the URL returned. Note that you cannot use this syntax with the %%odata version of the command.
End of explanation
print(url)
Explanation: You can use this URL to directly access the results through a browser, or any application that can read the results returned by the OData gateway. The print statement below will display the URL as an active link. Click on that to see the results in another browser window.
End of explanation
%odata delete
Explanation: When a URL is generated, we need to append the \$format=json tag at the end to tell the OData service and the browser how to handle the results. When we run OData and RESTful calls from a programming language (like Python), we are able to send information in the header which tells the API how to handle the results and parameters. All of the RESTful calls to the OData gateway use the following header information:
<pre>
{
"Content-Type":"application/json",
"Accept":"application/json"
}
</pre>
<br>When we send the URL to the OData gateway, it needs to be told how to return the information. We need to append the $format=json flag at the end of our query when sending the request via a browser. Note that the ampersand must be appended to the end of the existing URL since we already have one parameter in it.
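As a sketch of how the same request could be issued from Python code instead of a browser (this assumes the url variable captured earlier and that the requests package is installed; the exact layout of the returned JSON may differ):
import requests
# The header values are the same ones listed above.
headers = {"Content-Type":"application/json", "Accept":"application/json"}
# url was produced earlier by: url = %odata -e select * from employee limit 1
response = requests.get(url, headers=headers)
print(response.status_code)
print(response.json())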
Back to Top
<a id='firefox'></a>
JSON Display in Firefox
Depending on what version of Firefox you have, you may not get the JSON to be displayed very nicely. To use the built-in JSON formatter, issue the following commands in a separate browser window:
<pre>
about:config
Search for devtools.jsonview.enabled
</pre>
<br>Right click on the jsonview setting and enable it. This will result in the JSON being easier to view.
Back to Top
<a id='sqlsyntax'></a>
SQL Command Syntax
The %odata command has been designed to translate the SQL syntax for INSERT, DELETE, UPDATE, and SELECT into an equivalent OData format. There are very specific ways of requesting data from OData, so this ends up placing some limitations on what SQL you can use. This section will cover the four major SQL commands and how they can be used with OData. If you need the syntax for a particular SQL command, just enter the command name by itself on the %odata line and it will give you a brief summary of the syntax. Here is the DELETE help.
End of explanation
s = %odata -e SELECT * FROM EMPLOYEE
Explanation: Back to Top
<a id='select'></a>
SELECT Statements
The SELECT statement is the most complicated of the four statements that are allowed in OData. There are generally two forms that can be used when accessing a record. The first method uses the primary key of the table and it requires no arguments. Note that the examples will not show the URL that points to the OData service.
<pre>
/EMPLOYEES('000010')
</pre>
The second method is to use the \$filter query option. \$filter allows us to compare any column against a value. The equivalent OData statement for retrieving an individual employee is:
<pre>
/EMPLOYEES?$filter=EMPNO eq '000010'
</pre>
The generated SELECT statements will always use this format, rather than relying on a primary key. This becomes more important when we deal with Views.
SELECT Syntax
The SELECT command will return data from one table. There is no ability to join tables with the current implementation of OData. If you do want to join tables, you may want to create a VIEW on the Db2 system and then use that as the TABLE. This will allow for SELECT, but no INSERT/DELETE/UPDATE.
You do not need to use the primary key in the WHERE clause to use this statement. By default, any results will be displayed in a table. If you want to retrieve the results as JSON records, use the -j option on the %odata command.
<pre>
SELECT \[col1, col2, ... | count(\*)\] FROM <table> \[ WHERE logic\] \[ LIMIT rows \]
</pre>
The column list can contain as many values as you want, or just COUNT(*). COUNT(*) will return the count of rows found. If you use the -r or -j flags to display everything in JSON format, you will also get the entire answer set along with the row count. This is the behavior of using count in OData.
The FROM clause must contain the name of the table you want to access.
The WHERE clause is optional, as is the LIMIT clause. The WHERE clause can contain comparisons between columns and constants (EMPNO='000010'), logic (AND, OR) as well as LIKE clauses (COLUMN LIKE 'xxx'). The current version cannot use arithmetic operators (+, -, *, /) or the NOT operator.
The LIMIT clause will restrict the results to "x" number of rows. So even if there are 500 rows that meet the answer set, only "x" rows will be returned to the client.
Example: Select statement with no logic
The following SELECT statement will retrieve all of the data from the EMPLOYEE table.
End of explanation
%odata set maxrows 10
Explanation: You will notice that not all of the rows have been displayed. The output has been limited to 10 lines. 5 lines from the start of the answer set and 5 lines from the bottom of the answer set are displayed. If you want to change the maximum number of rows to be displayed, use the MAXROWS setting.
End of explanation
%odata set maxrows -1
%odata select * from employee
Explanation: If you want an unlimited number of rows returned, set maxrows to -1.
End of explanation
%odata set maxrows 10
Explanation: It is better to limit the results from the answer set by using the LIMIT clause in the SELECT statement. LIMIT will force Db2 to stop retrieving rows after "x" number have been read, while the MAXROWS setting will retrieve all rows and then only display a portion of them. The one advantage of MAXROWS is that you see the bottom 5 rows while you would only be able to do that with Db2 if you could reverse sort the output. The current OData implementation does not have the ability to $orderby, so sorting to reverse the output is not possible.
End of explanation
s = %odata -e SELECT * FROM EMPLOYEE LIMIT 5
Explanation: Example: Select statement limiting output to 5 rows
This SELECT statement will limit output to 5 rows. If MAXROWS was set to a smaller value, it would still read all rows before displaying them.
End of explanation
s = %odata -e SELECT FIRSTNME, LASTNAME FROM EMPLOYEE LIMIT 5
Explanation: Back to Top
<a id='columns'></a>
Selecting Columns to Display
OData allows you to select which columns to display as part of the output. The $select query option requires a list of columns to be passed to it. For instance, the following SQL will only display the first name and last name of the top five employees.
Example: Limiting the columns to display
The column list must only include columns from the table and cannot include any calculations like SALARY+BONUS.
End of explanation
s = %odata -e SELECT COUNT(*) FROM EMPLOYEE LIMIT 1
Explanation: The COUNT(*) function is available as part of a SELECT list and it cannot include any other column names. If you do include other column names they will be ignored.
End of explanation
s = %odata -e -r SELECT COUNT(*) FROM EMPLOYEE LIMIT 5
Explanation: One of the unusual behaviors of the COUNT(*) function is that it will actually return the entire answer set under the covers. The %odata command strips the count out from the results and doesn't display the rows returned. That is probably not what you would expect from this syntax! The COUNT function is better described as the count of physical rows returned. Here is the same example with 5 rows returned and the JSON records.
End of explanation
s = %odata -e -r SELECT COUNT(EMPNO) FROM EMPLOYEE LIMIT 5
Explanation: One recommendation would be not to use the COUNT(*) function to determine the number of rows that will be retrieved, especially if you expect there to be a large number of rows. To minimize the data returned, you can use the form COUNT(column), which will modify the OData request to return the count and ONLY that column in the result set. This is a compromise in terms of the amount of data returned. This example uses the -r (raw) flag, which results in all of the JSON headers and data being displayed. The JSON flag (-j) will not display any records.
End of explanation
%sql -q DROP TABLE UNKNOWN_TBL
%odata RESET TABLE UNKNOWN_TBL
s = %odata -e SELECT * FROM UNKNOWN_TBL
Explanation: Back to Top
<a id='from'></a>
FROM Clause
The FROM clause is mandatory in any SELECT statement. If an OData service has already been established, there will be no service request sent to OData. Instead, the URL information stored on disk will be used to establish the connection.
If a service has not been established, the %odata command will create the service and then build the OData select statement. If you want to see the command to establish the service as well as the SELECT command, use the -e flag to echo the results.
If the table does not exist in the database you will receive an error message.
End of explanation
%sql CREATE TABLE UNKNOWN_TBL AS (SELECT * FROM EMPLOYEE) WITH DATA
Explanation: This actually can cause some issues if you try to reuse the connection information that was created with the UNKNOWN_TBL. Since the service could not determine the structure of the table, the service will not return any column information with a select statement. The next SQL statement will create the UNKNOWN_TBL.
End of explanation
s = %odata -e SELECT * FROM UNKNOWN_TBL
Explanation: Retrying the SELECT statement will result in 43 rows with no columns returned!
End of explanation
%odata RESET TABLE UNKNOWN_TBL
Explanation: To correct this situation, you need to drop the connection that the %odata program is using (with RESET) and reissue the SELECT statement.
End of explanation
s = %odata -e SELECT * FROM UNKNOWN_TBL
Explanation: Now you can try the SQL statement again.
End of explanation
%odata DESCRIBE EMPLOYEE
Explanation: Back to Top
<a id='describe'></a>
Describing the Table Structure
The SELECT statement needs to know what columns are going to be returned as part of the answer set. The asterisk (*) returns all of the columns, but perhaps you only want a few of the columns. To determine what the columns are in the table along with the data types, you can use the DESCRIBE command. The following statement will show the structure of the EMPLOYEE table.
End of explanation
s = %odata -e SELECT EMPNO, WORKDEPT, SALARY FROM EMPLOYEE WHERE SALARY < 40000
Explanation: The datatypes are not the same as what one would expect from a relational database. You get generic information on the character columns (String), and the numbers (Int16, Decimal). The Decimal specification actually contains the number of digits and decimal places but that isn't returned when using the table display.
| Data Type | Contents
|:-----------|:---------------
| Binary | Binary data
| Boolean | Binary-valued logic
| Byte | Unsigned 8-bit integer
| Date | Date without a time-zone offset
| Decimal | Numeric values with fixed precision and scale
| Double | IEEE 754 binary64 floating-point number (15-17 decimal digits)
| Duration | Signed duration in days, hours, minutes, and (sub)seconds
| Guid | 16-byte (128-bit) unique identifier
| Int16 | Signed 16-bit integer
| Int32 | Signed 32-bit integer
| Int64 | Signed 64-bit integer
| SByte | Signed 8-bit integer
| Single | IEEE 754 binary32 floating-point number (6-9 decimal digits)
| String | Sequence of UTF-8 characters
| TimeOfDay | Clock time 00:00-23:59:59.999999999999
Back to Top
<a id='where'></a>
WHERE Clause
The WHERE clause is used to filter out the rows that you want to retrieve from the table. The WHERE clause allows the following operators:
>, =>, <, <=, =, !=, <>, LIKE
AND, OR
Parenthesis to override order () of operators
The WHERE clause does not allow for mathematical operators at this time (*, -, +, /) or the unary NOT or "-" operators.
The LIKE clause can contain the special % character, but the equivalent OData syntax always searches the entire string and does not anchor at the beginning of the string. What this means is that the LIKE clause will turn into a search of the entire string whether you use the % character in your search string or not.
Example: Single comparison
The following select statement will search for employees who have a salary less than 40000.
End of explanation
s = %odata -e SELECT EMPNO, WORKDEPT, SALARY FROM EMPLOYEE WHERE SALARY < 40000 AND WORKDEPT = 'E21'
Explanation: Example: Two comparisons in a WHERE clause
We add an additional comparison to our SQL to check for only employees in a particular department.
End of explanation
s = %odata -e \
SELECT EMPNO, WORKDEPT, SALARY \
FROM EMPLOYEE \
WHERE SALARY < 40000 AND WORKDEPT = 'E21' OR WORKDEPT = 'E11'
Explanation: Example: OR Logic in the WHERE clause
We add some additional complexity by requesting employees who are in department E11 as well as those who make less than 40000 and work in department E21.
End of explanation
s = %odata -e \
SELECT EMPNO, WORKDEPT, SALARY \
FROM EMPLOYEE \
WHERE SALARY < 40000 AND (WORKDEPT = 'E21' OR WORKDEPT = 'E11')
Explanation: Example: Overriding the order of comparisons
You can override the order of comparisons in the WHERE clause by using parenthesis. Here we are asking for employees in department E21 or E11 and have a salary less than 40000.
End of explanation
s = %odata -e SELECT LASTNAME FROM EMPLOYEE WHERE LASTNAME LIKE '%AA%'
Explanation: Example: Using a LIKE clause
The LIKE clause in Db2 will look for a string within a character column. Normally the LIKE statement will allow for the use of the % (wildcard) and _ (one character match) operators to look for patterns. These special characters do not exist in OData, so the %odata command will remove the % character and convert it to an equivalent OData statement. What this means is that the string search will look at the entire string for the pattern, while LIKE can be anchored to look only at the beginning of the string. This capability does not current exist with the current OData implementation.
Example: Search for a lastname that has 'AA' in it.
This SQL will look for a lastname that has the string 'AA' in it.
End of explanation
%sql SELECT LASTNAME FROM EMPLOYEE WHERE LASTNAME LIKE '%ON'
Explanation: Example: Beginning of string search
In SQL, you can search for a name that ends with the letters ON by using LIKE '%ON'
End of explanation
s = %odata -e SELECT LASTNAME FROM EMPLOYEE WHERE LASTNAME LIKE '%ON'
Explanation: Converting to OData will mean that the search will look across the entire string, not just the beginning.
End of explanation
s = %odata -e SELECT * FROM EMPLOYEE LIMIT 5
Explanation: Back to Top
<a id='limitclause'></a>
Limit Clause
The LIMIT clause was discussed earlier in this notebook. LIMIT allows you to reduce the number of rows that are returned in the answer set. The LIMIT clause is similar to FETCH FIRST x ROWS ONLY in Db2. The rows are always taken from the beginning of the answer set so there is no way to skip "x" rows before getting results. The facility does exist in the OData specification but has not been implemented in this release.
The LIMIT clause also works in conjunction with the %odata command. The number of rows that are displayed in a table (result set) is set to 10 by default. So if you have 50 rows in your answer set, the first 5 and the last 5 are displayed, with the rows in between hidden from view. If you want to see the entire answer set, you need to change the MAXROWS value to -1:
<pre>
%odata SET MAXROWS -1
</pre>
This will display all rows that are returned from the answer set. However, the number of rows actually returned in the answer set will be determined by the LIMIT clause. If you set LIMIT 5 then only five rows will be returned no matter what MAXROWS is set to. On the other hand, if you set MAXROWS to 10 and LIMIT to 20, you will get 20 rows returned but only 10 will be displayed.
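For comparison, the equivalent restriction on the Db2 side can be expressed through the %sql interface with FETCH FIRST (shown only as a cross-check of what LIMIT does):
%sql SELECT * FROM EMPLOYEE FETCH FIRST 5 ROWS ONLY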
Example: Limit result to 5 rows
This SQL will retrieve only the top 5 rows of the EMPLOYEE table.
End of explanation
%%sql -q
DROP TABLE TESTODATA;
CREATE TABLE TESTODATA
(
EMPNO INT NOT NULL,
LASTNAME VARCHAR(10) NOT NULL,
SALARY INT NOT NULL,
BONUS INT
);
%sql select * from testodata
%odata -e select * from testodata
Explanation: Back to Top
<a id='insert'></a>
INSERT Command
OData allows you to insert data into a table through the use of the RESTful POST command and a JSON document that contains the field names and contents of those fields.
The format of the INSERT command is:
<pre>
INSERT INTO <table>(col1, col2, ...) VALUES (val1, val2, ...)
</pre>
The TABLE must be defined before you can issue this statement. There is no requirement to have a primary key on the table, but without one you will not be able to update or delete rows through the OData interface, because UPDATEs and DELETEs can only target a row by its primary key (general WHERE filtering is not allowed for them). The column list and value list must match (i.e. there must be a value for every column name). If you do not supply the list of all columns in the table, the missing columns will have null values assigned to them. The insert will fail if any of these missing columns requires a value (NOT NULL).
Example: Insert into a table
In this example we will insert a single row into a table. We start by defining the table within Db2 and then doing a DESCRIBE to get the column definitions back with OData.
End of explanation
%odata RESET TABLE TESTODATA
Explanation: We also need to remove the connection information from the system in the event we've run this example before.
End of explanation
%odata -e DESCRIBE TESTODATA
Explanation: A couple of things about the table design. The salary is NOT NULL, while the BONUS allows for nulls. Unfortunately, the DESCRIBE command only tells us about the columns in the table and their OData data type, and gives no indication of whether a column allows null values.
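If you do need the nullability details, one option is to query the Db2 catalog directly through the %sql interface (an illustrative query against the standard SYSCAT.COLUMNS view; add a TABSCHEMA predicate if the table name is not unique):
%sql SELECT COLNAME, TYPENAME, NULLS FROM SYSCAT.COLUMNS WHERE TABNAME = 'TESTODATA'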
End of explanation
%odata -e INSERT INTO TESTODATA(EMPNO, LASTNAME, SALARY, BONUS) VALUES (1,'Fred',10000,1000)
Explanation: The initial INSERT will populate the table with valid data. The echo option will show the json document that is sent via the POST command to OData to insert the row.
End of explanation
%odata SELECT * FROM TESTODATA
Explanation: Just to make sure things were inserted properly, we retrieve the contents of the table.
End of explanation
%odata -e INSERT INTO TESTODATA(EMPNO, LASTNAME, BONUS) VALUES (2,'Wilma',50000)
Explanation: OData (and Db2) will return an error message about our missing SALARY column which requires a value.
End of explanation
%sql INSERT INTO TESTODATA(EMPNO, LASTNAME, BONUS) VALUES (2,'Wilma',50000)
Explanation: We can try this on the Db2 side as well to get the details of the error.
End of explanation
%odata -e DELETE FROM TESTODATA WHERE EMPNO=1
Explanation: Back to Top
<a id='delete'></a>
DELETE Command
The DELETE command only takes one parameter and that is the key value for the record that we want to delete from the table. The format of the command is:
<pre>
DELETE FROM <table> WHERE KEY=VALUE
</pre>
Key refers to the column that is the primary key in the table we are deleting from. Unless you have a primary key, the DELETE command will not work.
End of explanation
%sql ALTER TABLE TESTODATA ADD CONSTRAINT PKTD PRIMARY KEY (EMPNO)
Explanation: A primary key is required to issue a DELETE command. You also need to make sure that the primary key column does not contain NULLs because a primary key must always contain a value. The following SQL tries to fix the primary key issue.
End of explanation
%odata -e DELETE FROM TESTODATA WHERE EMPNO=1
Explanation: Check to see if we can delete the row yet.
End of explanation
%odata RESET TABLE TESTODATA
Explanation: Adding a primary key after the fact won't help because the service URL would have already recorded the information about the table (and the fact it didn't have a primary key at the time). We need to drop our SERVICE URL and generate another one.
End of explanation
%odata DESCRIBE TESTODATA
Explanation: We do a describe on the table and this will force another service URL to be generated for us.
End of explanation
%odata -e DELETE FROM TESTODATA WHERE EMPNO=1
Explanation: Trying the DELETE this time will work.
End of explanation
%odata -e DELETE FROM TESTODATA WHERE EMPNO=2
Explanation: Deleting the record again still gives you a successful return code. The call always returns a successful status even if the record doesn't exist.
End of explanation
%odata -e \
INSERT INTO TESTODATA(EMPNO, LASTNAME, SALARY, BONUS) \
VALUES (1,'Fred',10000,1000)
Explanation: Back to Top
<a id='update'></a>
UPDATE Command
The update command requires both a primary key to update and the name of the field that you want changed. Note that you can only change one field at a time. There is no ability to specify multiple fields at this time.
The format of the UPDATE command is:
<pre>
UPDATE <table> SET column=value WHERE key=keyvalue
</pre>
You must have a primary key on the table if you want an update to work. The WHERE clause is used only to specify the primary key of the row; no other filtering is allowed. The primary key can be changed in the statement, but the update will fail if the new key already exists in another record.
The other restriction is that no calculations can be done as part of the SET clause. You can only pass atomic values to the UPDATE statement.
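If you do need to change more than one column, a simple workaround is to issue one UPDATE statement per column, for example (the values here are only an illustration of the pattern):
%odata UPDATE TESTODATA SET SALARY=20000 WHERE EMPNO=1
%odata UPDATE TESTODATA SET BONUS=3000 WHERE EMPNO=1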
Example: Update a BONUS value of employee
This SQL will update employee number 1's bonus to 2000. The first step is to put the employee back into the table.
End of explanation
%odata -e UPDATE TESTODATA SET BONUS=2000 WHERE EMPNO=1
Explanation: At this point we can update their bonus.
End of explanation
%odata SELECT * FROM TESTODATA
Explanation: We doublecheck the results to make sure we got it right!
End of explanation
%%sql
CREATE OR REPLACE VIEW EMPDEPT AS
(
SELECT LASTNAME, DEPTNAME
FROM EMPLOYEE E, DEPARTMENT D
WHERE E.WORKDEPT = D.DEPTNO
)
Explanation: Back to Top
<a id='views'></a>
Views
The OData implementation with Db2 doesn't allow for JOINs between tables. Sometimes you need to be able to look up information from another table in order to get the final result. One option you have to do this is to create a VIEW on the Db2 system.
The VIEW can contain almost any type of SQL so it allows for very complex queries to be created. For instance, the following view joins the EMPLOYEE table and the DEPARTMENT table to generate a row with the employee name and the name of the department that they work for.
End of explanation
%odata RESET TABLE EMPDEPT
Explanation: We also need to drop any service connection you may have created in the past with this table name.
End of explanation
%odata SELECT LASTNAME, DEPTNAME FROM EMPDEPT LIMIT 5
Explanation: Now that we have created the view, we can retrieve rows from it just like a standard table.
End of explanation
%%sql
SELECT
COUNT(*)
FROM
EMPLOYEE E, DEPARTMENT D
WHERE
E.WORKDEPT = D.DEPTNO
AND D.DEPTNAME LIKE '%SYSTEMS%'
Explanation: You can also create sophisticated VIEWS that can take parameters to adjust the results returned. For instance, consider the following SQL statement, which gives the count of employees that work in SYSTEMS departments.
End of explanation
%%sql -q
DROP TABLE PATTERN;
CREATE TABLE PATTERN
(
PATTERN_NUMBER INT NOT NULL PRIMARY KEY,
SEARCH VARCHAR(16)
);
Explanation: There are two departments with the name SYSTEMS in them, but there is no easy way to create a view for every possible combination of searches that you may want. Instead what we do is create a table that contains the pattern we want to look for and create the view so that it references this table.
The first step is to create our PATTERN table. Note we make sure it has a primary key so our OData update calls can change it!
End of explanation
%odata RESET TABLE EMPDEPT
%odata RESET TABLE PATTERN
%%sql
CREATE OR REPLACE VIEW EMPDEPT AS
(
SELECT
COUNT(*) AS COUNT
FROM
EMPLOYEE E, DEPARTMENT D
WHERE
E.WORKDEPT = D.DEPTNO
AND D.DEPTNAME LIKE
(
SELECT SEARCH FROM PATTERN WHERE PATTERN_NUMBER=1
)
);
Explanation: Now we create a view that accesses this PATTERN table to do the actual search. Note that values inserted into the PATTERN table must include the SQL special characters (like %) if you want the pattern to match anywhere in the string.
End of explanation
%sql INSERT INTO PATTERN VALUES(1,'%SYSTEMS%')
Explanation: In order for our view to work properly, we must populate our PATTERN table with a value. To test the view we will use %SYSTEMS% as our first example.
End of explanation
%sql SELECT * FROM EMPDEPT
Explanation: And now we can test our view by selecting from it.
End of explanation
%odata UPDATE PATTERN SET SEARCH = '%SERVICE%' WHERE PATTERN_NUMBER = 1
Explanation: Now that we have it working, we can try exactly the same thing but with OData. Our first transaction will update the search key to SERVICE.
End of explanation
%odata SELECT * FROM EMPDEPT
Explanation: The next OData statement should select the count of employees working in service departments.
End of explanation |
3,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.
Step1: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
Step2: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
Step3: We want to know how to orient NuSTAR for the Sun.
We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates).
This is what you tell the SOC you want the "Sky PA angle" to be.
Step4: Set up the offset you want to use here
Step5: Loop over each orbit and see what the difference between the two methods is
Note that you may want to update the pointing for solar rotation. That's up to the user to decide and is not done here.
Looks like a fixed shift...probably some time-ephemeris issue.
Step6: Okay, now check to see what the parallax does in each orbit.
Compare Astropy/Sunpy to what you get when you correct for the orbital parallax. Every step below is 100 seconds. | Python Code:
fname = io.download_occultation_times(outdir='../data/')
print(fname)
Explanation: Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.
End of explanation
tlefile = io.download_tle(outdir='../data')
print(tlefile)
times, line1, line2 = io.read_tle_file(tlefile)
Explanation: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
End of explanation
tstart = '2017-07-18T12:00:00'
tend = '2017-07-18T20:00:00'
orbits = planning.sunlight_periods(fname, tstart, tend)
Explanation: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
End of explanation
pa = planning.get_nustar_roll(tstart, 0)
print("NuSTAR Roll angle for Det0 in NE quadrant: {}".format(pa))
Explanation: We want to know how to orient NuSTAR for the Sun.
We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates).
This is what you tell the SOC you want the "Sky PA angle" to be.
End of explanation
offset = [-190., -47.]*u.arcsec
Explanation: Set up the offset you want to use here:
The first element is the direction +WEST of the center of the Sun, the second is the offset +NORTH of the center of the Sun.
If you want multiple pointing locations you can either specify an array of offsets or do this "by hand" below.
End of explanation
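If you do want an array of offsets, a minimal sketch might look like the following (the offset values are made up for illustration, and the helper call mirrors the ones used elsewhere in this notebook):
# Hypothetical list of pointing offsets (+WEST, +NORTH of Sun center).
offsets = [[-190., -47.], [190., -47.], [0., 100.]] * u.arcsec
for pointing in offsets:
    # Use the start of the first orbit as an example aim time.
    print(planning.get_skyfield_position(orbits[0][0], pointing, load_path='../data'))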
from astropy.coordinates import SkyCoord
for ind, orbit in enumerate(orbits):
midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])
sky_pos = planning.get_sky_position(midTime, offset)
print("Orbit: {}".format(ind))
print("Orbit start: {} Orbit end: {}".format(orbit[0].isoformat(), orbit[1].isoformat()))
print('Aim time: {} RA (deg): {} Dec (deg): {}'.format(midTime.isoformat(), sky_pos[0], sky_pos[1]))
skyfield_pos = planning.get_skyfield_position(midTime, offset, load_path='../data')
print('SkyField Aim time: {} RA (deg): {} Dec (deg): {}'.format(midTime.isoformat(), skyfield_pos[0], skyfield_pos[1]))
skyfield_ephem = SkyCoord(skyfield_pos[0], skyfield_pos[1])
sunpy_ephem = SkyCoord(sky_pos[0], sky_pos[1])
print("")
print("Offset between SkyField and Astropy: {} arcsec".format(skyfield_ephem.separation(sunpy_ephem).arcsec))
print("")
Explanation: Loop over each orbit and see what the difference between the two methods is
Note that you may want to update the pointing for solar rotation. That's up to the user to decide and is not done here.
Looks like a fixed shift...probably some time-ephemeris issue.
End of explanation
from astropy.coordinates import SkyCoord
from datetime import timedelta
for ind, orbit in enumerate(orbits):
midTime = orbit[0]
while(midTime < orbit[1]):
sky_pos = planning.get_sky_position(midTime, offset)
skyfield_pos = planning.get_skyfield_position(midTime, offset, load_path='../data', parallax_correction=True)
skyfield_geo = planning.get_skyfield_position(midTime, offset, load_path='../data', parallax_correction=False)
skyfield_ephem = SkyCoord(skyfield_pos[0], skyfield_pos[1])
skyfield_geo_ephem = SkyCoord(skyfield_geo[0], skyfield_geo[1])
# sunpy_ephem = SkyCoord(sky_pos[0], sky_pos[1])
print('Offset between parallax-corrected positions and geoenctric is {} arcsec'.format(
skyfield_geo_ephem.separation(skyfield_ephem).arcsec)
)
dra, ddec = skyfield_geo_ephem.spherical_offsets_to(skyfield_ephem)
print('{0} delta-RA, {1} delta-Dec'.format(dra.to(u.arcsec), ddec.to(u.arcsec)))
print('')
midTime += timedelta(seconds=100)
break
Explanation: Okay, now check to see what the parallax does in each orbit.
Compare Astropy/Sunpy to what you get when you correct for the orbital parallax. Every step below is 100 seconds.
End of explanation |
3,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 1
Step1: Import the Servo class. It is needed for creating the Servo objects
Step2: Import the IPython 3 interact function. It is needed for creating the Interactive slider that moves the servo
Step3: Open the serial port
Change the serial device name. In linux, by default, it is /dev/ttyUSB0. In Windows, it should be COM1, COM2 ...
Step4: Create a servo object. It is linked to the serial port already opened
Step5: Interactive widget for moving the servo
Step6: Example of a simple servo sequence generation | Python Code:
from serial import Serial
Explanation: Example 1: Moving one servo connected to the zum bt-328 board
Introduction
This example shows how to move one servo using the interactive IPython 3.0 widgets from Jupyter (known before as ipython notebooks). This notebook is only a "hello world" example, but it opens the door to control small servo-driven robots from Jupyter. It is quite useful when researching on gait generation. The locomotion algorithms can be tested easily on real robots
The bq zum BT-328 board is compatible with arduino, so this example can also be tested with arduino boards
Stuff needed
IPython 3.0 installed
Python pyserial library installed
A web browser
One Futaba 3003 servo (or any other compatible)
Control board: bq zum BT-328 board or any other arduino compatible board
USB cable
Steps
1 Install Ipython 3.0
Read these instructions for installing the latest Ipython 3.0. Once installed, check that you alredy have installed the 3.0 version. Open the console and type ipython --version:
ipython --version
3.0.0
2 Install the Pyserial python library
Read These instructions
Depending on your OS you may chose different methods:
Ubuntu: sudo apt-get install python-serial
Anaconda environment: conda install pyserial
PyPI: pip install pyserial:
3 Download the zum servos project
Download or clone the zum-servos github repo
It contains the firmware, python classes and example notebooks for moving the servos
4 Hardware connections
Connect the Servo to the PIN 9 of the zum / arduino board
Connect the board to the computer with the USB cable
5 Upload the zum-servos firmware into the board
Open the zum_servos_fw.ino firmware with the arduino IDE and upload it into the zum / arduino board. It is located in the folder: zum-servos/firmware/zum_servos_fw
6 Launch the Ipython notebook
Launch the Ipython notebook from the zum-servos/python folder. Open the zum-servos-example1 notebook (this one you are reading :-)
7 Configure the serial port
In the example python code, change the serial port name where your board is connected. In Linux the default name is /dev/ttyUSB0. In Windows: COM1, COM2...
8 run the notebook
Run the notebook and enjoy moving the servo from the bottom slider!!!
How it works
The python class Servo communicates with the zum-servos firmware over the USB serial line. The Servo objects have the method set_pos(ang) to set the servo position. When this method is invoked, a command is sent to the zum board by serial communication. The firmware processes it and moves the servo to the given position
The code is quite simple. First the serial port is opened (Important: baud rate should be set to 19200). Then a servo object is created. Finally the set_pos() method of the servo is called by the interactive Ipython 3.0 function to display the slider
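You can also move the servo by calling the method directly, without the widget (a minimal illustration that assumes the servo object a created in the cells below):
a.set_pos(45)  # move the servo to 45 degrees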
The python code
Import the Serial class. This is needed for opening the serial port.
End of explanation
from Servo import Servo
Explanation: Import the Servo class. It is needed for creating the Servo objects
End of explanation
from IPython.html.widgets import interact
Explanation: Import the IPython 3 interact function. It is needed for creating the Interactive slider that moves the servo
End of explanation
sp = Serial("/dev/ttyUSB0", 19200)
Explanation: Open the serial port
Change the serial device name. In linux, by default, it is /dev/ttyUSB0. In Windows, it should be COM1, COM2 ...
End of explanation
a = Servo(sp, dir = 'a')
Explanation: Create a servo object. It is linked to the serial port already opened
End of explanation
w1 = interact(a.set_pos, pos = (-90, 90))
Explanation: Interactive widget for moving the servo
End of explanation
import time
#-- Sequence of angles
seq = [40, 0, 20, -40, -80, 0]
#-- Repeat the sequence n times
for n in range(2):
for ang in seq:
a.pos = ang
time.sleep(0.8)
Explanation: Example of a simple servo sequence generation
End of explanation |
3,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Preparing data and model
The EMNIST data processing and model are very similar to the simple_fedavg example.
Step6: Custom iterative process
In many cases, federated algorithms have 4 main components
Step7: TFF blocks
Step8: Evaluating the algorithm
We evaluate the performance on a centralized evaluation dataset. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import functools
import attr
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithm_with_tff_optimizers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Use TFF optimizers in custom iterative process
This is an alternative to the Build Your Own Federated Learning Algorithm tutorial and the simple_fedavg example to build a custom iterative process for the federated averaging algorithm. This tutorial will use TFF optimizers instead of Keras optimizers.
The TFF optimizer abstraction is designed to be state-in/state-out, which makes it easier to incorporate into a TFF iterative process. The tff.learning APIs also accept TFF optimizers as an input argument.
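For context, passing a TFF optimizer to the high-level API might look roughly like the sketch below; the builder name and argument names are an assumption and can vary between TFF releases, so check the API documentation for your version. The rest of this tutorial builds the iterative process by hand instead.
# Hypothetical high-level usage (verify the exact API against your installed TFF version).
learning_process = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    client_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=0.01),
    server_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=0.05, momentum=0.9))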
Before we start
Before we start, please run the following to make sure that your environment is
correctly setup. If you don't see a greeting, please refer to the
Installation guide for instructions.
End of explanation
only_digits=True
# Load dataset.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(only_digits)
# Define preprocessing functions.
def preprocess_fn(dataset, batch_size=16):
def batch_format_fn(element):
return (tf.expand_dims(element['pixels'], -1), element['label'])
return dataset.batch(batch_size).map(batch_format_fn)
# Preprocess and sample clients for prototyping.
train_client_ids = sorted(emnist_train.client_ids)
train_data = emnist_train.preprocess(preprocess_fn)
central_test_data = preprocess_fn(
emnist_train.create_tf_dataset_for_client(train_client_ids[0]))
# Define model.
def create_keras_model():
  """The CNN model used in https://arxiv.org/abs/1602.05629."""
data_format = 'channels_last'
input_shape = [28, 28, 1]
max_pool = functools.partial(
tf.keras.layers.MaxPooling2D,
pool_size=(2, 2),
padding='same',
data_format=data_format)
conv2d = functools.partial(
tf.keras.layers.Conv2D,
kernel_size=5,
padding='same',
data_format=data_format,
activation=tf.nn.relu)
model = tf.keras.models.Sequential([
conv2d(filters=32, input_shape=input_shape),
max_pool(),
conv2d(filters=64),
max_pool(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10 if only_digits else 62),
])
return model
# Wrap as `tff.learning.Model`.
def model_fn():
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=central_test_data.element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
Explanation: Preparing data and model
The EMNIST data processing and model are very similar to the simple_fedavg example.
End of explanation
@tf.function
def client_update(model, dataset, server_weights, client_optimizer):
  """Performs local training on the client's dataset."""
# Initialize the client model with the current server weights.
client_weights = model.trainable_variables
# Assign the server weights to the client model.
tf.nest.map_structure(lambda x, y: x.assign(y),
client_weights, server_weights)
# Initialize the client optimizer.
trainable_tensor_specs = tf.nest.map_structure(
lambda v: tf.TensorSpec(v.shape, v.dtype), client_weights)
optimizer_state = client_optimizer.initialize(trainable_tensor_specs)
# Use the client_optimizer to update the local model.
for batch in iter(dataset):
with tf.GradientTape() as tape:
# Compute a forward pass on the batch of data.
outputs = model.forward_pass(batch)
# Compute the corresponding gradient.
grads = tape.gradient(outputs.loss, client_weights)
# Apply the gradient using a client optimizer.
optimizer_state, updated_weights = client_optimizer.next(
optimizer_state, client_weights, grads)
tf.nest.map_structure(lambda a, b: a.assign(b),
client_weights, updated_weights)
# Return model deltas.
return tf.nest.map_structure(tf.subtract, client_weights, server_weights)
@attr.s(eq=False, frozen=True, slots=True)
class ServerState(object):
trainable_weights = attr.ib()
optimizer_state = attr.ib()
@tf.function
def server_update(server_state, mean_model_delta, server_optimizer):
  """Updates the server model weights."""
# Use aggregated negative model delta as pseudo gradient.
negative_weights_delta = tf.nest.map_structure(
lambda w: -1.0 * w, mean_model_delta)
new_optimizer_state, updated_weights = server_optimizer.next(
server_state.optimizer_state, server_state.trainable_weights,
negative_weights_delta)
return tff.structure.update_struct(
server_state,
trainable_weights=updated_weights,
optimizer_state=new_optimizer_state)
Explanation: Custom iterative process
In many cases, federated algorithms have 4 main components:
A server-to-client broadcast step.
A local client update step.
A client-to-server upload step.
A server update step.
In TFF, we generally represent federated algorithms as a tff.templates.IterativeProcess (which we refer to as just an IterativeProcess throughout). This is a class that contains initialize and next functions. Here, initialize is used to initialize the server, and next will perform one communication round of the federated algorithm.
We will introduce different components to build the federated averaging (FedAvg) algorithm, which will use an optimizer in the client update step, and another optimizer in the server update step. The core logics of client and server updates can be expressed as pure TF blocks.
TF blocks: client and server update
On each client, a local client_optimizer is initialized and used to update the client model weights. On the server, server_optimizer will use the state from the previous round, and update the state for the next round.
End of explanation
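Before the full orchestration below, here is a minimal stand-alone illustration of the state-in/state-out optimizer API used in the update functions above (toy tensors chosen purely for demonstration):
# Demonstrate initialize/next on a tiny weight structure.
demo_optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.1)
demo_weights = (tf.constant([1.0, 2.0]),)
demo_specs = tf.nest.map_structure(
    lambda t: tf.TensorSpec(t.shape, t.dtype), demo_weights)
demo_state = demo_optimizer.initialize(demo_specs)
demo_grads = (tf.constant([0.5, -0.5]),)
demo_state, demo_weights = demo_optimizer.next(demo_state, demo_weights, demo_grads)
print(demo_weights)  # one SGD step: weights - 0.1 * grads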
# 1. Server and client optimizer to be used.
server_optimizer = tff.learning.optimizers.build_sgdm(
learning_rate=0.05, momentum=0.9)
client_optimizer = tff.learning.optimizers.build_sgdm(
learning_rate=0.01)
# 2. Functions return initial state on server.
@tff.tf_computation
def server_init():
model = model_fn()
trainable_tensor_specs = tf.nest.map_structure(
lambda v: tf.TensorSpec(v.shape, v.dtype), model.trainable_variables)
optimizer_state = server_optimizer.initialize(trainable_tensor_specs)
return ServerState(
trainable_weights=model.trainable_variables,
optimizer_state=optimizer_state)
@tff.federated_computation
def server_init_tff():
return tff.federated_value(server_init(), tff.SERVER)
# 3. One round of computation and communication.
server_state_type = server_init.type_signature.result
print('server_state_type:\n',
server_state_type.formatted_representation())
trainable_weights_type = server_state_type.trainable_weights
print('trainable_weights_type:\n',
trainable_weights_type.formatted_representation())
# 3-1. Wrap server and client TF blocks with `tff.tf_computation`.
@tff.tf_computation(server_state_type, trainable_weights_type)
def server_update_fn(server_state, model_delta):
return server_update(server_state, model_delta, server_optimizer)
whimsy_model = model_fn()
tf_dataset_type = tff.SequenceType(whimsy_model.input_spec)
print('tf_dataset_type:\n',
tf_dataset_type.formatted_representation())
@tff.tf_computation(tf_dataset_type, trainable_weights_type)
def client_update_fn(dataset, server_weights):
model = model_fn()
return client_update(model, dataset, server_weights, client_optimizer)
# 3-2. Orchestration with `tff.federated_computation`.
federated_server_type = tff.FederatedType(server_state_type, tff.SERVER)
federated_dataset_type = tff.FederatedType(tf_dataset_type, tff.CLIENTS)
@tff.federated_computation(federated_server_type, federated_dataset_type)
def run_one_round(server_state, federated_dataset):
# Server-to-client broadcast.
server_weights_at_client = tff.federated_broadcast(
server_state.trainable_weights)
# Local client update.
model_deltas = tff.federated_map(
client_update_fn, (federated_dataset, server_weights_at_client))
# Client-to-server upload and aggregation.
mean_model_delta = tff.federated_mean(model_deltas)
# Server update.
server_state = tff.federated_map(
server_update_fn, (server_state, mean_model_delta))
return server_state
# 4. Build the iterative process for FedAvg.
fedavg_process = tff.templates.IterativeProcess(
initialize_fn=server_init_tff, next_fn=run_one_round)
print('type signature of `initialize`:\n',
fedavg_process.initialize.type_signature.formatted_representation())
print('type signature of `next`:\n',
fedavg_process.next.type_signature.formatted_representation())
Explanation: TFF blocks: tff.tf_computation and tff.federated_computation
We now use TFF for orchestration and build the iterative process for FedAvg. We have to wrap the TF blocks defined above with tff.tf_computation, and use TFF methods tff.federated_broadcast, tff.federated_map, tff.federated_mean in a tff.federated_computation function. It is easy to use the tff.learning.optimizers.Optimizer APIs with initialize and next functions when defining a custom iterative process.
End of explanation
def evaluate(server_state):
keras_model = create_keras_model()
tf.nest.map_structure(
lambda var, t: var.assign(t),
keras_model.trainable_weights, server_state.trainable_weights)
metric = tf.keras.metrics.SparseCategoricalAccuracy()
for batch in iter(central_test_data):
preds = keras_model(batch[0], training=False)
metric.update_state(y_true=batch[1], y_pred=preds)
return metric.result().numpy()
server_state = fedavg_process.initialize()
acc = evaluate(server_state)
print('Initial test accuracy', acc)
# Evaluate after a few rounds
CLIENTS_PER_ROUND=2
sampled_clients = train_client_ids[:CLIENTS_PER_ROUND]
sampled_train_data = [
train_data.create_tf_dataset_for_client(client)
for client in sampled_clients]
for round in range(20):
server_state = fedavg_process.next(server_state, sampled_train_data)
acc = evaluate(server_state)
print('Test accuracy', acc)
Explanation: Evaluating the algorithm
We evaluate the performance on a centralized evaluation dataset.
End of explanation |
3,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Home Assignment No. 3
Step1: <br>
Bayesian Models. GLM
Task 1 (1 pt.)
Consider a univariate Gaussian distribution $\mathcal{N}(x; \mu, \tau^{-1})$.
Let's define Gaussian-Gamma prior for parameters $(\mu, \tau)$
Step2: <br>
Task 2.2 (1 pt.)
Use the diagonal approximation of the Hessian computed by autodifferentiation
in pytorch.
Step3: <br>
Task 2.3 (1 pt.)
Compare the results using the absolute errors (this is possible with a Monte-Carlo estimate of the integral). Write 1-2 sentences in the results discussion.
Step4: BEGIN Solution
So, we have got a big absolute error in the second line because we used the diagonal approximation of the Hessian, which neglects the values off the diagonal
END Solution
<br>
Gaussian Processes
Task 3 (1 + 2 = 3 pt.)
Task 3.1 (1 pt.)
Assuimng the matrices $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{d \times d}$
are invertible, using gaussian elimination find the inverse matrix for the following
block matrix
Step5: <br>
Task 4.2 (2 pt.)
Use GPy library for training and prediction. Fit a GP and run the predict on the test. Useful kernels to combine | Python Code:
import numpy as np
import pandas as pd
import torch
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Home Assignment No. 3: Part 1
In this part of the homework you are to solve several problems related to machine learning algorithms.
* For every separate problem you can get only 0 points or maximal points for this problem. There are NO INTERMEDIATE scores.
* Your solution must be COMPLETE, i.e. contain all required formulas/proofs/detailed explanations.
* You must write your solution for any problem just right after the words BEGIN SOLUTION. Attaching pictures of your handwriting is allowed, but highly discouraged.
* If you want an easy life, you have to use BUILT-IN METHODS of the sklearn library instead of writing tons of your own code. There exists a class/method for almost everything you can imagine (related to this homework).
* To do some tasks in this part of homework, you have to write CODE directly inside specified places inside notebook CELLS.
* In some problems you may be asked to provide short discussion of the results. In this cases you have to create MARKDOWN cell with your comments right after the your code cell.
* Your SOLUTION notebook MUST BE REPRODUCIBLE, i.e. if the reviewer decides to execute Kernel -> Restart Kernel and Run All Cells, after all the computation he will obtain exactly the same solution (with all the corresponding plots) as in your uploaded notebook. For this purpose, we suggest to fix random seed or (better) define random_state= inside every algorithm that uses some pseudorandomness.
Your code must be clear to the reviewer. For this purpose, try to include neccessary comments inside the code. But remember: GOOD CODE MUST BE SELF-EXPLANATORY without any additional comments.
The are problems with * mark - they are not obligatory. You can get EXTRA POINTS for solving them.
$\LaTeX$ in Jupyter
Jupyter has constantly improving $\LaTeX$ support. Below are the basic methods to
write neat, tidy, and well typeset equations in your notebooks:
* to write an inline equation use
markdown
$ you latex equation here $
* to write an equation, that is displayed on a separate line use
markdown
$$ you latex equation here $$
* to write a block of equations use
markdown
\begin{align}
left-hand-side
&= right-hand-side on line 1
\\
&= right-hand-side on line 2
\\
&= right-hand-side on the last line
\end{align}
The ampersand (&) aligns the equations horizontally and the double backslash
(\\) creates a new line.
Write your theoretical derivations within such blocks:
```markdown
BEGIN Solution
<!-- >>> your derivation here <<< -->
END Solution
```
Please, write your implementation within the designated blocks:
```python
...
BEGIN Solution
>>> your solution here <<<
END Solution
...
```
<br>
End of explanation
import numdifftools as nd
from scipy.optimize import minimize
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
### BEGIN Solution
def p_star_w(w):
x = np.array([2/3, 1/6, 1/6], dtype=np.float64)
E = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]], dtype=np.float64)
left_part = 1/(1+np.exp(- w.T @ x))
right_part = multivariate_normal(mean=[0,0,0], cov=E).pdf(w)
return left_part * right_part
def log_star_w(w):
return -np.log(p_star_w(w))
w_0 = minimize(log_star_w, np.array([1,2,1], dtype=np.float64)).x
Hessian = nd.Hessian(log_star_w)
A = Hessian(w_0)
Z_p = p_star_w(w_0) * np.sqrt((2*np.pi)**3/np.linalg.det(A))
print("The value of intergral:", Z_p)
### END Solution
Explanation: <br>
Bayesian Models. GLM
Task 1 (1 pt.)
Consider a univariate Gaussian distribution $\mathcal{N}(x; \mu, \tau^{-1})$.
Let's define Gaussian-Gamma prior for parameters $(\mu, \tau)$:
\begin{equation}
p(\mu, \tau)
= \mathcal{N}(\mu; \mu_0, (\beta \tau)^{-1})
\otimes \text{Gamma}(\tau; a, b)
\,.
\end{equation}
Find the posterior distribution of $(\mu, \tau)$ after observing $X = (x_1, \dots, x_n)$.
BEGIN Solution
$$
\mathbb{P}(\mu, \tau | X) \propto p(\mu, \tau) \mathbb{P}(X | \mu, \tau)
$$ Since the samples are independent given $(\mu, \tau)$:
$$
\mathbb{P}(X | \mu, \tau) = \prod_{i=1}^n \mathbb{P}(x_i | \mu, \tau) $$
Since each $x_i$ is Gaussian with mean $\mu$ and precision $\tau$:
$$ \mathbb{P}(X | \mu, \tau) = \prod_{i=1}^n \frac{\tau^{\frac{1}{2}}}{\sqrt{2 \pi}} \exp{ \Big[-\frac{\tau (x_i - \mu)^2 }{2}\Big]} =
\frac{\tau^{\frac{n}{2}}}{(2 \pi)^{\frac{n}{2}}} \exp{ \Big[-\frac{\tau}{2} \sum_{i=1}^n(x_i - \mu)^2 \Big]} $$
$$ p(\mu, \tau) = \mathcal{N}(\mu; \mu_0, (\beta \tau)^{-1}) \otimes \text{Gamma}(\tau; a, b) $$
$$ p(\mu, \tau) = \frac{b^a \beta ^{\frac{1}{2}}}{(2\pi)^{\frac{1}{2}}\Gamma(a)} \tau^{a-\frac{1}{2}} e^{-b\tau} \exp{\Big( - \frac{\beta \tau}{2} (\mu - \mu_0)^2 \Big)} $$
$$ p(\mu, \tau | X) \propto \tau^{\frac{n}{2} + a - \frac{1}{2}} e^{-b\tau} \exp{\Big( - \frac{\tau}{2} \Big[ \beta(\mu - \mu_0)^2 + \sum_{i=1}^{n} (x_i - \mu)^2 \Big] \Big)} $$
$$ \sum_{i=1}^n (x_i - \mu)^2 = ns + n(\overline{x} - \mu)^2, \, \overline{x} = \frac{1}{n} \sum_{i=1}^n x_i, \, s=\frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2 $$
$$ \exp{\Big[ -\frac{\tau}{2} \Big[ \beta (\mu - \mu_0)^2 +ns + n(\overline{x} - \mu)^2 \Big] \Big]} \exp{(-b\tau)}
= \exp{\Big[ -\tau \Big( \frac{1}{2} ns + b \Big) \Big]} \exp{\Big[ - \frac{\tau}{2} \Big( \beta(\mu - \mu_0)^2 + n (\overline{x} - \mu)^2 \Big) \Big]}
$$
After simple regrouping:
$$ \beta (\mu - \mu_0)^2 + n(\overline{x} - \mu)^2 = (\beta + n) \Big( \mu - \frac{\beta \mu_0 + n \overline{x}}{\beta + n} \Big)^2 + \frac{\beta n ( \overline{x} - \mu_0)^2}{\beta + n}
$$
$$ p(\mu, \tau | X) \propto \tau^{\frac{n}{2} + a - \frac{1}{2}} \exp{\Big[ -\tau \Big( \frac{1}{2} ns + b \Big) \Big]} \exp{\Big[ - \frac{\tau}{2} \Big[ (\beta + n) \Big( \mu - \frac{\beta \mu_0 + n \overline{x}}{\beta + n} \Big)^2 + \frac{\beta n ( \overline{x} - \mu_0)^2}{\beta + n} \Big] \Big]}
$$
Again regrouping:
$$ p(\mu, \tau | X) \propto \tau^{\frac{n}{2} + a - \frac{1}{2}} \exp{\Big[ -\tau \Big[ \frac{1}{2} ns + b + \frac{\beta n ( \overline{x} - \mu_0)^2}{2(\beta + n)} \Big] \Big]} \exp{\Big[ - \frac{\tau}{2} (\beta + n) \Big( \mu - \frac{\beta \mu_0 + n \overline{x}}{\beta + n} \Big)^2 \Big]}
$$
Finally:
$$ \boxed{ p(\mu, \tau | X) \propto \Gamma\Big(\tau, \frac{n}{2} + a, \frac{1}{2} ns +b + \frac{\beta n ( \overline{x} - \mu_0)^2}{2(\beta + n)} \Big) \mathcal{N}\Big( \mu, \frac{\beta \mu_0 + n \overline{x}}{\beta + n}, (\tau(\beta + n))^{-1}\Big) }
$$
END Solution
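The boxed update is easy to sanity-check numerically. Below is a minimal sketch (not part of the assignment; the function and variable names are my own) that computes the posterior hyperparameters implied by the formula above:
```python
import numpy as np

def gaussian_gamma_posterior(x, mu0=0.0, beta=1.0, a=1.0, b=1.0):
    # Conjugate Normal-Gamma update derived above.
    n = len(x)
    x_bar = x.mean()
    s = ((x - x_bar) ** 2).mean()
    mu_n = (beta * mu0 + n * x_bar) / (beta + n)
    beta_n = beta + n
    a_n = a + n / 2
    b_n = b + 0.5 * n * s + beta * n * (x_bar - mu0) ** 2 / (2 * (beta + n))
    return mu_n, beta_n, a_n, b_n

x = np.random.default_rng(0).normal(loc=2.0, scale=0.5, size=1000)
print(gaussian_gamma_posterior(x))  # mu_n should be close to 2.0, a_n/b_n close to 1/0.5**2
```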
<br>
Task 2 (1 + 1 + 1 = 3 pt.)
Evaluate the following integral using the Laplace approximation:
\begin{equation}
x \mapsto \int \sigma(w^T x) \mathcal{N}(w; 0, \Sigma) dw \,,
\end{equation}
for $x = \bigl(\tfrac23, \tfrac16, \tfrac16\bigr)\in \mathbb{R}^3$ and
\begin{equation}
\Sigma
= \begin{pmatrix}
1 & -0.25 & 0.75 \
-0.25 & 1 & 0.5 \
0.75 & 0.5 & 2
\end{pmatrix}
\,.
\end{equation}
Task 2.1 (1 pt.)
Use the Hessian matrix computed numerically via finite differences. (Check out Numdifftools)
End of explanation
import torch
from torch.autograd import Variable, grad
### BEGIN Solution
def pt_p_star_w(w):
x = np.array([2/3, 1/6, 1/6], dtype=np.float64)
E = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]], dtype=np.float64)
left_part = torch.sigmoid(torch.dot(w, Variable(torch.from_numpy(x).type(torch.FloatTensor))))
right_part = 1 / (( 2 * np.pi )**(3/2) * np.linalg.det(E)**(1/2)) *\
torch.exp(-0.5 * w @ Variable(torch.from_numpy(np.linalg.inv(E)).type(torch.FloatTensor))@w)
return left_part * right_part
def pt_log_star_w(w):
return -torch.log(pt_p_star_w(w))
def hessian_diag(func, w):
w = Variable(torch.FloatTensor(w), requires_grad=True)
grad_params = torch.autograd.grad(func(w), w, create_graph=True)
hessian = [torch.autograd.grad(grad_params[0][i], w, create_graph=True)[0].data.numpy() \
for i in range(3)]
return np.diagonal(hessian)*np.eye(3)
A = hessian_diag(pt_log_star_w, w_0)
pt_Z_p = (np.sqrt((2*np.pi)**3 / np.linalg.det(A)) *\
pt_p_star_w(Variable(torch.from_numpy(w_0).type(torch.FloatTensor)))).data.numpy()
print('Integral value is', pt_Z_p)
### END Solution
Explanation: <br>
Task 2.2 (1 pt.)
Use the diagonal approximation of the Hessian computed by autodifferentiation
in pytorch.
End of explanation
from scipy.integrate import tplquad
### BEGIN Solution
def p_star_w_adapter(x, y, z):
return p_star_w(np.array([x,y,z]))
acc_Z_p = tplquad(p_star_w_adapter, -10, 10, -10, 10, -10, 10)
print("Laplace method: %.05f" % abs(acc_Z_p[0] - Z_p))
print("Diag. Hessian Approx: %.05f" % abs(acc_Z_p[0] - pt_Z_p))
### END Solution
Explanation: <br>
Task 2.3 (1 pt.)
Compare the two approximations by their absolute errors against an accurate value of the integral (for example, a Monte-Carlo or direct numerical estimate). Write 1-2 sentences discussing the results.
End of explanation
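Task 2.3 mentions a Monte-Carlo estimate; for completeness, here is a small sketch of that estimate (the sample size and seed are arbitrary choices of mine):
```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1, -0.25, 0.75], [-0.25, 1, 0.5], [0.75, 0.5, 2]])
x = np.array([2/3, 1/6, 1/6])
W = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
mc_Z_p = np.mean(1.0 / (1.0 + np.exp(-W @ x)))  # E_w[sigmoid(w^T x)] under N(0, Sigma)
print("Monte-Carlo estimate:", mc_Z_p)
```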
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
### BEGIN Solution
df = pd.read_csv('data/monthly_co2_mlo.csv')
df = df.replace(-99.99, np.nan).dropna()
df.head(10)
y = df['CO2 [ppm]']
X = df.drop(['CO2 [ppm]'], axis=1)
X['year'] -= 1958
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, shuffle=False, test_size=0.25)
X.head(10)
### END Solution
scaler = StandardScaler()
y_test_min = np.min(y_train.values)
y_test_abs = np.max(y_train.values) - np.min(y_train.values)
y_train_scaled = scaler.fit_transform(y_train.values.reshape(-1, 1))
y_test_scaled = scaler.transform(y_test.values.reshape(-1, 1))
plt.figure(figsize=(14, 5))
plt.plot(X_train['year'], y_train_scaled)
plt.plot(X_test['year'], y_test_scaled)
plt.axvline(x=0.75 * np.max([np.max(X_train['year'].values), np.max(X_test['year'].values)]), c='black', ls='-')
plt.grid()
plt.ylabel(r'${CO}_2$', size=18)
plt.xlabel('Train and test split', size=18)
plt.show()
Explanation: BEGIN Solution
The diagonal-Hessian approximation (second line) has a much larger absolute error, because forcing the Hessian to be diagonal discards all of the off-diagonal curvature information.
END Solution
<br>
Gaussian Processes
Task 3 (1 + 2 = 3 pt.)
Task 3.1 (1 pt.)
Assuming the matrices $A \in \mathbb{R}^{n \times n}$ and $D \in \mathbb{R}^{d \times d}$
are invertible, use Gaussian elimination to find the inverse of the following
block matrix:
\begin{equation}
\begin{pmatrix} A & B \ C & D \end{pmatrix} \,,
\end{equation}
where $C \in \mathbb{R}^{d \times n}$ and $B \in \mathbb{R}^{n \times d}$.
BEGIN Solution
$$
\Bigg(
\begin{array}{cc|cc}
A & B & I_n & 0\\
C & D & 0 & I_d\\
\end{array}
\Bigg)
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & A^{-1} B & A^{-1} & 0\\
C & D & 0 & I_d\\
\end{array}
\Bigg)
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & A^{-1} B & A^{-1} & 0\\
0 & D - C A^{-1} B & - C A^{-1} & I_d\\
\end{array}
\Bigg)
\sim
$$
$$
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & A^{-1} B & A^{-1} & 0\\
0 & I_d & - (D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}\\
\end{array}
\Bigg)
\sim
\Bigg(
\begin{array}{cc|cc}
I_n & 0 & A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} & - A^{-1} B (D - C A^{-1} B)^{-1} \\
0 & I_d & - (D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}\\
\end{array}
\Bigg) $$
Finally, with the Schur complement $S = D - C A^{-1} B$,
$$
\boxed {\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
=
\Bigg(
\begin{array}{cc}
A^{-1} + A^{-1} B S^{-1} C A^{-1} & - A^{-1} B S^{-1} \\
- S^{-1} C A^{-1} & S^{-1}\\
\end{array}
\Bigg) }
$$
END Solution
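A quick numerical check of the block-inversion identity (a standalone sketch; random Gaussian matrices are almost surely invertible here):
```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, d))
C, D = rng.normal(size=(d, n)), rng.normal(size=(d, d))

M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)
Si = np.linalg.inv(D - C @ Ai @ B)          # inverse of the Schur complement of A
M_inv = np.block([
    [Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
    [-Si @ C @ Ai,              Si],
])
print(np.allclose(M_inv, np.linalg.inv(M)))  # True
```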
<br>
Task 3.2 (2 pt.)
Assume that the function $y(x)$, $x \in \mathbb{R}^d$, is a realization of the Gaussian
Process $GP\bigl(0; K(\cdot, \cdot)\bigr)$ with $K(a, b) = \exp(-\gamma \|a - b\|_2^2)$.
Suppose two datasets were observed: noiseless ${D_0}$ and noisy ${D_1}$
\begin{aligned}
& D_0 = \bigl(x_i, y(x_i) \bigr)_{i=1}^{n} \,, \\
& D_1 = \bigl(x^\prime_i, y(x^\prime_i) + \varepsilon_i \bigr)_{i=1}^{m} \,,
\end{aligned}
where $\varepsilon_i \sim \text{ iid } \mathcal{N}(0, \sigma^2)$, independent of process $y$.
Derive the conditional distribution of $y(x) \big\vert_{D_0, D_1}$ at a new $x$.
BEGIN Solution
END Solution
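While the derivation itself is left to the assignment, the resulting predictive rule is straightforward to prototype: stack the noiseless and noisy observations, add $\sigma^2 I$ only to the noisy block of the joint covariance, and condition as usual. A hedged numpy sketch (the names and the gamma value are my own choices):
```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Squared-exponential kernel exp(-gamma * ||a - b||^2) between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_conditional(x_new, X0, y0, X1, y1, sigma2, gamma=1.0):
    X = np.vstack([X0, X1])
    y = np.concatenate([y0, y1])
    K = rbf(X, X, gamma)
    K[len(X0):, len(X0):] += sigma2 * np.eye(len(X1))   # noise only on the D1 block
    k_star = rbf(x_new, X, gamma)
    alpha = np.linalg.solve(K, k_star.T)
    mean = alpha.T @ y
    cov = rbf(x_new, x_new, gamma) - k_star @ alpha
    return mean, cov
```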
<br>
Task 4 (1 + 2 = 3 pt.)
Task 4.1 (1 pt.)
In the late 1950’s Charles Keeling invented an accurate way to measure atmospheric $CO_2$ concentration and began taking regular measurements at the Mauna Loa observatory.
Take monthly_co2_mlo.csv file, load it and prepare the data.
Load the CO2 [ppm] time series
Replace $-99.99$ with NaN and drop the missing observations
Split the time series into train and test
Normalize the target value by fitting a transformation on the train
Plot the resulting target against the time index
End of explanation
from GPy.models import GPRegression
from GPy.kern import RBF, Poly, StdPeriodic, White, Linear
from sklearn.metrics import r2_score
### BEGIN Solution
kernels = RBF(input_dim=1, variance=1., lengthscale=10.) + \
Poly(input_dim=1) + \
StdPeriodic(input_dim=1) + \
White(input_dim=1) + \
Linear(input_dim=1)
gpr = GPRegression(X_train['year'].values.reshape(-1, 1), y_train_scaled, kernels)
gpr.plot(figsize=(13,4))
plt.show()
### END Solution
predicted = gpr.predict(X_test['year'].values.reshape(-1, 1))
plt.figure(figsize=(13,4))
plt.plot(scaler.inverse_transform(y_test_scaled), scaler.inverse_transform(y_test_scaled), label='x = y', c='r')
plt.scatter(scaler.inverse_transform(predicted[0]), scaler.inverse_transform(y_test_scaled), label="")
plt.title("QQ - plot", size=16)
plt.xlabel("True value", size=16)
plt.ylabel("Predicted values", size=16)
plt.legend()
plt.show()
r2_score(predicted[0], y_test_scaled)
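The task also asks for the mean and a confidence interval of the prediction over the test period; a possible sketch using GPy's predict_quantiles (the plotting choices are my own):
```python
mean, var = gpr.predict(X_test['year'].values.reshape(-1, 1))
lo, hi = gpr.predict_quantiles(X_test['year'].values.reshape(-1, 1), quantiles=(2.5, 97.5))
plt.figure(figsize=(13, 4))
plt.plot(X_test['year'].values, y_test_scaled, 'k.', label='observed (scaled)')
plt.plot(X_test['year'].values, mean, 'b', label='GP mean')
plt.fill_between(X_test['year'].values, lo.ravel(), hi.ravel(), alpha=0.3, label='95% interval')
plt.legend()
plt.show()
```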
Explanation: <br>
Task 4.2 (2 pt.)
Use the GPy library for training and prediction. Fit a GP and run prediction on the test set. Useful kernels to combine: GPy.kern.RBF, GPy.kern.Poly, GPy.kern.StdPeriodic, GPy.kern.White, GPy.kern.Linear.
Plot the mean and confidence interval of the prediction.
Inspect the predictions with a scatter plot: plot the predicted points/time series against the true values.
Estimate the prediction error with r2_score. An R2 score above 0.83 on the test sample is accepted.
End of explanation |
3,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
train_iris.py コードの補足説明
はじめに
train_iris.py をそのまま動かす際には scikitlearn が必要となるので、
shell
pip install scikit-learn
conda install scikit-learn
のどちらかを実行してほしい。
Step1: cupy フラグのたて方
CUDA が使えるならば、データセットを numpy 配列ではなく cupy 配列 とすることで、
データセットをGPUメモリに載せられるため高速化につながる。
(だが iris はデータそのものがあまりにが小さいので恩恵がないが…)
下記のようなコードにすると、簡単に numpy 配列と cupy 配列を切り替えることができる。
Step2: データセットの読み込み
iris データセットは scikit-learn に用意されているので、それを利用した。
Step3: データセットの分割
scikit-learn にある train_test_sprit を使えば、簡単にデータセットを分割できる
test_size
だが Docstring に "Split arrays or matrices into random train and test subsets" とあるように、
ランダムに分割するため分割後のラベルの数が統一されていない
test_size (or train_size) に 0.0 - 1.0 の値を与えると、その割合を test (or train) にしてくれる。
Step4: またオプションとして、train_test_sprit を使わず、
偶数番目のデータを train 、奇数番目のデータを test にするオプションも用意した
実行時の引数に --spritintwo y or -s y とすれば実行される | Python Code:
from chainer import cuda
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
import pandas as pd
Explanation: train_iris.py コードの補足説明
はじめに
train_iris.py をそのまま動かす際には scikitlearn が必要となるので、
shell
pip install scikit-learn
conda install scikit-learn
のどちらかを実行してほしい。
End of explanation
# gpu -> args.gpu と読み替えてほしい
gpu = -1
if gpu >= 0:
# chainer.cuda.get_device(args.gpu).use() # make a specified gpu current
# model.to_gpu() # copy the model to the gpu
xp = cuda.cupy
else:
xp = np
Explanation: cupy フラグのたて方
CUDA が使えるならば、データセットを numpy 配列ではなく cupy 配列 とすることで、
データセットをGPUメモリに載せられるため高速化につながる。
(だが iris はデータそのものがあまりにが小さいので恩恵がないが…)
下記のようなコードにすると、簡単に numpy 配列と cupy 配列を切り替えることができる。
End of explanation
iris = datasets.load_iris()
pd.DataFrame({
'sepal length': np.array([iris.data[x][0] for x in range(150)]), #len(iris.data) -> 150
'sepal width': np.array([iris.data[x][1] for x in range(150)]),
'petal length': np.array([iris.data[x][2] for x in range(150)]),
'petal width': np.array([iris.data[x][3] for x in range(150)]),
'target label': np.array(iris.target)
})
Explanation: データセットの読み込み
iris データセットは scikit-learn に用意されているので、それを利用した。
End of explanation
data_train, data_test, tgt_train, tgt_test = train_test_split(iris.data, iris.target, test_size=0.5)
from collections import Counter
Counter(tgt_train)
Explanation: データセットの分割
scikit-learn にある train_test_sprit を使えば、簡単にデータセットを分割できる
test_size
だが Docstring に "Split arrays or matrices into random train and test subsets" とあるように、
ランダムに分割するため分割後のラベルの数が統一されていない
test_size (or train_size) に 0.0 - 1.0 の値を与えると、その割合を test (or train) にしてくれる。
End of explanation
index = np.arange(len(iris.data))
data_train, data_test = iris.data[index[index%2!=0],:], iris.data[index[index%2==0],:]
tgt_train, tgt_test = iris.target[index[index%2!=0]], iris.target[index[index%2==0]]
Counter(tgt_train)
Explanation: またオプションとして、train_test_sprit を使わず、
偶数番目のデータを train 、奇数番目のデータを test にするオプションも用意した
実行時の引数に --spritintwo y or -s y とすれば実行される
End of explanation |
3,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
107
Step1: The swissmetro dataset used in this example is conveniently bundled with Larch,
accessible using the data_warehouse module. We'll load this file using
the pandas read_csv command.
Step2: We can inspect a few rows of data to see what we have using the head method.
Step3: The Biogeme code includes a variety of commands to manipulate the data
and create new variables. Because Larch sits on top of pandas, a reasonable
method to create new variables is to just create new columns in the
source pandas.DataFrame in the usual manner for any DataFrame.
Step4: You can also use the eval method of pandas DataFrames.
This method takes an expression as a string
and evaluates it within a namespace that has already loaded the
column names as variables.
Step5: This can allow for writing data
expressions more succinctly, as long as all your variable names
are strings that can also be the names of variables in Python.
If this isn't the case (e.g., if any variable names have spaces
in the name) you'll be better off if you stay away from this
feature.
We can mix and match between these two method to create new
columns in any DataFrame as needed.
Step6: Removing some observations can also be done directly using pandas.
Here we identify a subset of observations that we want to keep.
Step7: You may note that we don't assign this value to a column within the
raw DataFrame. This is perfectly acceptable, as the output from
the eval method is just a normal pandas.Series, like any other
single column output you might expect to get from a pandas method.
When you've created the data you need, you can pass the dataframe to
the larch.DataFrames constructor. Since the swissmetro data is in
idco format, we'll need to explicitly identify the alternative
codes as well.
Step8: The info method of the DataFrames object gives a short summary
of the contents.
Step9: A longer summary is available by setting verbose to True.
Step10: You may have noticed that the info summary notes that this data is "not computation-ready".
That's because some of the data columns are stored as integers, which can be observed by
inspecting the info on the data_co dataframe.
Step11: When computations are run, we'll need all the data to be in float format, but Larch knows this and will
handle it for you later.
Class Model Setup
Having prepped our data, we're ready to set up discrete choices models
for each class in the latent class model. We'll reproduce the Biogeme
example exactly here, as a technology demonstation. Each of two classes
will be set up with a simple MNL model.
Step12: Class Membership Model
For Larch, the class membership model will be set up as yet another discrete choice model.
In this case, the choices are not the ultimate choices, but instead are the latent classes.
To remain consistent with the Biogeme example, we'll set up this model with only a single
constant that determines class membership. Unlike Biogeme, this class membership will
be represented with an MNL model, not a simple direct probability.
Step13: The utility function of the first class isn't written here, which means it will implicitly
be set as 0.
Latent Class Model
Now we're ready to create the latent class model itself, by assembling the components
we created above. The constructor for the LatentClassModel takes two arguments,
a class membership model, and a dictionary of class models, where the keys in the
dictionary correspond to the identifying codes from the utility functions we wrote
for the class membership model.
Step14: The we'll load the data needed for our models using the load_data method.
This step will assemble the data needed, and convert it to floating point
format as required.
Step15: Only the data actually needed by the models has been converted, which may help
keep memory usage down on larger models. You may also note that the loaded
dataframes no longer reports that it is "not computational-ready".
To estimate the model, we'll use the maximize_loglike method. When run
in Jupyter, a live-view report of the parmeters and log likelihood is displayed.
Step16: To complete our analysis, we can compute the log likelihood at "null" parameters.
Step17: And the parameter covariance matrixes.
Step18: Reporting Results
And then generate a report of the estimation statistics. Larch includes a Reporter class
to help you assemble a report containing the relevant output you want.
Step19: Pipe into the report section headers in markdown format (use one hash for top level
headings, two hashes for lower levels, etc.)
Step20: You can also pipe in dataframes directly, include the pf parameter frame from the model.
Step21: And a selection of pre-formatted summary sections.
Step22: In addition to reviewing report sections in a Jupyter notebook, the
entire report can be saved to an HTML file. | Python Code:
import larch
import pandas as pd
from larch.roles import P,X
Explanation: 107: Latent Class Models
In this example, we will replicate the latent class example model
from Biogeme.
End of explanation
from larch import data_warehouse
raw = pd.read_csv(larch.data_warehouse.example_file('swissmetro.csv.gz'))
Explanation: The swissmetro dataset used in this example is conveniently bundled with Larch,
accessible using the data_warehouse module. We'll load this file using
the pandas read_csv command.
End of explanation
raw.head()
Explanation: We can inspect a few rows of data to see what we have using the head method.
End of explanation
raw['SM_COST'] = raw['SM_CO'] * (raw["GA"]==0)
Explanation: The Biogeme code includes a variety of commands to manipulate the data
and create new variables. Because Larch sits on top of pandas, a reasonable
method to create new variables is to just create new columns in the
source pandas.DataFrame in the usual manner for any DataFrame.
End of explanation
raw['TRAIN_COST'] = raw.eval("TRAIN_CO * (GA == 0)")
Explanation: You can also use the eval method of pandas DataFrames.
This method takes an expression as a string
and evaluates it within a namespace that has already loaded the
column names as variables.
End of explanation
raw['TRAIN_COST_SCALED'] = raw['TRAIN_COST'] / 100
raw['TRAIN_TT_SCALED'] = raw['TRAIN_TT'] / 100
raw['SM_COST_SCALED'] = raw.eval('SM_COST / 100')
raw['SM_TT_SCALED'] = raw['SM_TT'] / 100
raw['CAR_CO_SCALED'] = raw['CAR_CO'] / 100
raw['CAR_TT_SCALED'] = raw['CAR_TT'] / 100
raw['CAR_AV_SP'] = raw.eval("CAR_AV * (SP!=0)")
raw['TRAIN_AV_SP'] = raw.eval("TRAIN_AV * (SP!=0)")
Explanation: This can allow for writing data
expressions more succinctly, as long as all your variable names
are strings that can also be the names of variables in Python.
If this isn't the case (e.g., if any variable names have spaces
in the name) you'll be better off if you stay away from this
feature.
We can mix and match between these two method to create new
columns in any DataFrame as needed.
End of explanation
keep = raw.eval("PURPOSE in (1,3) and CHOICE != 0")
Explanation: Removing some observations can also be done directly using pandas.
Here we identify a subset of observations that we want to keep.
End of explanation
dfs = larch.DataFrames(raw[keep], alt_codes=[1,2,3])
Explanation: You may note that we don't assign this value to a column within the
raw DataFrame. This is perfectly acceptable, as the output from
the eval method is just a normal pandas.Series, like any other
single column output you might expect to get from a pandas method.
When you've created the data you need, you can pass the dataframe to
the larch.DataFrames constructor. Since the swissmetro data is in
idco format, we'll need to explicitly identify the alternative
codes as well.
End of explanation
dfs.info()
Explanation: The info method of the DataFrames object gives a short summary
of the contents.
End of explanation
dfs.info(verbose=True)
Explanation: A longer summary is available by setting verbose to True.
End of explanation
dfs.data_co.info()
Explanation: You may have noticed that the info summary notes that this data is "not computation-ready".
That's because some of the data columns are stored as integers, which can be observed by
inspecting the info on the data_co dataframe.
End of explanation
m1 = larch.Model(dataservice=dfs)
m1.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m1.choice_co_code = 'CHOICE'
m1.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_COST_SCALED") * P("B_COST")
m1.utility_co[2] = X("SM_COST_SCALED") * P("B_COST")
m1.utility_co[3] = P("ASC_CAR") + X("CAR_CO_SCALED") * P("B_COST")
m2 = larch.Model(dataservice=dfs)
m2.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m2.choice_co_code = 'CHOICE'
m2.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_TT_SCALED") * P("B_TIME") + X("TRAIN_COST_SCALED") * P("B_COST")
m2.utility_co[2] = X("SM_TT_SCALED") * P("B_TIME") + X("SM_COST_SCALED") * P("B_COST")
m2.utility_co[3] = P("ASC_CAR") + X("CAR_TT_SCALED") * P("B_TIME") + X("CAR_CO_SCALED") * P("B_COST")
Explanation: When computations are run, we'll need all the data to be in float format, but Larch knows this and will
handle it for you later.
Class Model Setup
Having prepped our data, we're ready to set up discrete choices models
for each class in the latent class model. We'll reproduce the Biogeme
example exactly here, as a technology demonstration. Each of two classes
will be set up with a simple MNL model.
End of explanation
mk = larch.Model()
mk.utility_co[2] = P("W_OTHER")
Explanation: Class Membership Model
For Larch, the class membership model will be set up as yet another discrete choice model.
In this case, the choices are not the ultimate choices, but instead are the latent classes.
To remain consistent with the Biogeme example, we'll set up this model with only a single
constant that determines class membership. Unlike Biogeme, this class membership will
be represented with an MNL model, not a simple direct probability.
End of explanation
from larch.model.latentclass import LatentClassModel
m = LatentClassModel(mk, {1:m1, 2:m2})
Explanation: The utility function of the first class isn't written here, which means it will implicitly
be set as 0.
Latent Class Model
Now we're ready to create the latent class model itself, by assembling the components
we created above. The constructor for the LatentClassModel takes two arguments,
a class membership model, and a dictionary of class models, where the keys in the
dictionary correspond to the identifying codes from the utility functions we wrote
for the class membership model.
End of explanation
m.load_data()
m.dataframes.info(verbose=1)
Explanation: The we'll load the data needed for our models using the load_data method.
This step will assemble the data needed, and convert it to floating point
format as required.
End of explanation
result = m.maximize_loglike()
result
Explanation: Only the data actually needed by the models has been converted, which may help
keep memory usage down on larger models. You may also note that the loaded
dataframes no longer reports that it is "not computational-ready".
To estimate the model, we'll use the maximize_loglike method. When run
in Jupyter, a live-view report of the parameters and log likelihood is displayed.
End of explanation
m.loglike_null()
Explanation: To complete our analysis, we can compute the log likelihood at "null" parameters.
End of explanation
m.calculate_parameter_covariance()
m.covariance_matrix
m.robust_covariance_matrix
Explanation: And the parameter covariance matrixes.
End of explanation
report = larch.Reporter("Latent Class Example")
Explanation: Reporting Results
And then generate a report of the estimation statistics. Larch includes a Reporter class
to help you assemble a report containing the relevant output you want.
End of explanation
report << "# Parameter Estimates"
Explanation: Pipe into the report section headers in markdown format (use one hash for top level
headings, two hashes for lower levels, etc.)
End of explanation
report << m.pf
Explanation: You can also pipe in dataframes directly, include the pf parameter frame from the model.
End of explanation
report << "# Estimation Statistics"
report << m.estimation_statistics()
report << "# Parameter Covariance"
report << "## Typical Parameter Covariance"
report << m.covariance_matrix
report << "## Robust Parameter Covariance"
report << m.robust_covariance_matrix
report << "# Utility Functions"
report << "## Class 1"
report << "### Formulae"
report << m1.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m1.utility_functions(resolve_parameters=True)
report << "## Class 2"
report << "### Formulae"
report << m1.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m1.utility_functions(resolve_parameters=True)
Explanation: And a selection of pre-formatted summary sections.
End of explanation
report.save('latent-class-example-report.html', overwrite=True)
Explanation: In addition to reviewing report sections in a Jupyter notebook, the
entire report can be saved to an HTML file.
End of explanation |
3,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
データの取得方法
ここではQuandl.comからのデータを受け取っています。今回入手した日経平均株価は、
時間、開始値、最高値、最低値、終値のデータを入手していますが、古いデータは終値しかないようですので、終値を用います。
*** TODO いつからデータを入手することが最も効果的かを考える。(処理時間と制度に影響が出るため)
Step1: 抜けデータが目立ったため、週単位でのデータを入手します
Step2: データの用い方
必要となるpythonパッケージのインポートを行っています。
*** TODO 実装はClojureで行いため、これに相当するパッケージを検索、作成を行う
Step3: 以下のグラフから、2000年ごろのデータからの推測でも十分に予測が行える可能性が伺えます。
Step4: ARIMAモデルでモデル推定を行うための下準備として、株価の変化量を取得します。
Step5: AICを求めてモデルの良さを計算しますが、やや時間(約三分)がかかってしまします。
(SARIMAモデルでこれを行うと、更に時間がかかります)
*** TODO 実行時間の計測と最適化・マシンスペックの向上と性能の関係の調査
Step6: 先程の実行結果から、AR=2, MA=2という値の場合が最も良いモデルになることがわかりました。
Step7: 比較のためSARIMAモデルではなく、ARIMAモデルでの推定を行ってみます。
こちらの実行はそれほど時間がかかりません。
Step8: 予測のブレがあまりないことが伺えます
Step9: SARIMAモデルでの推定を行ってみます。
ARIMAモデルの実行がそれほど時間がかからなかったのに対して、SARIMAモデルはやや時間がかかること、Wariningが出ることが難点です。
Step10: おおよそ見た限りではARIMAモデルと大差はないようですが、他の論文を読む限りではこちらの手法のほうが推測が上手く行くようです。
*** TODO データを少なくした場合の実行結果
Step11: 以下が予測を結合したもの
Step12: 青が実測値、赤が予測値です。それに近い値を計測できたのではないでしょうか?
後のために関数化しておく | Python Code:
import quandl
data = quandl.get('NIKKEI/INDEX')
data[:5]
data_normal = (((data['Close Price']).to_frame())[-10000:-1])['Close Price']
data_normal[-10:-1] # 最新のデータ10件を表示
Explanation: データの取得方法
ここではQuandl.comからのデータを受け取っています。今回入手した日経平均株価は、
時間、開始値、最高値、最低値、終値のデータを入手していますが、古いデータは終値しかないようですので、終値を用います。
*** TODO いつからデータを入手することが最も効果的かを考える。(処理時間と制度に影響が出るため)
End of explanation
data_normal = data_normal.fillna(method='pad').resample('W-MON').fillna(method='pad')
data_normal[:5]
type(data_normal.index[0])
data_normal.index
Explanation: 抜けデータが目立ったため、週単位でのデータを入手します
End of explanation
import numpy as np
import pandas as pd
from scipy import stats
from pandas.core import datetools
# grapgh plotting
from matplotlib import pylab as plt
import seaborn as sns
%matplotlib inline
# settings graph size
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15,6
# model
import statsmodels.api as sm
Explanation: データの用い方
必要となるpythonパッケージのインポートを行っています。
*** TODO 実装はClojureで行いため、これに相当するパッケージを検索、作成を行う
End of explanation
plt.plot(data_normal)
Explanation: 以下のグラフから、2000年ごろのデータからの推測でも十分に予測が行える可能性が伺えます。
End of explanation
# ARIMA model prediction ... (This is self thought (not automatically))
diff = data_normal - data_normal.shift()
diff = diff.dropna()
diff.head()
# difference plot
plt.plot(diff)
Explanation: ARIMAモデルでモデル推定を行うための下準備として、株価の変化量を取得します。
End of explanation
# automatically ARIMA prediction function (using AIC)
resDiff = sm.tsa.arma_order_select_ic(diff, ic='aic', trend='nc')
# few Times ...(orz...)
Explanation: AICを求めてモデルの良さを計算しますが、やや時間(約三分)がかかってしまします。
(SARIMAモデルでこれを行うと、更に時間がかかります)
*** TODO 実行時間の計測と最適化・マシンスペックの向上と性能の関係の調査
End of explanation
resDiff
# search min
resDiff['aic_min_order']
Explanation: 先程の実行結果から、AR=2, MA=2という値の場合が最も良いモデルになることがわかりました。
End of explanation
# we found x = x, y= y autopmatically
from statsmodels.tsa.arima_model import ARIMA
ARIMAx_1_y = ARIMA(data_normal,
order=(resDiff['aic_min_order'][0], 1,
resDiff['aic_min_order'][1])).fit(dist=False)
# AR = resDiff[...][0] / I = 1 / MA = resDiff[...][1]
ARIMAx_1_y.params
Explanation: 比較のためSARIMAモデルではなく、ARIMAモデルでの推定を行ってみます。
こちらの実行はそれほど時間がかかりません。
End of explanation
# check Residual error (... I think this is "White noise")
# this is not Arima ... (Periodicity remained)
resid = ARIMAx_1_y.resid
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
# ok?
# We test SARIMA_model
Explanation: 予測のブレがあまりないことが伺えます
End of explanation
# predict SARIMA model by myself (not automatically)
import statsmodels.api as sm
SARIMAx_1_y_111 = sm.tsa.SARIMAX(data_normal,
order=(2,1,2),seasonal_order=(1,1,1,12))
SARIMAx_1_y_111 = SARIMAx_1_y_111.fit()
# order ... from ARIMA model // seasonal_order ... 1 1 1 ... ?
print(SARIMAx_1_y_111.summary())
# maybe use "Box-Jenkins method" ...
# https://github.com/statsmodels/statsmodels/issues/3620 for error
Explanation: SARIMAモデルでの推定を行ってみます。
ARIMAモデルの実行がそれほど時間がかからなかったのに対して、SARIMAモデルはやや時間がかかること、Wariningが出ることが難点です。
End of explanation
# check Residual error
residSARIMA = SARIMAx_1_y_111.resid
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(residSARIMA.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(residSARIMA, lags=40, ax=ax2)
# prediction
pred = SARIMAx_1_y_111.predict(start = 1, end = '2018-01-15')
# (print(SARIMAx_1_y_111.__doc__))
# We should be able to forecast beyond the end of the index,
# but for some reason that raises an error, so we only predict over the existing index range
# TODO: find the cause of the error
# plot real data and predict data
plt.plot(data_normal[:-150:-1])
plt.plot(pred[:-150:-1], "r")
Explanation: At a glance there is not much difference from the ARIMA model, but according to other papers this approach tends to forecast better.
*** TODO check the results when less data is used
End of explanation
data_extra = pd.concat( [data_normal, pred[data_normal.index[-1] + 1:]] )
plt.plot(data_extra[:-150:-1])
Explanation: Below, the forecast is concatenated with the observed series
End of explanation
# required imports
import quandl
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

def get_data(quandl_name):
    data = quandl.get(quandl_name)
    return data

def set_data(data):
    data_normal = (((data['Close Price']).to_frame())[-10000:-1])['Close Price']
    data_normal = data_normal.fillna(method='pad').resample('W-MON').fillna(method='pad')
    return data_normal

def aic(diff):
    # grid-search ARMA orders by AIC
    return sm.tsa.arma_order_select_ic(diff, ic='aic', trend='nc')

def sarima(data_normal):
    # pick (AR, MA) orders on the differenced series, then build the SARIMA model
    diff = (data_normal - data_normal.shift()).dropna()
    resDiff = aic(diff)['aic_min_order']
    ar, ma = resDiff[0], resDiff[1]
    model = sm.tsa.SARIMAX(data_normal, order=(int(ar), 1, int(ma)),
                           seasonal_order=(1, 1, 1, 12))
    return model

def pred_data(model, data_normal, predict_date):
    fitted = model.fit()
    print(fitted.summary())
    pred = fitted.predict(start=1, end=predict_date)
    return pd.concat([data_normal, pred[data_normal.index[-1] + 1:]])

# putting it all together
def predict_data(quandl_name, predict_date):
    data_normal = set_data(get_data(quandl_name))
    sarima_model = sarima(data_normal)
    return pred_data(sarima_model, data_normal, predict_date)
predict_res = predict_data('NIKKEI/INDEX','2018-01-15')
plt.plot(predict_res[:-150:-1])
Explanation: Blue is the observed series and red is the forecast. The forecast tracks the observations fairly closely.
Finally, we wrap the whole procedure into functions for later use.
End of explanation |
3,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is used to profile https
Step1: With View options set to
Step2: | Python Code:
!pip install pyprof2calltree
!brew install qcachegrind
%%writefile test_41.py
from galgebra.ga import Ga
GA = Ga('e*1|2|3')
a = GA.mv('a', 'vector')
b = GA.mv('b', 'vector')
c = GA.mv('c', 'vector')
def cross(x, y):
return (x ^ y).dual()
xx = cross(a, cross(b, c))
!python -m cProfile -o test_41.cprof test_41.py
!python -m pyprof2calltree -i test_41.cprof -k
Explanation: This is used to profile https://github.com/pygae/galgebra/issues/41 .
The following code uses https://github.com/pygae/galgebra/tree/new_printer .
End of explanation
%%writefile test_41.py
from galgebra.ga import Ga
GA = Ga('e*1|2|3', norm=False)
a = GA.mv('a', 'vector')
b = GA.mv('b', 'vector')
c = GA.mv('c', 'vector')
def cross(x, y):
return (x ^ y).dual()
xx = cross(a, cross(b, c))
!python -m cProfile -o test_41.cprof test_41.py
!python -m pyprof2calltree -i test_41.cprof -k
Explanation: With View options set to:
The profiling result is like:
End of explanation
from galgebra.ga import Ga
GA = Ga('e*1|2|3')
a = GA.mv('a', 'vector')
b = GA.mv('b', 'vector')
c = GA.mv('c', 'vector')
def cross(x, y):
return (x ^ y).dual()
xx = cross(a, cross(b, c))
xx
GA.E()
GA.I()
from galgebra.ga import Ga
GA = Ga('e*1|2|3', norm=False)
a = GA.mv('a', 'vector')
b = GA.mv('b', 'vector')
c = GA.mv('c', 'vector')
def cross(x, y):
return (x ^ y).dual()
xx = cross(a, cross(b, c))
xx
GA.E()
GA.I()
Explanation:
End of explanation |
3,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy, sebagai salah satu library yang saling penting di pemrograman yang menggunakan matematika dan angka, memberikan kemudahan dalam melakukan operasi aljabar matriks. Bila deklarasi array a = [[1,0],[0,1]] memberikan array 2D biasa, maka dengan Numpy, a = np.array([[1,0],[0,1]]) memberikan objek a yang dapat dilakukan operasi aljabar matriks seperti penjumlahan, pengurangan, perkalian, transpose, dll.
Pada bab ini, tidak akan dibahas Numpy secara keseluruhan, namun hanya dibahas tentang matriks dan sedikit Aljabar Linear dengan Numpy.
Instalasi
Untuk menggunakan Numpy, kita harus melakukan import numpy karena ia merupakan suatu library yang bukan merupakan standar Python. Apabila Anda menginstall Python dengan Anaconda, kemungkinan besar Numpy sudah terinstal di mesin Anda dan terhubung dengan Python dan Anda bisa lanjut ke section berikutnya. Apabila Anda menginstall Python melalui distribusi resmi, kemungkinan Anda belum bisa menggunakan Numpy. Untuk menginstallnya, Anda bisa menggunakan package manager yang Anda gunakan, bisa pip atau conda atau yang lainnya.
Untuk menginstall dengan pip, cukup buka terminal (Windows
Step1: Kalau yang dibutuhkan hanya salah satu modul dari numpy dan tidak semuanya, import bisa dilakukan dengan perintah from ... import ... sebagai berikut.
Step2: Array
Array pada Numpy berbeda dengan array pada module bawaan Python. Pada module bawaan Python, array.array hanya memiliki fungsi terbatas. Pada Numpy, array disebut ndarray (dulu) atau array (alias).
Kita bisa membuat array dengan np.array(). Beberapa matriks khusus juga bisa langsung dibuat.
Step3: Beberapa kesalahan dalam membuat array seringkali terjadi karena kesalahan pada kurung. Ingat bahwa array ini adalah suatu fungsi, sehingga membutuhkan kurung biasa atau parenthesis sedangkan array membutuhkan kurung siku atau brackets. Pada prakteknya, ketika dimensi array lebih dari satu, setelah mengambil brackets di paling luar, menggunakan parenthesis lagi di dalam.
Step4: Vektor
Secara matematis, vektor merupakan bentuk khusus dari matriks. Namun, di Numpy vektor bisa dideklarasikan dengan cara yang berbeda. Dua cara yang cukup sering digunakan adalah dengan arange dan dengan linspace. Perbedaan keduanya ada pendeklarasiannya. Bila arange mengambil input start, stop, dan step, maka linspace ini mengambil input start, stop, dan banyaknya entri. Ide dari linspace ini adalah membuat satu vektor yang jarak antar entrinya sama.
Step5: Manipulasi Bentuk
Sebelum memanipulasi bentuk, kita bisa mengecek 'ukuran' dari matriks dengan np.shape; panjang dengan np.dim dan banyaknya entri dengan np.size.
Step6: Karena shape, dim, size dan lain-lain merupakan fungsi di numpy, bisa juga dipanggil dengan menambahkan fungsinya di belakang objek seperti contoh berikut.
Step7: Berikutnya, untuk mengubah bentuk matriks (reshape) kita bisa menyusun ulang matriks dengan perintah np.reshape. Seperti halnya shape, ndim, dan size, kita bisa memanggil reshape di depan sebagai fungsi dan di belakang sebagai atribut. Matriks juga punya
Step8: Bila reshape dan transpose ini bersifat non-destruktif (tidak mengubah objek aslinya), maka untuk mengubah bentuk matriks dengan mengubah objeknya bisa dilakukan dengan resize.
Step9: Melakukan Iterasi dengan Matrix
Pada matriks, kita bisa melakukan iterasi berdasarkan elemen-elemen matriks. Misalkan axis pertama (row) atau bahkan seluruh elemennya.
Step10: Indexing dan Slicing
Mengambil satu elemen dari matrix mirip dengan mengambil dari sequence
Step11: Operasi Matriks
Operasi Dasar
Pada operasi matriks dengan menggunakan numpy, satu hal yang perlu diperhatikan adalah ada dua jenis operasi, yaitu operasi element-wise dan operasi matriks. Operasi element-wise menggunakan tanda operasi seperti halnya pada data berbentuk integer atau float. Operasi perkalian matriks menggunakan tanda @ untuk versi Python yang lebih tinggi dari 3.5; untuk versi sebelumnya harus menggunakan fungsi np.matmul dengan dua input.
Step12: Perkalian Matriks
Step13: Perkalian Matriks dengan 'vektor' | Python Code:
import numpy as np
Explanation: Numpy, one of the most important libraries for programs that work with mathematics and numbers, makes matrix-algebra operations convenient. Where the declaration a = [[1,0],[0,1]] gives an ordinary 2D list, with Numpy a = np.array([[1,0],[0,1]]) gives an object a that supports matrix-algebra operations such as addition, subtraction, multiplication, transpose, and so on.
This chapter does not cover Numpy as a whole; it only discusses matrices and a little linear algebra with Numpy.
Installation
To use Numpy we have to import numpy, because it is a library that is not part of the Python standard distribution. If you installed Python with Anaconda, Numpy is most likely already installed on your machine and connected to Python, and you can skip to the next section. If you installed Python from the official distribution, you may not be able to use Numpy yet. To install it, you can use whichever package manager you use, whether pip, conda or something else.
To install with pip, simply open a terminal (Windows: command prompt) and type the usual command (typically pip install numpy).
If you use miniconda or anaconda, you can install with the corresponding command (typically conda install numpy).
If you happen to be on Ubuntu 20.04 LTS, you can install it through aptitude (typically sudo apt install python3-numpy).
Basics
To use Numpy we need to import it. The import is usually given the abbreviation np, as below.
End of explanation
from scipy import some_module
some_module.some_function()
Explanation: Kalau yang dibutuhkan hanya salah satu modul dari numpy dan tidak semuanya, import bisa dilakukan dengan perintah from ... import ... sebagai berikut.
End of explanation
import numpy as np
a = np.array([[1,2,1],[1,0,1]])
b = np.arange(7)
c = np.arange(3,10)
print("a = ")
print(a)
print("b = ")
print(b)
print(c)
print()
# Special matrix
I = np.ones(3)
O = np.zeros(4)
I2 = np.ones((2,4))
O2 = np.zeros((3,3))
print("spesial matrix")
print("satu =",I)
print("nol = ",O)
print("matriks isi 1, dua dimensi")
print(I2)
print("nol dua dimensi =")
print(O2)
Explanation: Array
Array pada Numpy berbeda dengan array pada module bawaan Python. Pada module bawaan Python, array.array hanya memiliki fungsi terbatas. Pada Numpy, array disebut ndarray (dulu) atau array (alias).
Kita bisa membuat array dengan np.array(). Beberapa matriks khusus juga bisa langsung dibuat.
End of explanation
# Salah
x = np.array(1,2,3)
# Benar
x = np.array([1,2,3])
y = np.array([[1,0,0],[0,1,0]])
z = np.array([(1,0,0),(0,1,0)])
y-z
Explanation: Beberapa kesalahan dalam membuat array seringkali terjadi karena kesalahan pada kurung. Ingat bahwa array ini adalah suatu fungsi, sehingga membutuhkan kurung biasa atau parenthesis sedangkan array membutuhkan kurung siku atau brackets. Pada prakteknya, ketika dimensi array lebih dari satu, setelah mengambil brackets di paling luar, menggunakan parenthesis lagi di dalam.
End of explanation
# deklarasi biasa dengan menuliskan inputnya satu demi satu
vektor = np.array([1,5])
print(vektor)
# deklarasi dengan arange: start, stop, step
vektor1 = np.arange(start=1, stop=10, step=1)
vektor2 = np.arange(0,9,1)
print(vektor1, vektor2)
# deklarasi dengan linspace: start, stop, banyak titik
vektor3 = np.linspace(1,10,4)
print(vektor3)
Explanation: Vektor
Secara matematis, vektor merupakan bentuk khusus dari matriks. Namun, di Numpy vektor bisa dideklarasikan dengan cara yang berbeda. Dua cara yang cukup sering digunakan adalah dengan arange dan dengan linspace. Perbedaan keduanya ada pendeklarasiannya. Bila arange mengambil input start, stop, dan step, maka linspace ini mengambil input start, stop, dan banyaknya entri. Ide dari linspace ini adalah membuat satu vektor yang jarak antar entrinya sama.
End of explanation
np.shape(a)
np.ndim(a)
np.size(a)
Explanation: Manipulasi Bentuk
Sebelum memanipulasi bentuk, kita bisa mengecek 'ukuran' dari matriks dengan np.shape; panjang dengan np.dim dan banyaknya entri dengan np.size.
End of explanation
a.shape
a.ndim
a.size
Explanation: Karena shape, dim, size dan lain-lain merupakan fungsi di numpy, bisa juga dipanggil dengan menambahkan fungsinya di belakang objek seperti contoh berikut.
End of explanation
b = np.reshape(y,(1,6)) # reshape menjadi ukuran (1,6)
c = z.reshape(6,1) # reshape menjadi ukuran (6,1)
d = c.T # return berupa transpose dari matriks
e = np.transpose(d)
print(b)
print(c)
b.shape, c.shape, d.shape, e.shape
Explanation: Berikutnya, untuk mengubah bentuk matriks (reshape) kita bisa menyusun ulang matriks dengan perintah np.reshape. Seperti halnya shape, ndim, dan size, kita bisa memanggil reshape di depan sebagai fungsi dan di belakang sebagai atribut. Matriks juga punya
End of explanation
print(a)
a.reshape((1,6))
print("setelah reshape")
print(a)
print(a)
a.resize((1,6))
print("setelah resize")
print(a)
Explanation: Bila reshape dan transpose ini bersifat non-destruktif (tidak mengubah objek aslinya), maka untuk mengubah bentuk matriks dengan mengubah objeknya bisa dilakukan dengan resize.
End of explanation
a = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
a
for row in a:
print("baris:",row)
for elemen in a.flat:
print(elemen)
Explanation: Melakukan Iterasi dengan Matrix
Pada matriks, kita bisa melakukan iterasi berdasarkan elemen-elemen matriks. Misalkan axis pertama (row) atau bahkan seluruh elemennya.
End of explanation
# Indexing
A_1D = np.array([0,1,1,2,3,5,8,13,21])
B_2D = np.array([A_1D,A_1D+2])
C_3D = np.array([B_2D,B_2D*2,B_2D*3,B_2D*4])
C_3D
print(A_1D[4],B_2D[0,0],C_3D[0,0,0])
# Contoh slicing
B_2D
# slicing bisa diambil dengan satu koordinat
B_2D[1]
B_2D[1,:]
B_2D[:,1]
# slicing pada matriks 3D
C_3D
C_3D[1]
C_3D[1,:,:]
C_3D[:,1,:]
C_3D[:,:,1]
# slicing lebih dari satu kolom/baris
A = np.linspace(0,9,10)
B, C = A + 10, A + 20
X = np.array([A,B,C])
X
X[0:2,3], X[0:3,3], X[1:3,3]
X[1:3,1:5]
Explanation: Indexing dan Slicing
Mengambil satu elemen dari matrix mirip dengan mengambil dari sequence: kita cuma perlu tahu koordinatnya. Apabila matriks satu dimensi, maka koordinat cuma perlu satu dimensi. Apabila matriks 2D, maka koordinatnya perlu dua input, dst. Sementara itu, mengambil beberapa elemen dari satu array sering disebut dengan slicing (memotong).
End of explanation
A = np.array([1,2,3,4,5,6])
B = np.array([[1,1,1],[1,0,1],[0,1,1]])
C = np.array([[1,2,3],[4,5,6],[7,8,9]])
D = np.array([[1,1],[1,0]])
2*A
A+3*A + 3*A**2+A**3
A**2
C**2
A < 5
np.sin(A)
Explanation: Operasi Matriks
Operasi Dasar
Pada operasi matriks dengan menggunakan numpy, satu hal yang perlu diperhatikan adalah ada dua jenis operasi, yaitu operasi element-wise dan operasi matriks. Operasi element-wise menggunakan tanda operasi seperti halnya pada data berbentuk integer atau float. Operasi perkalian matriks menggunakan tanda @ untuk versi Python yang lebih tinggi dari 3.5; untuk versi sebelumnya harus menggunakan fungsi np.matmul dengan dua input.
End of explanation
B @ C
d
B @ D
print(B.shape,D.shape)
Explanation: Perkalian Matriks
End of explanation
x1 = np.ones(3)
x2 = np.ones((3,1))
x3 = np.ones((1,3))
# Perkalian Matriks 3x3 dengan vektor 3x1
B @ x1
B @ x2
B @ x3
Explanation: Perkalian Matriks dengan 'vektor'
End of explanation |
3,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Importing our wordlists
Here we import all of our wordlists and add them to an array which me can merge at the end.
This wordlists should not be filtered at this point. However they should all contain the same columns to make merging easier for later.
Step1: Dictcc
Download the dictionary from http
Step2: Use pandas library to import csv file
Step3: Preview a few entries of the wordlist
Step4: We only need "Word" and "WordType" column
Step5: Convert WordType Column to a pandas.Categorical
Step6: List the current distribution of word types in dictcc dataframe
Step7: Add dictcc corpus to our wordlists array
Step8: Moby
Download the corpus from http
Step9: sort out the nouns, verbs and adjectives
Step10: remove the trailing stuff and concatenate the nouns, verbs and adjectives
Step11: Add moby corpus to wordlists array
Step12: Combine all wordlists
Step13: Filter for results that we want
We want to remove words that aren't associated with a type (null WordType)
Step14: We want to remove words that contain non word characters (whitespace, hypens, etc.)
Step15: We want results that are less than 'x' letters long (x+3 for verbs since they are in their infinitive form in the dictcc wordlist)
Step16: We want to remove all duplicates
Step17: Load our wordlists into nltk
Step18: NLTK
Use NLTK to help us merge our wordlists
Step19: Make Some Placewords Magic Happen | Python Code:
wordlists = []
Explanation: Importing our wordlists
Here we import all of our wordlists and add them to an array which we can merge at the end.
These wordlists should not be filtered at this point. However, they should all contain the same columns to make merging easier later.
End of explanation
!head -n 20 de-en.txt
Explanation: Dictcc
Download the dictionary from http://www.dict.cc/?s=about%3Awordlist
Print out the first 20 lines of the dictionary
End of explanation
import pandas as pd
dictcc_df = pd.read_csv("de-en.txt",
sep='\t',
skiprows=8,
header=None,
names=["GermanWord","Word","WordType"])
Explanation: Use pandas library to import csv file
End of explanation
dictcc_df[90:100]
Explanation: Preview a few entries of the wordlist
End of explanation
dictcc_df = dictcc_df[["Word", "WordType"]][:].copy()
Explanation: We only need "Word" and "WordType" column
End of explanation
word_types = dictcc_df["WordType"].astype('category')
dictcc_df["WordType"] = word_types
# show data types of each column in the dataframe
dictcc_df.dtypes
Explanation: Convert WordType Column to a pandas.Categorical
End of explanation
# nltk TaggedCorpusParses requires uppercase WordType
dictcc_df["WordType"] = dictcc_df["WordType"].str.upper()
dictcc_df["WordType"].value_counts().head()
Explanation: List the current distribution of word types in dictcc dataframe
End of explanation
wordlists.append(dictcc_df)
Explanation: Add dictcc corpus to our wordlists array
End of explanation
# the readme file in `nltk/corpora/moby/mpos` gives some information on how to parse the file
result = []
# replace all DOS line endings '\r' with newlines then change encoding to UTF8
moby_words = !cat nltk/corpora/moby/mpos/mobyposi.i | iconv --from-code=ISO88591 --to-code=UTF8 | tr -s '\r' '\n' | tr -s '×' '/'
result.extend(moby_words)
moby_df = pd.DataFrame(data = result, columns = ['Word'])
moby_df.tail(10)
Explanation: Moby
Download the corpus from http://icon.shef.ac.uk/Moby/mpos.html
Perform some basic cleanup on the wordlist
End of explanation
# Matches nouns
nouns = moby_df[moby_df["Word"].str.contains('/[Np]$')].copy()
nouns["WordType"] = "NOUN"
# Matches verbs
verbs = moby_df[moby_df["Word"].str.contains('/[Vti]$')].copy()
verbs["WordType"] = "VERB"
# Matches adjectives
adjectives = moby_df[moby_df["Word"].str.contains('/A$')].copy()
adjectives["WordType"] = "ADJ"
Explanation: sort out the nouns, verbs and adjectives
End of explanation
nouns["Word"] = nouns["Word"].str.replace(r'/N$','')
verbs["Word"] = verbs["Word"].str.replace(r'/[Vti]$','')
adjectives["Word"] = adjectives["Word"].str.replace(r'/A$','')
# Merge nouns, verbs and adjectives into one dataframe
moby_df = pd.concat([nouns,verbs,adjectives])
Explanation: remove the trailing stuff and concatenate the nouns, verbs and adjectives
End of explanation
wordlists.append(moby_df)
Explanation: Add moby corpus to wordlists array
End of explanation
wordlist = pd.concat(wordlists)
Explanation: Combine all wordlists
End of explanation
wordlist_filtered = wordlist[wordlist["WordType"].notnull()]
Explanation: Filter for results that we want
We want to remove words that aren't associated with a type (null WordType)
End of explanation
# we choose [a-z] here and not [A-Za-z] because we do _not_
# want to match words starting with uppercase characters.
# ^to matches verbs in the infinitive from `dictcc`
word_chars = r'^[a-z]+$|^to\s'
is_word_chars = wordlist_filtered["Word"].str.contains(word_chars, na=False)
wordlist_filtered = wordlist_filtered[is_word_chars]
wordlist_filtered.describe()
wordlist_filtered["WordType"].value_counts()
Explanation: We want to remove words that contain non word characters (whitespace, hypens, etc.)
End of explanation
lt_x_letters = (wordlist_filtered["Word"].str.len() < 9) |\
((wordlist_filtered["Word"].str.contains('^to\s\w+\s')) &\
(wordlist_filtered["Word"].str.len() < 11)\
)
wordlist_filtered = wordlist_filtered[lt_x_letters]
wordlist_filtered.describe()
Explanation: We want results that are less than 'x' letters long (x+3 for verbs since they are in their infinitive form in the dictcc wordlist)
End of explanation
wordlist_filtered = wordlist_filtered.drop_duplicates("Word")
wordlist_filtered.describe()
wordlist_filtered["WordType"].value_counts()
Explanation: We want to remove all duplicates
End of explanation
# The TaggedCorpusReader likes to use the forward slash character '/'
# as seperator between the word and part-of-speech tag (WordType).
wordlist_filtered.to_csv("dictcc_moby.csv",index=False,sep="/",header=None)
from nltk.corpus import TaggedCorpusReader
from nltk.tokenize import WhitespaceTokenizer
nltk_wordlist = TaggedCorpusReader("./", "dictcc_moby.csv")
Explanation: Load our wordlists into nltk
End of explanation
# Our custom wordlist
import nltk
custom_cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in nltk_wordlist.tagged_words() if len(word) < 9 and word.isalpha)
# Brown Corpus
import nltk
brown_cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in nltk.corpus.brown.tagged_words() if word.isalpha() and len(word) < 9)
# Merge Nouns from all wordlists
nouns = set(brown_cfd["NN"]) | set(brown_cfd["NP"]) | set(custom_cfd["NOUN"])
# Lowercase all words to remove duplicates
nouns = set([noun.lower() for noun in nouns])
print("Total nouns count: " + str(len(nouns)))
# Merge Verbs from all wordlists
verbs = set(brown_cfd["VB"]) | set(brown_cfd["VBD"]) | set(custom_cfd["VERB"])
# Lowercase all words to remove duplicates
verbs = set([verb.lower() for verb in verbs])
print("Total verbs count: " + str(len(verbs)))
# Merge Adjectives from all wordlists
adjectives = set(brown_cfd["JJ"]) | set(custom_cfd["ADJ"])
# Lowercase all words to remove duplicates
adjectives = set([adjective.lower() for adjective in adjectives])
print("Total adjectives count: " + str(len(adjectives)))
Explanation: NLTK
Use NLTK to help us merge our wordlists
End of explanation
def populate_degrees(nouns):
degrees = {}
nouns_copy = nouns.copy()
for latitude in range(60):
for longtitude in range(190):
degrees[(latitude,longtitude)] = nouns_copy.pop()
return degrees
def populate_minutes(verbs):
minutes = {}
verbs_copy = verbs.copy()
for latitude in range(60):
for longtitude in range(60):
minutes[(latitude,longtitude)] = verbs_copy.pop()
return minutes
def populate_seconds(adjectives):
seconds = {}
adjectives_copy = adjectives.copy()
for latitude in range(60):
for longtitude in range(60):
seconds[(latitude,longtitude)] = adjectives_copy.pop()
return seconds
def populate_fractions(nouns):
fractions = {}
nouns_copy = nouns.copy()
for latitude in range(10):
for longtitude in range(10):
fractions[(latitude,longtitude)] = nouns_copy.pop()
return fractions
def placewords(degrees,minutes,seconds,fractions):
result = []
result.append(populate_degrees(nouns).get(degrees))
result.append(populate_minutes(verbs).get(minutes))
result.append(populate_seconds(adjectives).get(seconds))
result.append(populate_fractions(nouns).get(fractions))
return "-".join(result)
# Located at 50°40'47.9" N 10°55'55.2" E
ilmenau_home = placewords((50,10),(40,55),(47,55),(9,2))
print("Feel free to stalk me at " + ilmenau_home)
Explanation: Make Some Placewords Magic Happen
End of explanation |
3,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Collect some tweets
Annotate the tweets
Calculate the accuracy
Step1: Collect some data
Step2: Annotate the data
Start by tokenizing
Step3: Now tag the tokens with parts-of-speech labels
The default configuration is the Greedy Averaged Perceptron tagger (https
Step4: Evaluate the annotations
We must choose which parts of speech to evaluate. Let's focus on adjectives, which are useful for sentiment analysis, and proper nouns, which provide a set of potential events and topics.
JJ
Step5: These seem like dreadful results. Let's try a different NLP engine.
Stanford CoreNLP
Download | Python Code:
from pprint import pprint
Explanation: Outline
Collect some tweets
Annotate the tweets
Calculate the accuracy
End of explanation
# we'll use data from a job that collected tweets about parenting
tweet_bodies = [body for body in open('tweet_bodies.txt')]
# sanity checks
pprint(len(tweet_bodies))
# sanity checks
pprint(tweet_bodies[:10])
# lets do some quick deduplication
from duplicate_filter import duplicateFilter
## set the similarity threshold at 90%
dup_filter = duplicateFilter(0.9)
deduped_tweet_bodies = []
for id,tweet_body in enumerate(tweet_bodies):
if not dup_filter.isDup(id,tweet_body):
deduped_tweet_bodies.append(tweet_body)
pprint(deduped_tweet_bodies[:10])
Explanation: Collect some data
End of explanation
from nltk.tokenize import TweetTokenizer
tt = TweetTokenizer()
tokenized_deduped_tweet_bodies = [tt.tokenize(body) for body in deduped_tweet_bodies]
# sanity checks
len(tokenized_deduped_tweet_bodies)
pprint(tokenized_deduped_tweet_bodies[:2])
Explanation: Annotate the data
Start by tokenizing
End of explanation
from nltk.tag import pos_tag as pos_tagger
tagged_tokenized_deduped_tweet_bodies = [ pos_tagger(tokens) for tokens in tokenized_deduped_tweet_bodies]
pprint(tagged_tokenized_deduped_tweet_bodies[:2])
# let's look at the taxonomy of tags; in our case derived from the Penn treebank project
# (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.9.8216&rep=rep1&type=pdf)
import nltk
nltk.help.upenn_tagset()
# let's peek at the tag dictionary for our tagger
from nltk.tag.perceptron import PerceptronTagger
t = PerceptronTagger()
pprint(list(t.tagdict.items())[:10])
Explanation: Now tag the tokens with parts-of-speech labels
The default configuration is the Greedy Averaged Perceptron tagger (https://explosion.ai/blog/part-of-speech-pos-tagger-in-python)
End of explanation
adjective_tags = ['JJ','JJR','JJS']
pn_tags = ['NNP','NNPS']
tag_types = [('adj',adjective_tags),('PN',pn_tags)]
# print format: "POS: TOKEN --> TWEET TEXT"
for body,tweet_tokens,tagged_tokens in zip(deduped_tweet_bodies,tokenized_deduped_tweet_bodies,tagged_tokenized_deduped_tweet_bodies):
for token,tag in tagged_tokens:
if tag in adjective_tags:
#if tag in pn_tags:
print_str = '{}: {} --> {}'.format(tag,token,body)
print(print_str)
Explanation: Evaluate the annotations
We must choose which parts of speech to evaluate. Let's focus on adjectives, which are useful for sentiment analysis, and proper nouns, which provide a set of potential events and topics.
JJ: adjective or numeral, ordinal
JJR: adjective, comparative
JJS: adjective, superlative
NNP: noun, proper, singular
NNPS: noun, proper, plural
End of explanation
from corenlp_pywrap import pywrap
cn = pywrap.CoreNLP(url='http://localhost:9000', annotator_list=["pos"])
corenlp_results = []
for tweet_body in deduped_tweet_bodies:
try:
corenlp_results.append( cn.basic(tweet_body,out_format='json').json() )
except UnicodeEncodeError:
corenlp_results.append( {'sentences':[]} )
# pull out the tokens and tags
corenlp_tagged_tokenized_deduped_tweet_bodies = [ [(token['word'],token['pos']) for sentence in result['sentences'] for token in sentence['tokens']] for result in corenlp_results]
# print format: "POS: TOKEN --> TWEET TEXT"
for body,tagged_tokens in zip(deduped_tweet_bodies,corenlp_tagged_tokenized_deduped_tweet_bodies):
for token,tag in tagged_tokens:
#if tag in pn_tags:
if tag in adjective_tags:
print_str = '{}: {} --> {}'.format(tag,token,body)
print(print_str)
Explanation: These seem like dreadful results. Let's try a different NLP engine.
Stanford CoreNLP
Download:
http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
Then unzip. Start up the server from the unzipped directory:
$ java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000
End of explanation |
3,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I'm looking into doing a delta_sigma emulator. This is testing if the cat side works. Then I'll make an emulator for it.
Step1: Load up a snapshot at a redshift near the center of this bin.
Step2: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
Step3: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
Step4: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
Perform the below integral in each theta bin
Step5: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
Step6: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason. | Python Code:
from pearce.mocks import cat_dict
import numpy as np
from os import path
from astropy.io import fits
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])
zbin=1
a = 0.81120
z = 1.0/a - 1.0
Explanation: I'm looking into doing a delta_sigma emulator. This is testing if the cat side works. Then I'll make an emulator for it.
End of explanation
print z
cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
cat.load_catalog(a, particles=True)
cat.load_model(a, 'redMagic')
params = cat.model.param_dict.copy()
#params['mean_occupation_centrals_assembias_param1'] = 0.0
#params['mean_occupation_satellites_assembias_param1'] = 0.0
params['logMmin'] = 13.4
params['sigma_logM'] = 0.1
params['f_c'] = 1.0
params['alpha'] = 1.0
params['logM1'] = 14.0
params['logM0'] = 12.0
print params
cat.populate(params)
nd_cat = cat.calc_analytic_nd()
print nd_cat
rp_bins = np.logspace(-1.1, 1.5, 16) #binning used in buzzard mocks
rpoints = (rp_bins[1:]+rp_bins[:-1])/2
ds = cat.calc_ds(rp_bins)
plt.plot(rpoints, ds)
plt.loglog();
Explanation: Load up a snapshot at a redshift near the center of this bin.
End of explanation
xi = cat.calc_xi(rp_bins, do_jackknife=False)  # use the same rp_bins defined above
Explanation: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
End of explanation
import george
from george.kernels import ExpSquaredKernel
kernel = ExpSquaredKernel(0.05)
gp = george.GP(kernel)
gp.compute(np.log10(rpoints))
print xi
xi[xi<=0] = 1e-2 #ack
from scipy.stats import linregress
m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
plt.plot(rpoints, (2.22353827e+03)*(rpoints**(-1.88359)))
#plt.plot(rpoints, b2*(rpoints**m2))
plt.scatter(rpoints, xi)
plt.loglog();
plt.plot(np.log10(rpoints), b+(np.log10(rpoints)*m))
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
#plt.loglog();
print m,b
rpoints_dense = np.logspace(-0.5, 2, 500)
plt.scatter(rpoints, xi)
plt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))
plt.loglog();
Explanation: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
End of explanation
#a subset of the data from above. I've verified it's correct, but we can look again.
wt_redmagic = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/mcmc/buzzard2_wt_%d%d.npy'%(zbin,zbin))
Explanation: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
Perform the below integral in each theta bin:
$$ w(\theta) = W \int_0^\infty du \xi \left(r = \sqrt{u^2 + \bar{x}^2(z)\theta^2} \right) $$
Where $\bar{x}$ is the median comoving distance to z.
End of explanation
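As a quick cross-check of the expression above, the line-of-sight integral can also be evaluated numerically. The sketch below is not one of the original cells: it assumes a pure power-law xi(r) with the amplitude and slope from the linregress fit earlier, and uses placeholder values for the prefactor W and the median comoving distance, both of which are set elsewhere in the analysis.
import numpy as np
from scipy import integrate
A_fit, m_fit = 2.22353827e+03, -1.88359   # power-law fit to xi from the cells above
W_prefactor = 1.0                         # placeholder survey prefactor
x_median = 2000.0                         # placeholder median comoving distance [Mpc]
def xi_power_law(r):
    return A_fit * r**m_fit
def w_theta(theta_rad):
    # integrate xi(sqrt(u^2 + (x*theta)^2)) along the line of sight u
    integrand = lambda u: xi_power_law(np.sqrt(u**2 + (x_median * theta_rad)**2))
    val, _ = integrate.quad(integrand, 0, np.inf)
    return W_prefactor * val
print(w_theta(np.radians(0.1)))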
from scipy.special import gamma
def wt_analytic(m,b,t,x):
return W*b*np.sqrt(np.pi)*(t*x)**(1 + m)*(gamma(-(1./2) - m/2.)/(2*gamma(-(m/2.))) )
plt.plot(tpoints, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
#plt.plot(tpoints_rm, W.to("1/Mpc").value*mathematica_calc, label = 'Mathematica Calc')
plt.plot(tpoints, wt_analytic(m,10**b, np.radians(tpoints), x),label = 'Mathematica Calc' )
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
wt_redmagic/(W.to("1/Mpc").value*mathematica_calc)
import cPickle as pickle
with open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:
xi_rm = pickle.load(f)
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].mbins
xi_rm.metrics[0].cbins
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(rpoints, xi)
for i in xrange(3):
for j in xrange(3):
plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])
plt.loglog();
plt.subplot(211)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
#plt.ylim([0,10])
plt.subplot(212)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
plt.ylim([2.0,4])
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].rbins #Mpc/h
Explanation: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
End of explanation
x = cat.cosmology.comoving_distance(z)*a
#ubins = np.linspace(10**-6, 10**2.0, 1001)
ubins = np.logspace(-6, 2.0, 51)
ubc = (ubins[1:]+ubins[:-1])/2.0
#NLL
def liklihood(params, wt_redmagic,x, tpoints):
#print _params
#prior = np.array([ PRIORS[pname][0] < v < PRIORS[pname][1] for v,pname in zip(_params, param_names)])
#print param_names
#print prior
#if not np.all(prior):
# return 1e9
#params = {p:v for p,v in zip(param_names, _params)}
#cat.populate(params)
#nd_cat = cat.calc_analytic_nd(parmas)
#wt = np.zeros_like(tpoints_rm[:-5])
#xi = cat.calc_xi(r_bins, do_jackknife=False)
#m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
#if np.any(xi < 0):
# return 1e9
#kernel = ExpSquaredKernel(0.05)
#gp = george.GP(kernel)
#gp.compute(np.log10(rpoints))
#for bin_no, t_med in enumerate(np.radians(tpoints_rm[:-5])):
# int_xi = 0
# for ubin_no, _u in enumerate(ubc):
# _du = ubins[ubin_no+1]-ubins[ubin_no]
# u = _u*unit.Mpc*a
# du = _du*unit.Mpc*a
#print np.sqrt(u**2+(x*t_med)**2)
# r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h
#if r > unit.Mpc*10**1.7: #ignore large scales. In the full implementation this will be a transition to a bias model.
# int_xi+=du*0
#else:
# the GP predicts in log, so i predict in log and re-exponate
# int_xi+=du*(np.power(10, \
# gp.predict(np.log10(xi), np.log10(r.value), mean_only=True)[0]))
# int_xi+=du*(10**b)*(r.to("Mpc").value**m)
#print (((int_xi*W))/wt_redmagic[0]).to("m/m")
#break
# wt[bin_no] = int_xi*W.to("1/Mpc")
wt = wt_analytic(params[0],params[1], tpoints, x.to("Mpc").value)
chi2 = np.sum(((wt - wt_redmagic[:-5])**2)/(1e-3*wt_redmagic[:-5]) )
#chi2=0
#print nd_cat
#print wt
#chi2+= ((nd_cat-nd_mock.value)**2)/(1e-6)
#mf = cat.calc_mf()
#HOD = cat.calc_hod()
#mass_bin_range = (9,16)
#mass_bin_size = 0.01
#mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
#mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
# np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
#chi2+=((13.35-np.log10(mean_host_mass))**2)/(0.2)
print chi2
return chi2 #nll
print nd_mock
print wt_redmagic[:-5]
import scipy.optimize as op
results = op.minimize(liklihood, np.array([-2.2, 10**1.7]),(wt_redmagic,x, tpoints_rm[:-5]))
results
#plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
plt.plot(tpoints_rm, wt_analytic(-1.88359, 2.22353827e+03,tpoints_rm, x.to("Mpc").value), label = 'Mathematica Calc')
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
plt.plot(np.log10(rpoints), np.log10(2.22353827e+03)+(np.log10(rpoints)*(-1.88)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
np.array([v for v in params.values()])
Explanation: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason.
End of explanation |
3,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 03
Step1: Accordingly, to use the ring road scenario for this tutorial, we specify its (string) names as follows
Step2: Another difference between SUMO and RLlib experiments is that, in RLlib experiments, the scenario classes do not need to be defined; instead users should simply name the scenario class they wish to use. Later on, an environment setup module will import the correct scenario class based on the provided names.
Step3: 2.2 Adding Trainable Autonomous Vehicles
The Vehicles class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from various get methods within this class.
The dynamics of vehicles in the Vehicles class can either be depicted by SUMO or by the dynamical methods located in flow/controllers. For human-driven vehicles, we use the IDM model for acceleration behavior, with exogenous Gaussian acceleration noise with std 0.2 m/s² to induce perturbations that produce stop-and-go behavior. In addition, we use the ContinuousRouter routing controller so that the vehicles may maintain their routes in closed networks.
As we have done in exercise 1, human-driven vehicles are defined in the Vehicles class as follows
Step4: The above addition to the Vehicles class only accounts for 21 of the 22 vehicles that are placed in the network. We now add an additional trainable autonomous vehicle whose actions are dictated by an RL agent. This is done by specifying an RLController as the acceleration controller for the vehicle.
Step5: Note that this controller serves primarily as a placeholder that marks the vehicle as a component of the RL agent, meaning that lane changing and routing actions can also be specified by the RL agent for this vehicle.
We finally add the vehicle as follows, while again using the ContinuousRouter to perpetually maintain the vehicle within the network.
Step6: 3. Setting up an Environment
Several environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. The use of an environment allows us to view the cumulative reward that simulation rollouts receive, as well as to specify the state/action spaces.
Environments in Flow are parameterized by three components
Step7: 3.2 EnvParams
EnvParams specifies environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. For the environment WaveAttenuationPOEnv, these parameters are used to dictate bounds on the accelerations of the autonomous vehicles, as well as the range of ring lengths (and accordingly network densities) the agent is trained on.
Finally, it is important to specify here the horizon of the experiment, which is the duration of one episode (during which the RL agent acquires data).
Step8: 3.3 Initializing a Gym Environment
Now, we have to specify our Gym Environment and the algorithm that our RL agents will use. To specify the environment, one has to use the environment's name (a simple string). A list of all environment names is located in flow/envs/__init__.py. The names of available environments can be seen below.
Step9: We will use the environment "WaveAttenuationPOEnv", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined variables. These are defined as follows
Step10: 3.4 Setting up Flow Parameters
RLlib and rllab experiments both generate a params.json file for each experiment run. For RLlib experiments, the parameters defining the Flow scenario and environment must be stored as well. As such, in this section we define the dictionary flow_params, which contains the variables required by the utility function make_create_env. make_create_env is a higher-order function which returns a function create_env that initializes a Gym environment corresponding to the Flow scenario specified.
Step11: 4 Running RL experiments in Ray
4.1 Import
First, we must import modules required to run experiments in Ray. The json package is required to store the Flow experiment parameters in the params.json file, as is FlowParamsEncoder. Ray-related imports are required
Step12: 4.2 Initializing Ray
Here, we initialize Ray and experiment-based constant variables specifying parallelism in the experiment as well as experiment batch size in terms of number of rollouts. redirect_output sends stdout and stderr for non-worker processes to files if True.
Step13: 4.3 Configuration and Setup
Here, we copy and modify the default configuration for the PPO algorithm. The agent has the number of parallel workers specified, a batch size corresponding to N_ROLLOUTS rollouts (each of which has length HORIZON steps), a discount rate $\gamma$ of 0.999, two hidden layers of size 16, uses Generalized Advantage Estimation, $\lambda$ of 0.97, and other parameters as set below.
Once config contains the desired parameters, a JSON string corresponding to the flow_params specified in section 3 is generated. The FlowParamsEncoder maps objects to string representations so that the experiment can be reproduced later. That string representation is stored within the env_config section of the config dictionary. Later, config is written out to the file params.json.
Next, we call make_create_env and pass in the flow_params to return a function we can use to register our Flow environment with Gym.
Step14: 4.4 Running Experiments
Here, we use the run_experiments function from ray.tune. The function takes a dictionary with one key, a name corresponding to the experiment, and one value, itself a dictionary containing parameters for training. | Python Code:
import flow.scenarios as scenarios
print(scenarios.__all__)
Explanation: Tutorial 03: Running RLlib Experiments
This tutorial walks you through the process of running traffic simulations in Flow with trainable RLlib-powered agents. Autonomous agents will learn to maximize a certain reward over the rollouts, using the RLlib library (citation) (installation instructions). Simulations of this form will depict the propensity of RL agents to influence the traffic of a human fleet in order to make the whole fleet more efficient (for some given metrics).
In this exercise, we simulate an initially perturbed single lane ring road, where we introduce a single autonomous vehicle. We witness that, after some training, that the autonomous vehicle learns to dissipate the formation and propagation of "phantom jams" which form when only human driver dynamics are involved.
1. Components of a Simulation
All simulations, both in the presence and absence of RL, require two components: a scenario, and an environment. Scenarios describe the features of the transportation network used in simulation. This includes the positions and properties of nodes and edges constituting the lanes and junctions, as well as properties of the vehicles, traffic lights, inflows, etc... in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act as the primary interface between the reinforcement learning algorithm and the scenario. Moreover, custom environments may be used to modify the dynamical features of an scenario. Finally, in the RL case, it is in the environment that the state/action spaces and the reward function are defined.
2. Setting up a Scenario
Flow contains a plethora of pre-designed scenarios used to replicate highways, intersections, and merges in both closed and open settings. All these scenarios are located in flow/scenarios. For this exercise, which involves a single lane ring road, we will use the scenario LoopScenario.
2.1 Setting up Scenario Parameters
The scenario mentioned at the start of this section, as well as all other scenarios in Flow, are parameterized by the following arguments:
* name
* vehicles
* net_params
* initial_config
These parameters are explained in detail in exercise 1. Moreover, all parameters excluding vehicles (covered in section 2.2) do not change from the previous exercise. Accordingly, we specify them nearly as we have before, and leave further explanations of the parameters to exercise 1.
One important difference between SUMO and RLlib experiments is that, in RLlib experiments, the scenario classes are not imported, but rather called via their string names which (for serialization and execution purposes) must be located within flow/scenarios/__init__.py. To check which scenarios are currently available, we execute the below command.
End of explanation
# ring road scenario class
scenario_name = "LoopScenario"
Explanation: Accordingly, to use the ring road scenario for this tutorial, we specify its (string) names as follows:
End of explanation
# input parameter classes to the scenario class
from flow.core.params import NetParams, InitialConfig
# name of the scenario
name = "training_example"
# network-specific parameters
from flow.scenarios.loop import ADDITIONAL_NET_PARAMS
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)
# initial configuration to vehicles
initial_config = InitialConfig(spacing="uniform", perturbation=1)
Explanation: Another difference between SUMO and RLlib experiments is that, in RLlib experiments, the scenario classes do not need to be defined; instead users should simply name the scenario class they wish to use. Later on, an environment setup module will import the correct scenario class based on the provided names.
End of explanation
# vehicles class
from flow.core.params import VehicleParams
# vehicles dynamics models
from flow.controllers import IDMController, ContinuousRouter
vehicles = VehicleParams()
vehicles.add("human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=21)
Explanation: 2.2 Adding Trainable Autonomous Vehicles
The Vehicles class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from various get methods within this class.
The dynamics of vehicles in the Vehicles class can either be depicted by SUMO or by the dynamical methods located in flow/controllers. For human-driven vehicles, we use the IDM model for acceleration behavior, with exogenous Gaussian acceleration noise with std 0.2 m/s² to induce perturbations that produce stop-and-go behavior. In addition, we use the ContinuousRouter routing controller so that the vehicles may maintain their routes in closed networks.
As we have done in exercise 1, human-driven vehicles are defined in the Vehicles class as follows:
End of explanation
from flow.controllers import RLController
Explanation: The above addition to the Vehicles class only accounts for 21 of the 22 vehicles that are placed in the network. We now add an additional trainable autonomous vehicle whose actions are dictated by an RL agent. This is done by specifying an RLController as the acceleration controller for the vehicle.
End of explanation
vehicles.add(veh_id="rl",
acceleration_controller=(RLController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=1)
Explanation: Note that this controller serves primarily as a placeholder that marks the vehicle as a component of the RL agent, meaning that lane changing and routing actions can also be specified by the RL agent for this vehicle.
We finally add the vehicle as follows, while again using the ContinuousRouter to perpetually maintain the vehicle within the network.
End of explanation
from flow.core.params import SumoParams
sumo_params = SumoParams(sim_step=0.1, render=False)
Explanation: 3. Setting up an Environment
Several environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. The use of an environment allows us to view the cumulative reward that simulation rollouts receive, as well as to specify the state/action spaces.
Environments in Flow are parameterized by three components:
* env_params
* sumo_params
* scenario
3.1 SumoParams
SumoParams specifies simulation-specific variables. These variables include the length of any simulation step and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1s and deactivate the GUI.
Note: for training purposes, it is highly recommended to deactivate the GUI in order to avoid a global slowdown. In that case, one just needs to specify the following: render=False
End of explanation
from flow.core.params import EnvParams
# Define horizon as a variable to ensure consistent use across notebook
HORIZON=100
env_params = EnvParams(
# length of one rollout
horizon=HORIZON,
additional_params={
# maximum acceleration of autonomous vehicles
"max_accel": 1,
# maximum deceleration of autonomous vehicles
"max_decel": 1,
# bounds on the ranges of ring road lengths the autonomous vehicle
# is trained on
"ring_length": [220, 270],
},
)
Explanation: 3.2 EnvParams
EnvParams specifies environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. For the environment WaveAttenuationPOEnv, these parameters are used to dictate bounds on the accelerations of the autonomous vehicles, as well as the range of ring lengths (and accordingly network densities) the agent is trained on.
Finally, it is important to specify here the horizon of the experiment, which is the duration of one episode (during which the RL agent acquires data).
End of explanation
import flow.envs as flowenvs
print(flowenvs.__all__)
Explanation: 3.3 Initializing a Gym Environment
Now, we have to specify our Gym Environment and the algorithm that our RL agents will use. To specify the environment, one has to use the environment's name (a simple string). A list of all environment names is located in flow/envs/__init__.py. The names of available environments can be seen below.
End of explanation
env_name = "WaveAttenuationPOEnv"
Explanation: We will use the environment "WaveAttenuationPOEnv", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined variables. These are defined as follows:
End of explanation
# Creating flow_params. Make sure the dictionary keys are as specified.
flow_params = dict(
# name of the experiment
exp_tag=name,
# name of the flow environment the experiment is running on
env_name=env_name,
# name of the scenario class the experiment uses
scenario=scenario_name,
# simulator that is used by the experiment
simulator='traci',
# sumo-related parameters (see flow.core.params.SumoParams)
sim=sumo_params,
# environment related parameters (see flow.core.params.EnvParams)
env=env_params,
# network-related parameters (see flow.core.params.NetParams and
# the scenario's documentation or ADDITIONAL_NET_PARAMS component)
net=net_params,
# vehicles to be placed in the network at the start of a rollout
# (see flow.core.vehicles.Vehicles)
veh=vehicles,
# (optional) parameters affecting the positioning of vehicles upon
# initialization/reset (see flow.core.params.InitialConfig)
initial=initial_config
)
Explanation: 3.4 Setting up Flow Parameters
RLlib and rllab experiments both generate a params.json file for each experiment run. For RLlib experiments, the parameters defining the Flow scenario and environment must be stored as well. As such, in this section we define the dictionary flow_params, which contains the variables required by the utility function make_create_env. make_create_env is a higher-order function which returns a function create_env that initializes a Gym environment corresponding to the Flow scenario specified.
End of explanation
import json
import ray
try:
from ray.rllib.agents.agent import get_agent_class
except ImportError:
from ray.rllib.agents.registry import get_agent_class
from ray.tune import run_experiments
from ray.tune.registry import register_env
from flow.utils.registry import make_create_env
from flow.utils.rllib import FlowParamsEncoder
Explanation: 4 Running RL experiments in Ray
4.1 Import
First, we must import modules required to run experiments in Ray. The json package is required to store the Flow experiment parameters in the params.json file, as is FlowParamsEncoder. Ray-related imports are required: the PPO algorithm agent, ray.tune's experiment runner, and environment helper methods register_env and make_create_env.
End of explanation
# number of parallel workers
N_CPUS = 2
# number of rollouts per training iteration
N_ROLLOUTS = 1
ray.init(redirect_output=True, num_cpus=N_CPUS)
Explanation: 4.2 Initializing Ray
Here, we initialize Ray and experiment-based constant variables specifying parallelism in the experiment as well as experiment batch size in terms of number of rollouts. redirect_output sends stdout and stderr for non-worker processes to files if True.
End of explanation
# The algorithm or model to train. This may refer to the name of a built-in
# algorithm (e.g. RLlib's DQN or PPO), or a user-defined trainable function or
# class registered in the tune registry.
alg_run = "PPO"
agent_cls = get_agent_class(alg_run)
config = agent_cls._default_config.copy()
config["num_workers"] = N_CPUS - 1 # number of parallel workers
config["train_batch_size"] = HORIZON * N_ROLLOUTS # batch size
config["gamma"] = 0.999 # discount rate
config["model"].update({"fcnet_hiddens": [16, 16]}) # size of hidden layers in network
config["use_gae"] = True # using generalized advantage estimation
config["lambda"] = 0.97
config["sgd_minibatch_size"] = min(16 * 1024, config["train_batch_size"]) # stochastic gradient descent
config["kl_target"] = 0.02 # target KL divergence
config["num_sgd_iter"] = 10 # number of SGD iterations
config["horizon"] = HORIZON # rollout horizon
# save the flow params for replay
flow_json = json.dumps(flow_params, cls=FlowParamsEncoder, sort_keys=True,
indent=4) # generating a string version of flow_params
config['env_config']['flow_params'] = flow_json # adding the flow_params to config dict
config['env_config']['run'] = alg_run
# Call the utility function make_create_env to be able to
# register the Flow env for this experiment
create_env, gym_name = make_create_env(params=flow_params, version=0)
# Register as rllib env with Gym
register_env(gym_name, create_env)
Explanation: 4.3 Configuration and Setup
Here, we copy and modify the default configuration for the PPO algorithm. The agent has the number of parallel workers specified, a batch size corresponding to N_ROLLOUTS rollouts (each of which has length HORIZON steps), a discount rate $\gamma$ of 0.999, two hidden layers of size 16, uses Generalized Advantage Estimation, $\lambda$ of 0.97, and other parameters as set below.
Once config contains the desired parameters, a JSON string corresponding to the flow_params specified in section 3 is generated. The FlowParamsEncoder maps objects to string representations so that the experiment can be reproduced later. That string representation is stored within the env_config section of the config dictionary. Later, config is written out to the file params.json.
Next, we call make_create_env and pass in the flow_params to return a function we can use to register our Flow environment with Gym.
End of explanation
trials = run_experiments({
flow_params["exp_tag"]: {
"run": alg_run,
"env": gym_name,
"config": {
**config
},
"checkpoint_freq": 1, # number of iterations between checkpoints
"checkpoint_at_end": True, # generate a checkpoint at the end
"max_failures": 999,
"stop": { # stopping conditions
"training_iteration": 1, # number of iterations to stop after
},
},
})
Explanation: 4.4 Running Experiments
Here, we use the run_experiments function from ray.tune. The function takes a dictionary with one key, a name corresponding to the experiment, and one value, itself a dictionary containing parameters for training.
End of explanation |
3,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Saving and Reloading Simulations
Here we show how to save a binary file with all the parameters for particles and the simulation's REBOUNDx effects. We begin with a one-planet system subject to general relativity corrections
Step1: We add GR, and after integrating, see that the pericenter has moved
Step2: Now we add some arbitrary parameters to the planet
Step3: To save the simulation, we have to write two binaries: a REBOUND binary for the simulation itself, and a REBOUNDx binary for the parameters and REBOUNDx effects
Step4: We can now reload the simulation and the rebx instance
Step5: We can now continue integrating as usual, and see that the pericenter continues to precess, so the GR effect has been successfully added to the loaded simulation | Python Code:
import rebound
import numpy as np
sim = rebound.Simulation()
sim.add(m=1., hash="star") # Sun
sim.add(m=1.66013e-07,a=0.387098,e=0.205630, hash="planet") # Mercury-like
sim.move_to_com() # Moves to the center of momentum frame
ps = sim.particles
print("t = {0}, pomega = {1}".format(sim.t, sim.particles[1].pomega))
Explanation: Saving and Reloading Simulations
Here we show how to save a binary file with all the parameters for particles and the simulation's REBOUNDx effects. We begin with a one-planet system subject to general relativity corrections:
End of explanation
import reboundx
rebx = reboundx.Extras(sim)
gr = rebx.load_force("gr")
rebx.add_force(gr)
from reboundx import constants
gr.params["c"] = constants.C
sim.integrate(10.)
print("t = {0}, pomega = {1}".format(sim.t, sim.particles[1].pomega))
Explanation: We add GR, and after integrating, see that the pericenter has moved:
End of explanation
rebx.register_param('a', 'REBX_TYPE_INT')
rebx.register_param('b', 'REBX_TYPE_INT')
ps[1].params["a"] = 1
ps[1].params["b"] = 2
Explanation: Now we add some arbitrary parameters to the planet:
End of explanation
sim.save("reb.bin")
rebx.save("rebx.bin")
Explanation: To save the simulation, we have to write two binaries: a REBOUND binary for the simulation itself, and a REBOUNDx binary for the parameters and REBOUNDx effects:
End of explanation
sim2 = rebound.Simulation("reb.bin")
rebx2 = reboundx.Extras(sim2, "rebx.bin")
ps2 = sim2.particles
gr2 = rebx2.get_force("gr")
print("Original: {0}, Loaded: {1}".format(ps[1].params["a"], ps2[1].params["a"]))
print("Original: {0}, Loaded: {1}".format(ps[1].params["b"], ps2[1].params["b"]))
print("Original: {0}, Loaded: {1}".format(gr.params["c"], gr2.params["c"]))
Explanation: We can now reload the simulation and the rebx instance:
End of explanation
sim2.integrate(20.)
print("t = {0}, pomega = {1}".format(sim2.t, sim2.particles[1].pomega))
Explanation: We can now continue integrating as usual, and see that the pericenter continues to precess, so the GR effect has been successfully added to the loaded simulation:
End of explanation |
3,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Function Quick Reference
Table of contents
<a href="#1.-Declaring-Functions">Declaring Functions</a>
<a href="#2.-Return-Values">Return Values</a>
<a href="#3.-Parameters">Parameters</a>
<a href="#4.-DocStrings">DocStrings</a>
<a href="#5.-Parameter-Unpacking">Parameter Unpacking</a>
<a href="#6.-Generator-Functions">Generator Functions</a>
<a href="#7.-Lambas-Anonymous-Functions">Lambdas Anonymous Functions</a>
<a href="#8.-Partial">Partial</a>
<a href="#9.-Closures-Nested-Functions">Closures (Nested Functions)</a>
1. Declaring Functions
Define a function with no arguments and no return values
Step1: Use pass as a placeholder if you haven't written the function body
Step2: 2. Return Values
Step3: Return two values from a single function
Step4: 3. Parameters
Step5: Define a function with a default value
Step6: Default values should always be const values, or you can get in trouble
Step7: Function taking an arbitrary number of arguments
Step8: Define a function that only accepts keyword arguments
Step9: Define a function that takes a callable (function) as a parameter
Step11: 4. DocStrings
Step12: Attaching additional metadata to a function definition
Step13: 5. Unpacking Parameters
Unpacking iterables into positional function arguments (star operator)
Step14: Unpacking dictionaries into named arguments (double-star operator)
Step15: 6. Generator Functions
Generator functions with yield
Step16: Generator function that uses an internal iterator
Step17: 7. Lambas Anonymous Functions
Step18: Default Parameters in Lambdas
Step19: Capturing local variables in lambdas
Step20: 8. Partial
Partial allows you to convert an n-parameter function into a function with fewer arguments
Step21: 9. Closures Nested Functions
Step22: 10. Nonlocal and global
nonlocal allows you to modify a variable outside your scope (but not global scope) | Python Code:
def print_text():
print('this is text')
# call the function
print_text()
Explanation: Python Function Quick Reference
Table of contents
<a href="#1.-Declaring-Functions">Declaring Functions</a>
<a href="#2.-Return-Values">Return Values</a>
<a href="#3.-Parameters">Parameters</a>
<a href="#4.-DocStrings">DocStrings</a>
<a href="#5.-Parameter-Unpacking">Parameter Unpacking</a>
<a href="#6.-Generator-Functions">Generator Functions</a>
<a href="#7.-Lambas-Anonymous-Functions">Lambdas Anonymous Functions</a>
<a href="#8.-Partial">Partial</a>
<a href="#9.-Closures-Nested-Functions">Closures (Nested Functions)</a>
1. Declaring Functions
Define a function with no arguments and no return values:
End of explanation
def stub():
pass
Explanation: Use pass as a placeholder if you haven't written the function body:
End of explanation
def say_hello():
return 'hello'
say_hello()
Explanation: 2. Return Values
End of explanation
def min_max(nums):
return min(nums), max(nums)
# return values can be assigned into multiple variables using tuple unpacking
nums = [3, 6, 5, 8, 2, 19, 7]
min_num, max_num = min_max(nums)
print(min_num)
print(max_num)
Explanation: Return two values from a single function:
End of explanation
def print_this(x):
print (x)
print_this(3)
Explanation: 3. Parameters
End of explanation
def calc(a, b, op='add'):
if op == 'add':
return a+b
elif op == 'sub':
return a-b
else:
print('valid operations are add and sub')
calc(10, 4)
calc(10,4, op='add')
# unnamed arguments are inferred by position
calc(10, 4, 'add')
x = 42
def spam(a, b=x):
print(a, b)
spam(1)
x = 23 # Has no effect
spam(1)
Explanation: Define a function with a default value:
End of explanation
def spam(a, b=[]): # b escapes the function as a return variable, which can be altered!
return b
x = spam(1)
x
x = spam(1)
x.append(99)
x.append('Yow!')
spam(1) # Modified list gets returned!
Explanation: Default values should always be const values, or you can get in trouble
End of explanation
# arbitrary positional arguments
def print_all(separator, *args):
    print(separator.join(args))
print_all(',', 'first','second','third')
# arbitrary positional AND keyword arguments
def anyargs(*args, **kwargs):
print(args) # A tuple
print(kwargs) # A dict
anyargs(3, 'ddddd', 5.666, foo='bar', blah='zed')
# keyword arguments have access to attribute name
import html
def make_element(name, value, **attrs):
keyvals = [' %s="%s"' % item for item in attrs.items()]
attr_str = ''.join(keyvals)
element = '<{name}{attrs}>{value}</{name}>'.format(
name=name,
attrs=attr_str,
value=html.escape(value))
return element
# Example
# Creates '<item size="large" quantity="6">Albatross</item>'
make_element('item', 'Albatross', size='large', quantity=6)
Explanation: Function taking an arbitrary number of arguments
End of explanation
def recv(maxsize, *, block):
'Receives a message'
pass
recv(1024, block=True) # Ok
# the following will fail if uncommented
#recv(1024, True) # TypeError
Explanation: Define a function that only accepts keyword arguments
End of explanation
def dedupe(items):
seen = set()
for item in items:
if item not in seen:
yield item
seen.add(item)
a = [1, 5, 2, 1, 9, 1, 5, 10]
list(dedupe(a))
Explanation: Define a function that takes a callable (function) as a parameter:
End of explanation
def calc(a, b, op='add'):
    """calculates the result of a simple math operation.
    :param a: the first parameter in the math operation
    :param b: the second parameter in the math operation
    :param op: which type of math operation (valid values are 'add', 'sub')
    :returns: the result of applying the math operation to the two parameters
    :raises keyError: raises an exception
    """
if op == 'add':
return a+b
elif op == 'sub':
return a-b
else:
print('valid operations are add and sub')
help(calc)
Explanation: 4. DocStrings
End of explanation
# the compiler does not check any of this, it is just documentation!
def add(x:int, y:int) -> int:
return x + y
add('hello', 'world')
help(add)
Explanation: Attaching additional metadata to a function definition
End of explanation
# range takes start and stop parameters
list(range(3, 6)) # normal call with separate arguments
[3, 4, 5]
# can also pass start and stop arguments using argument unpacking
args = [3,6] #can be used on tuple too
list(range(*args))
Explanation: 5. Unpacking Parameters
Unpacking iterables into positional function arguments (star operator)
End of explanation
# a dictionary can be unpacked into names arguments with the ** / double star operator
from collections import namedtuple
Stock = namedtuple('Stock', ['name', 'shares', 'price', 'date', 'time'])
# Create a prototype instance
stock_prototype = Stock('', 0, 0.0, None, None)
# Function to convert a dictionary to a Stock
def dict_to_stock(s):
return stock_prototype._replace(**s)
a = {'name': 'ACME', 'shares': 100, 'price': 123.45}
dict_to_stock(a)
Explanation: Unpacking dictionaries into named arguments (double-star operator)
End of explanation
def myrange(n):
for i in range(n):
yield i
max(myrange(5))
Explanation: 6. Generator Functions
Generator functions with yield
End of explanation
#using yield from
def myrange(n):
yield from range(n)
print(max(myrange(5)))
Explanation: Generator function that uses an internal iterator
End of explanation
squared = lambda x: x**2
squared(3)
simpsons = ['bart', 'maggie', 'homer', 'lisa', 'marge']
sorted(simpsons, key = lambda word: word[-1])
# no parameter lambda
say_hello = lambda : 'hello'
say_hello()
Explanation: 7. Lambdas Anonymous Functions
End of explanation
talkback = lambda message='hello' : message
talkback()
talkback('hello world')
Explanation: Default Parameters in Lambdas
End of explanation
test = 'hello world'
talkback = lambda : test
talkback()
# parameters are resolved when the code runs, not when lambda is declared
test = 'what???'
talkback()
# to prevent this, use a default parameter set to the local variable
test = 'hello world'
talkback = lambda message = test: message
test='nope'
talkback()
Explanation: Capturing local variables in lambdas
End of explanation
def spam(a, b, c, d):
print(a, b, c, d)
from functools import partial
s1 = partial(spam, 1) # a = 1
s1(2, 3, 4)
s1(4, 5, 6)
s2 = partial(spam, d=42) # d = 42
s2(1, 2, 3)
s2(4, 5, 5)
s3 = partial(spam, 1, 2, d=42) # a = 1, b = 2, d = 42
s3(3)
s3(4)
s3(5)
Explanation: 8. Partial
Partial allows you to convert an n-parameter function into a function with fewer arguments
End of explanation
# this inner closure is used to carry state around
def name_func_from_family(last_name):
def print_name(first_name):
print('{} {}'.format(first_name, last_name))
return print_name #the key here is that the outer function RETURNS the inner function / closure
print_saltwell = name_func_from_family('saltwell')
print_saltwell('erik')
print_saltwell('kasia')
print_saltwell('jacob')
Explanation: 9. Closures Nested Functions
End of explanation
def outside():
msg = "Outside!"
def inside():
msg = "Inside!"
print(msg)
inside()
    print(msg) # this prints 'Outside!' even though inside() modifies a variable called msg (it's a local copy)
outside()
# to have a variable refer to something outside local scope use nonlocal
def outside():
msg = "Outside!"
def inside():
nonlocal msg
msg = "Inside!"
print(msg)
inside()
print(msg)
outside()
# the global keyword makes a variable reference a global variable rather than a copy
msg = 'Global!!'
def outside():
msg = "Outside!"
def inside():
global msg
msg = "Inside!"
print(msg)
inside()
    print(msg) # this prints 'Outside!' because inside() modified the global msg, not outside()'s local msg
outside()
msg
Explanation: 10. Nonlocal and global
nonlocal allows you to modify a variable outside your scope (but not global scope)
End of explanation |
3,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 2
Imports
Step1: Indefinite integrals
Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps
Step2: Integral 1
YOUR ANSWER HERE
Step3: Integral 2
YOUR ANSWER HERE
Step4: Integral 3
YOUR ANSWER HERE
Step5: Integral 4
YOUR ANSWER HERE
Step6: Integral 5
YOUR ANSWER HERE | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
Explanation: Integration Exercise 2
Imports
End of explanation
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Indefinite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LaTeX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
# YOUR CODE HERE
def integrand(x, a, b):
return np.exp(-a*x)*np.cos(b*x)
def integral_approx(a, b):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,b,))
return I
def integral_exact(a, b):
return a/(a**2 + b**2)
print("Numerical: ", integral_approx(1.0, 3.0))
print("Exact : ", integral_exact(1.0,3.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 1
YOUR ANSWER HERE:
$$ \LARGE \int_0^\infty e^{-ax} cos(bx) dx = \frac{a}{a^2+b^2} $$
End of explanation
# YOUR CODE HERE
def integrand(x, a):
v = np.sqrt(a**2 - x**2)
return v**-1
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, a, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi
print("Numerical: ", integral_approx(3.0))
print("Exact : ", integral_exact(3.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 2
YOUR ANSWER HERE:
$$ \LARGE{
\int_0^a \frac{dx}{\sqrt{a^2 - x^2}} = \frac{\pi}{2}
}$$
End of explanation
# YOUR CODE HERE
def integrand(x, a):
return np.sqrt(a**2 - x**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, a, args=(a,))
return I
def integral_exact(a):
return 0.25*np.pi*(a**2)
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 3
YOUR ANSWER HERE:
$$ \LARGE{
\int_0^a \sqrt{a^2 - x^2}dx = \frac{\pi a^2}{4}
}$$
End of explanation
# YOUR CODE HERE
def integrand(x, a, b):
return np.exp(-a*x)*np.sin(b*x)
def integral_approx(a, b):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,b,))
return I
def integral_exact(a, b):
return b/(a**2 + b**2)
print("Numerical: ", integral_approx(1.0, 3.0))
print("Exact : ", integral_exact(1.0,3.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 4
YOUR ANSWER HERE:
$$ \LARGE \int_0^\infty e^{-ax} sin(bx) dx = \frac{b}{a^2+b^2} $$
End of explanation
# YOUR CODE HERE
def integrand(x, a):
return np.exp(-a*(x**2))
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.sqrt((np.pi/a))
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 5
YOUR ANSWER HERE:
$$ \LARGE \int_0^\infty e^{-ax^2} dx = \frac{1}{2} \sqrt{\frac{\pi}{a}} $$
End of explanation |
3,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is a revised version of a notebook from Amy Wu and Shen Zhimo
E2E ML on GCP
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Get your project number
Now that the project ID is set, you get your corresponding project number.
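A minimal sketch of how this cell usually looks; it assumes PROJECT_ID was set in the previous step and uses the standard gcloud invocation for reading the project number.
shell_output = ! gcloud projects describe $PROJECT_ID --format="value(projectNumber)"
PROJECT_NUMBER = shell_output[0]
print("Project Number:", PROJECT_NUMBER)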
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
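A sketch of the usual timestamp cell (the exact format string is an assumption):
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
print(TIMESTAMP)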
Step6: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
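For the Colab case mentioned above, a minimal sketch of the authentication cell; detecting Colab via sys.modules is an assumption about how the notebook is written.
import sys
if "google.colab" in sys.modules:
    from google.colab import auth as google_auth
    google_auth.authenticate_user()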
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Before you submit a training job for the two-tower model, you need to upload your training data and schema to Cloud Storage. Vertex AI trains the model using this input data. In this tutorial, the Two-Tower built-in algorithm also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
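A sketch of the bucket cells behind Steps 7-9, assuming REGION and TIMESTAMP are set as described earlier; the bucket name itself is a placeholder that must be globally unique.
BUCKET_NAME = "your-bucket-name-" + TIMESTAMP  # placeholder
BUCKET_URI = f"gs://{BUCKET_NAME}"
! gsutil mb -l $REGION $BUCKET_URI
! gsutil ls -al $BUCKET_URI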
Step10: Import libraries and define constants
Step11: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
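A minimal sketch of the initialization cell, assuming PROJECT_ID, REGION and BUCKET_URI were defined in the earlier steps:
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)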
Step12: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
machine type
n1-standard
Step13: Introduction to Two-Tower algorithm
Two-tower models learn to represent two items of various types (such as user profiles, search queries, web documents, answer passages, or images) in the same vector space, so that similar or related items are close to each other. These two items are referred to as the query and candidate object, since when paired with a nearest neighbor search service such as Vertex Matching Engine, the two-tower model can retrieve candidate objects related to an input query object. These objects are encoded by a query and candidate encoder (the two "towers") respectively, which are trained on pairs of relevant items. This built-in algorithm exports trained query and candidate encoders as model artifacts, which can be deployed in Vertex Prediction for usage in a recommendation system.
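To make the idea concrete, here is an illustrative two-tower encoder pair in Keras; this is only a conceptual sketch, not the built-in algorithm's code, and the layer sizes and toy inputs are assumptions.
import tensorflow as tf
EMBED_DIM = 64  # illustrative embedding size
def make_tower(name):
    # simple feed-forward encoder mapping raw features into the shared embedding space
    return tf.keras.Sequential(
        [tf.keras.layers.Dense(128, activation="relu"), tf.keras.layers.Dense(EMBED_DIM)],
        name=name,
    )
query_tower = make_tower("query_encoder")
candidate_tower = make_tower("candidate_encoder")
query_feats = tf.random.normal([8, 32])      # toy batch of query features
candidate_feats = tf.random.normal([8, 32])  # toy batch of candidate features
# dot-product similarity between the two embeddings; training pulls related pairs together
scores = tf.reduce_sum(query_tower(query_feats) * candidate_tower(candidate_feats), axis=-1)
print(scores.shape)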
Configure training parameters for the Two-Tower builtin algorithm
The following table shows parameters that are common to all Vertex AI Training jobs created using the gcloud ai custom-jobs create command. See the official documentation for all the possible arguments.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| display-name | string | Name of the job. | Yes |
| worker-pool-spec | string | Comma-separated list of arguments specifying a worker pool configuration (see below). | Yes |
| region | string | Region to submit the job to. | No |
The worker-pool-spec flag can be specified multiple times, one for each worker pool. The following table shows the arguments used to specify a worker pool.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| machine-type | string | Machine type for the pool. See the official documentation for supported machines. | Yes |
| replica-count | int | The number of replicas of the machine in the pool. | No |
| container-image-uri | string | Docker image to run on each worker. | No |
The following table shows the parameters for the two-tower model training job
Step14: Train on Vertex AI Training with CPU
Submit the Two-Tower training job to Vertex AI Training. The following command uses a single CPU machine for training. When using single node training, training_steps_per_epoch and eval_steps_per_epoch do not need to be set.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training job.
- machine_type
Step15: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex AI what type and size of disk to provision in each machine instance for the job.
boot_disk_type
Step16: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following
Step17: Create a custom job
Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters
Step18: Execute the custom job
Next, execute your custom job using the method run().
Step19: View output
After the job finishes successfully, you can view the output directory.
Step20: Train on Vertex AI Training with GPU
Next, train the Two Tower model using a GPU.
Step21: Create and execute the custom job
Next, create and execute the custom job.
Step22: View output
After the job finishes successfully, you can view the output directory.
Step23: Train on Vertex AI Training with TFRecords
Next, train the Two Tower model using TFRecords
Step24: Create and execute the custom job
Next, create and execute the custom job.
Step25: View output
After the job finishes successfully, you can view the output directory.
Step26: Tensorboard
When the training starts, you can view the logs in TensorBoard. Colab users can use the TensorBoard widget below
Step27: Hyperparameter tuning
You may want to optimize the hyperparameters used during training to improve your model's accuracy and performance.
For this example, the following command runs a Vertex AI hyperparameter tuning job with 8 trials that attempts to maximize the validation AUC metric. The hyperparameters it optimizes are the number of hidden layers, the size of the hidden layers, and the learning rate.
Learn more about Hyperparameter tuning overview.
Step28: Run the hyperparameter tuning job
Use the run() method to execute the hyperparameter tuning job.
Step29: Display the hyperparameter tuning job trial results
After the hyperparameter tuning job has completed, the property trials will return the results for each trial.
Step30: Best trial
Now look at which trial was the best
Step31: Delete the hyperparameter tuning job
The method 'delete()' will delete the hyperparameter tuning job.
Step32: View output
After the job finishes successfully, you can view the output directory.
Step33: Upload the model to Vertex AI Model resource
Your training job will export two TF SavedModels under gs
Step34: Deploy the model to Vertex AI Endpoint
Deploying the Vertex AI Model resource to a Vertex AI Endpoint for online predictions
Step35: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary.
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has the deployment container image defined for it. To deploy, you specify the following additional configuration settings
Step36: Creating embeddings
Now that you have deployed the query/candidate encoder model on Vertex AI Prediction, you can call the model to generate embeddings for new data.
Make an online prediction with SDK
Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following cell calls the deployed model using the Vertex AI SDK for Python.
The input data for which you want predicted embeddings should be provided as a stringified JSON in the data field. Note that you should also provide a unique key field (of type str) for each input instance so that you can associate each output embedding with its corresponding input.
Step37: Make an online prediction with gcloud
You can also do online prediction using the gcloud CLI.
Step38: Make a batch prediction
Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine.
Create the batch input file
Next, you generate the batch input file used to produce embeddings for the dataset, which you subsequently use to create an index with Vertex AI Matching Engine. In this example, the dataset contains 1,000 unique identifiers (0...999). You will use the trained encoder to generate a predicted embedding for each unique identifier.
The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. Like with online prediction, it's recommended to have the key field so that you can associate each output embedding with its corresponding input.
Step39: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters
Step40: Get the predicted embeddings
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format
Step41: Save the embeddings in JSONL format
Next, you store the predicted embeddings as a JSONL formatted file. Each embedding is stored as
Step42: Store the JSONL formatted embeddings in Cloud Storage
Next, you upload the training data to your Cloud Storage bucket.
Step43: Create Matching Engine Index
Next, you create the index for your embeddings. Currently, two indexing algorithms are supported
Step44: Setup VPC peering network
To use a Matching Engine Index, you set up a VPC peering network between your project and the Vertex AI Matching Engine service project. This eliminates additional hops in network traffic and allows using the efficient gRPC protocol.
Learn more about VPC peering.
IMPORTANT
Step45: Create the VPC connection
Next, create the connection for VPC peering.
Note
Step46: Check the status of your peering connections.
Step47: Construct the full network name
You need to have the full network resource name when you subsequently create an Matching Engine Index Endpoint resource for VPC peering.
Step48: Create an IndexEndpoint with VPC Network
Next, you create a Matching Engine Index Endpoint, similar to the concept of creating a Private Endpoint for prediction with a peer-to-peer network.
To create the Index Endpoint resource, you call the method create() with the following parameters
Step49: Deploy the Matching Engine Index to the Index Endpoint resource
Next, deploy your index to the Index Endpoint using the method deploy_index() with the following parameters
Step50: Create and execute an online query
Now that your index is deployed, you can make queries.
First, you construct example query vectors by reusing two of the embeddings generated earlier, to use as the queries to return matches for.
Next, you make the matching request using the method match(), with the following parameters
Step51: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade tensorflow -q
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile -q
! gcloud components update --quiet
Explanation: This notebook is a revised version of a notebook by Amy Wu and Shen Zhimo
E2E ML on GCP: MLOps stage 6 : serving: get started with Vertex AI Matching Engine and Two Towers builtin algorithm
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/main/notebooks/ocommunity/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Run in Vertex Workbench
</a>
</td>
</table>
Overview
This tutorial demonstrates how to use the Vertex AI Two-Tower built-in algorithm with Vertex AI Matching Engine.
Dataset
This tutorial uses the movielens_100k sample dataset in the public bucket gs://cloud-samples-data/vertex-ai/matching-engine/two-tower, which was generated from the MovieLens movie rating dataset. For this tutorial, the data only includes the user id feature for users, and the movie id and movie title features for movies. In this example, the user is the query object and the movie is the candidate object, and each training example in the dataset contains a user and a movie they rated (we only include positive ratings in the dataset). The two-tower model will embed the user and the movie in the same embedding space, so that given a user, the model will recommend movies it thinks the user will like.
Objective
In this notebook, you will learn how to use the Two-Tower builtin algorithms for generating embeddings for a dataset, for use with generating an Matching Engine Index, with the Vertex AI Matching Engine service.
This tutorial uses the following Google Cloud ML services:
Vertex AI Two-Towers builtin algorithm
Vertex AI Matching Engine
Vertex AI Batch Prediction
The tutorial covers the following steps:
Train the Two-Tower algorithm to generate embeddings (encoder) for the dataset.
Hyperparameter tune the trained Two-Tower encoder.
Make example predictions (embeddings) from the trained encoder.
Generate embeddings using the trained Two-Tower builtin algorithm.
Store embeddings to format supported by Matching Engine.
Create a Matching Engine Index for the embeddings.
Deploy the Matching Engine Index to a Index Endpoint.
Make a matching engine prediction request.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the packages required for executing this notebook.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you do not know your project ID, you may be able to get your project ID using gcloud.
End of explanation
shell_output = ! gcloud projects list --filter="PROJECT_ID:'{PROJECT_ID}'" --format='value(PROJECT_NUMBER)'
PROJECT_NUMBER = shell_output[0]
print("Project Number:", PROJECT_NUMBER)
Explanation: Get your project number
Now that the project ID is set, you get your corresponding project number.
End of explanation
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
BUCKET_URI = "gs://" + BUCKET_NAME
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Before you submit a training job for the two-tower model, you need to upload your training data and schema to Cloud Storage. Vertex AI trains the model using this input data. In this tutorial, the Two-Tower built-in algorithm also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import os
from google.cloud import aiplatform
%load_ext tensorboard
Explanation: Import libraries and define constants
End of explanation
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
DATASET_NAME = "movielens_100k" # Change to your dataset name.
# Change to your data and schema paths. These are paths to the movielens_100k
# sample data.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/training_data/*"
INPUT_SCHEMA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/input_schema.json"
# URI of the two-tower training Docker image.
LEARNER_IMAGE_URI = "us-docker.pkg.dev/vertex-ai-restricted/builtin-algorithm/two-tower"
# Change to your output location.
OUTPUT_DIR = f"{BUCKET_URI}/experiment/output"
TRAIN_BATCH_SIZE = 100 # Batch size for training.
NUM_EPOCHS = 3 # Number of epochs for training.
print(f"Dataset name: {DATASET_NAME}")
print(f"Training data path: {TRAINING_DATA_PATH}")
print(f"Input schema path: {INPUT_SCHEMA_PATH}")
print(f"Output directory: {OUTPUT_DIR}")
print(f"Train batch size: {TRAIN_BATCH_SIZE}")
print(f"Number of epochs: {NUM_EPOCHS}")
Explanation: Introduction to Two-Tower algorithm
Two-tower models learn to represent two items of various types (such as user profiles, search queries, web documents, answer passages, or images) in the same vector space, so that similar or related items are close to each other. These two items are referred to as the query and candidate object, since when paired with a nearest neighbor search service such as Vertex Matching Engine, the two-tower model can retrieve candidate objects related to an input query object. These objects are encoded by a query and candidate encoder (the two "towers") respectively, which are trained on pairs of relevant items. This built-in algorithm exports trained query and candidate encoders as model artifacts, which can be deployed in Vertex Prediction for usage in a recommendation system.
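To make the two "towers" concrete, here is a minimal, purely illustrative Keras sketch; it is not the built-in algorithm's actual architecture, and the vocabulary sizes are made-up placeholders. A user (query) tower and a movie (candidate) tower each produce an embedding in the same space, and the affinity of a pair is the dot product of the two embeddings.
import tensorflow as tf
NUM_USERS, NUM_MOVIES, EMBED_DIM = 1000, 1700, 64  # placeholder sizes
# Query tower: encode a user id into an embedding.
user_id = tf.keras.Input(shape=(), dtype=tf.int32, name="user_id")
query_embedding = tf.keras.layers.Embedding(NUM_USERS, EMBED_DIM)(user_id)
# Candidate tower: encode a movie id into an embedding in the same space.
movie_id = tf.keras.Input(shape=(), dtype=tf.int32, name="movie_id")
candidate_embedding = tf.keras.layers.Embedding(NUM_MOVIES, EMBED_DIM)(movie_id)
# Affinity score: dot product between the two embeddings.
score = tf.keras.layers.Dot(axes=1)([query_embedding, candidate_embedding])
toy_two_tower = tf.keras.Model(inputs=[user_id, movie_id], outputs=score)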
Configure training parameters for the Two-Tower builtin algorithm
The following table shows parameters that are common to all Vertex AI Training jobs created using the gcloud ai custom-jobs create command. See the official documentation for all the possible arguments.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| display-name | string | Name of the job. | Yes |
| worker-pool-spec | string | Comma-separated list of arguments specifying a worker pool configuration (see below). | Yes |
| region | string | Region to submit the job to. | No |
The worker-pool-spec flag can be specified multiple times, one for each worker pool. The following table shows the arguments used to specify a worker pool.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| machine-type | string | Machine type for the pool. See the official documentation for supported machines. | Yes |
| replica-count | int | The number of replicas of the machine in the pool. | No |
| container-image-uri | string | Docker image to run on each worker. | No |
The following table shows the parameters for the two-tower model training job:
| Parameter | Data type | Description | Required |
|--|--|--|--|
| training_data_path | string | Cloud Storage pattern where training data is stored. | Yes |
| input_schema_path | string | Cloud Storage path where the JSON input schema is stored. | Yes |
| input_file_format | string | The file format of input. Currently supports jsonl and tfrecord. | No - default is jsonl. |
| job_dir | string | Cloud Storage directory where the model output files will be stored. | Yes |
| eval_data_path | string | Cloud Storage pattern where eval data is stored. | No |
| candidate_data_path | string | Cloud Storage pattern where candidate data is stored. Only used for top_k_categorical_accuracy metrics. If not set, it's generated from training/eval data. | No |
| train_batch_size | int | Batch size for training. | No - Default is 100. |
| eval_batch_size | int | Batch size for evaluation. | No - Default is 100. |
| eval_split | float | Split fraction to use for the evaluation dataset, if eval_data_path is not provided. | No - Default is 0.2 |
| optimizer | string | Training optimizer. Lowercase string name of any TF2.3 Keras optimizer is supported ('sgd', 'nadam', 'ftrl', etc.). See TensorFlow documentation. | No - Default is 'adagrad'. |
| learning_rate | float | Learning rate for training. | No - Default is the default learning rate of the specified optimizer. |
| momentum | float | Momentum for optimizer, if specified. | No - Default is the default momentum value for the specified optimizer. |
| metrics | string | Metrics used to evaluate the model. Can be either auc, top_k_categorical_accuracy or precision_at_1. | No - Default is auc. |
| num_epochs | int | Number of epochs for training. | No - Default is 10. |
| num_hidden_layers | int | Number of hidden layers. | No |
| num_nodes_hidden_layer{index} | int | Num of nodes in hidden layer {index}. The range of index is 1 to 20. | No |
| output_dim | int | The output embedding dimension for each encoder tower of the two-tower model. | No - Default is 64. |
| training_steps_per_epoch | int | Number of steps per epoch to run the training for. Only needed if you are using more than 1 machine or using a master machine with more than 1 gpu. | No - Default is None. |
| eval_steps_per_epoch | int | Number of steps per epoch to run the evaluation for. Only needed if you are using more than 1 machine or using a master machine with more than 1 gpu. | No - Default is None. |
| gpu_memory_alloc | int | Amount of memory allocated per GPU (in MB). | No - Default is no limit. |
End of explanation
TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
Explanation: Train on Vertex AI Training with CPU
Submit the Two-Tower training job to Vertex AI Training. The following command uses a single CPU machine for training. When using single node training, training_steps_per_epoch and eval_steps_per_epoch do not need to be set.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision for the training job.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex AI what type and size of disk to provision in each machine instance for the job.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
JOB_NAME = "twotowers_cpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
f"--train_batch_size={TRAIN_BATCH_SIZE}",
f"--num_epochs={NUM_EPOCHS}",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec : (optional) The disk storage specification.
container_spec: The training container containing the training package.
Let's dive deeper now into the container specification:
image_uri: The training image.
command: The command to invoke in the training image. Defaults to the command entry point specified for the training image.
args: The command line arguments to pass to the corresponding command entry point in training image.
End of explanation
job = aiplatform.CustomJob(
display_name="twotower_cpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
Explanation: Create a custom job
Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters:
display_name: A human readable name for the custom job.
worker_pool_specs: The specification for the corresponding VM instances.
End of explanation
job.run()
Explanation: Execute the custom job
Next, execute your custom job using the method run().
End of explanation
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}/*
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
JOB_NAME = "twotowers_gpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
TRAIN_COMPUTE = "n1-highmem-4"
TRAIN_GPU = "NVIDIA_TESLA_K80"
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": 1,
}
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
"--training_steps_per_epoch=1500",
"--eval_steps_per_epoch=1500",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
Explanation: Train on Vertex AI Training with GPU
Next, train the Two Tower model using a GPU.
End of explanation
job = aiplatform.CustomJob(
display_name="twotower_gpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
Explanation: Create and execute the custom job
Next, create and execute the custom job.
End of explanation
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}/*
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/tfrecord/*"
JOB_NAME = "twotowers_tfrec_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)
TRAIN_COMPUTE = "n1-standard-8"
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
CMDARGS = [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
f"--train_batch_size={TRAIN_BATCH_SIZE}",
f"--num_epochs={NUM_EPOCHS}",
"--input_file_format=tfrecord",
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"container_spec": {
"image_uri": LEARNER_IMAGE_URI,
"command": [],
"args": CMDARGS,
},
}
]
Explanation: Train on Vertex AI Training with TFRecords
Next, train the Two Tower model using TFRecords
End of explanation
job = aiplatform.CustomJob(
display_name="twotower_tfrec_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
job.run()
Explanation: Create and execute the custom job
Next, create and execute the custom job.
End of explanation
! gsutil ls {OUTPUT_DIR}
! gsutil rm -rf {OUTPUT_DIR}
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
try:
TENSORBOARD_DIR = os.path.join(OUTPUT_DIR, "tensorboard")
%tensorboard --logdir {TENSORBOARD_DIR}
except Exception as e:
print(e)
Explanation: Tensorboard
When the training starts, you can view the logs in TensorBoard. Colab users can use the TensorBoard widget below:
For Workbench AI Notebooks users, the TensorBoard widget above won't work. We recommend you to launch TensorBoard through the Cloud Shell.
In your Cloud Shell, launch Tensorboard on port 8080:
export TENSORBOARD_DIR=gs://xxxxx/tensorboard
tensorboard --logdir=${TENSORBOARD_DIR} --port=8080
Click the "Web Preview" button at the top-right of the Cloud Shell window (looks like an eye in a rectangle).
Select "Preview on port 8080". This should launch the TensorBoard webpage in a new tab in your browser.
After the job finishes successfully, you can view the output directory:
End of explanation
from google.cloud.aiplatform import hyperparameter_tuning as hpt
hpt_job = aiplatform.HyperparameterTuningJob(
display_name="twotowers_" + TIMESTAMP,
custom_job=job,
metric_spec={
"val_auc": "maximize",
},
parameter_spec={
"learning_rate": hpt.DoubleParameterSpec(min=0.0001, max=0.1, scale="log"),
"num_hidden_layers": hpt.IntegerParameterSpec(min=0, max=2, scale="linear"),
"num_nodes_hidden_layer1": hpt.IntegerParameterSpec(
min=1, max=128, scale="log"
),
"num_nodes_hidden_layer2": hpt.IntegerParameterSpec(
min=1, max=128, scale="log"
),
},
search_algorithm=None,
max_trial_count=8,
parallel_trial_count=1,
)
Explanation: Hyperparameter tuning
You may want to optimize the hyperparameters used during training to improve your model's accuracy and performance.
For this example, the following command runs a Vertex AI hyperparameter tuning job with 8 trials that attempts to maximize the validation AUC metric. The hyperparameters it optimizes are the number of hidden layers, the size of the hidden layers, and the learning rate.
Learn more about Hyperparameter tuning overview.
End of explanation
hpt_job.run()
Explanation: Run the hyperparameter tuning job
Use the run() method to execute the hyperparameter tuning job.
End of explanation
print(hpt_job.trials)
Explanation: Display the hyperparameter tuning job trial results
After the hyperparameter tuning job has completed, the property trials will return the results for each trial.
End of explanation
best = (None, None, None, 0.0)
for trial in hpt_job.trials:
# Keep track of the best outcome
if float(trial.final_measurement.metrics[0].value) > best[3]:
try:
best = (
trial.id,
float(trial.parameters[0].value),
float(trial.parameters[1].value),
float(trial.final_measurement.metrics[0].value),
)
except:
best = (
trial.id,
float(trial.parameters[0].value),
None,
float(trial.final_measurement.metrics[0].value),
)
print(best)
Explanation: Best trial
Now look at which trial was the best:
End of explanation
hpt_job.delete()
Explanation: Delete the hyperparameter tuning job
The method 'delete()' will delete the hyperparameter tuning job.
End of explanation
BEST_MODEL = OUTPUT_DIR + "/trial_" + best[0]
! gsutil ls {BEST_MODEL}
Explanation: View output
After the job finishes successfully, you can view the output directory.
End of explanation
# The following imports the query (user) encoder model.
MODEL_TYPE = "query"
# Use the following instead to import the candidate (movie) encoder model.
# MODEL_TYPE = 'candidate'
DISPLAY_NAME = f"{DATASET_NAME}_{MODEL_TYPE}" # The display name of the model.
MODEL_NAME = f"{MODEL_TYPE}_model" # Used by the deployment container.
model = aiplatform.Model.upload(
display_name=DISPLAY_NAME,
artifact_uri=BEST_MODEL,
serving_container_image_uri="us-central1-docker.pkg.dev/cloud-ml-algos/two-tower/deploy",
serving_container_health_route=f"/v1/models/{MODEL_NAME}",
serving_container_predict_route=f"/v1/models/{MODEL_NAME}:predict",
serving_container_environment_variables={
"MODEL_BASE_PATH": "$(AIP_STORAGE_URI)",
"MODEL_NAME": MODEL_NAME,
},
)
Explanation: Upload the model to Vertex AI Model resource
Your training job will export two TF SavedModels under gs://<job_dir>/query_model and gs://<job_dir>/candidate_model. These exported models can be used for online or batch prediction in Vertex Prediction.
First, import the query (or candidate) model using the upload() method, with the following parameters:
display_name: A human readable name for the model resource.
artifact_uri: The Cloud Storage location of the model artifacts.
serving_container_image_uri: The deployment container. In this tutorial, you use the prebuilt Two-Tower deployment container.
serving_container_health_route: The URL for the service to periodically ping for a response to verify that the serving binary is running. For Two-Towers, this will be /v1/models/[model_name].
serving_container_predict_route: The URL path to which online prediction requests are sent. For Two-Towers, this will be /v1/models/[model_name]:predict.
serving_container_environment_variables: Preset environment variables to pass into the deployment container.
Note: The underlying deployment container is built on TensorFlow Serving.
End of explanation
endpoint = aiplatform.Endpoint.create(display_name=DATASET_NAME)
Explanation: Deploy the model to Vertex AI Endpoint
Deploying the Vertex AI Model resource to a Vertex AI Endpoint for online predictions:
Create an Endpoint resource exposing an external interface to users consuming the model.
After the Endpoint is ready, deploy one or more instances of a model to the Endpoint. The deployed model runs the custom container image running Two-Tower encoder to serve embeddings.
Refer to Vertex AI Predictions guide to Deploy a model using the Vertex AI API for more information about the APIs used in the following cells.
Create a Vertex AI Endpoint
Next, you create the Vertex AI Endpoint, from which you subsequently deploy your Vertex AI Model resource to.
End of explanation
response = endpoint.deploy(
model=model,
deployed_model_display_name=DISPLAY_NAME,
machine_type=DEPLOY_COMPUTE,
traffic_split={"0": 100},
)
print(endpoint)
Explanation: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary.
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has the deployment container image defined for it. To deploy, you specify the following additional configuration settings:
The machine type.
The type and number of GPUs (if any).
Static, manual or auto-scaling of VM instances.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource.
deployed_model_display_name: The human readable name for the deployed model instance.
machine_type: The machine type for each VM instance.
Due to the requirements to provision the resource, this may take up to a few minutes.
End of explanation
# Input items for the query model:
input_items = [
{"data": '{"user_id": ["1"]}', "key": "key1"},
{"data": '{"user_id": ["2"]}', "key": "key2"},
]
# Input items for the candidate model:
# input_items = [{
# 'data' : '{"movie_id": ["1"], "movie_title": ["fake title"]}',
# 'key': 'key1'
# }]
encodings = endpoint.predict(input_items)
print(f"Number of encodings: {len(encodings.predictions)}")
print(encodings.predictions[0]["encoding"])
Explanation: Creating embeddings
Now that you have deployed the query/candidate encoder model on Vertex AI Prediction, you can call the model to generate embeddings for new data.
Make an online prediction with SDK
Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following cell calls the deployed model using the Vertex AI SDK for Python.
The input data for which you want predicted embeddings should be provided as a stringified JSON in the data field. Note that you should also provide a unique key field (of type str) for each input instance so that you can associate each output embedding with its corresponding input.
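If you call the encoder often, you could wrap this pattern in a small helper. The sketch below is illustrative only: it assumes the movielens_100k query schema used in this tutorial (a single user_id feature) and that each returned prediction echoes the key alongside the embedding, as the batch output does.
import json
def embed_users(endpoint, user_ids):
    # One instance per user id; the key field lets you match outputs to inputs.
    instances = [
        {"data": json.dumps({"user_id": [str(uid)]}), "key": f"key{uid}"}
        for uid in user_ids
    ]
    response = endpoint.predict(instances)
    # Each prediction carries the echoed key plus the embedding ("encoding").
    return {pred["key"]: pred["encoding"] for pred in response.predictions}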
End of explanation
import json
request = json.dumps({"instances": input_items})
with open("request.json", "w") as writer:
writer.write(f"{request}\n")
ENDPOINT_ID = endpoint.resource_name
! gcloud ai endpoints predict {ENDPOINT_ID} \
--region={REGION} \
--json-request=request.json
Explanation: Make an online prediction with gcloud
You can also do online prediction using the gcloud CLI.
End of explanation
QUERY_EMBEDDING_PATH = f"{BUCKET_URI}/embeddings/train.jsonl"
import tensorflow as tf
with tf.io.gfile.GFile(QUERY_EMBEDDING_PATH, "w") as f:
for i in range(0, 1000):
query = {"data": '{"user_id": ["' + str(i) + '"]}', "key": f"key{i}"}
f.write(json.dumps(query) + "\n")
print("\nNumber of embeddings: ")
! gsutil cat {QUERY_EMBEDDING_PATH} | wc -l
Explanation: Make a batch prediction
Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine.
Create the batch input file
Next, you generate the batch input file used to produce embeddings for the dataset, which you subsequently use to create an index with Vertex AI Matching Engine. In this example, the dataset contains 1,000 unique identifiers (0...999). You will use the trained encoder to generate a predicted embedding for each unique identifier.
The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. Like with online prediction, it's recommended to have the key field so that you can associate each output embedding with its corresponding input.
End of explanation
MIN_NODES = 1
MAX_NODES = 4
batch_predict_job = model.batch_predict(
job_display_name=f"batch_predict_{DISPLAY_NAME}",
gcs_source=[QUERY_EMBEDDING_PATH],
gcs_destination_prefix=f"{BUCKET_URI}/embeddings/output",
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Send the prediction request
To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- prediction_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for serving the batch prediction job.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to. In this tutorial, autoscaling up to four instances is allowed.
Compute instance scaling
You can specify a single instance (or node) to process your batch prediction request. This tutorial starts from a single node (MIN_NODES = 1) and allows Vertex AI to scale out to at most four nodes (MAX_NODES = 4).
If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
End of explanation
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
result_files = []
for prediction_result in prediction_results:
result_file = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
result_files.append(result_file)
print(result_files)
Explanation: Get the predicted embeddings
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
instance: The prediction request.
prediction: The prediction response.
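For orientation, a single output line looks roughly like this (schematic; the encoding is truncated and actual values will differ):
{"instance": {"data": "{\"user_id\": [\"0\"]}", "key": "key0"}, "prediction": {"key": "key0", "encoding": [0.12, -0.03, ...]}}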
End of explanation
embeddings = []
for result_file in result_files:
with tf.io.gfile.GFile(result_file, "r") as f:
instances = list(f)
for instance in instances:
instance = instance.replace('\\"', "'")
result = json.loads(instance)
prediction = result["prediction"]
key = prediction["key"][3:]
encoding = prediction["encoding"]
embedding = {"id": key, "embedding": encoding}
embeddings.append(embedding)
print("Number of embeddings", len(embeddings))
print("Encoding Dimensions", len(embeddings[0]["embedding"]))
print("Example embedding", embeddings[0])
with open("embeddings.json", "w") as f:
for i in range(len(embeddings)):
f.write(json.dumps(embeddings[i]).replace('"', "'"))
f.write("\n")
! head -n 2 embeddings.json
Explanation: Save the embeddings in JSONL format
Next, you store the predicted embeddings as a JSONL formatted file. Each embedding is stored as:
{ 'id': .., 'embedding': [ ... ] }
The embeddings for the index can be supplied in CSV, JSON, or Avro format.
Learn more about Embedding Formats for Indexing
End of explanation
EMBEDDINGS_URI = f"{BUCKET_URI}/embeddings/twotower/"
! gsutil cp embeddings.json {EMBEDDINGS_URI}
Explanation: Store the JSONL formatted embeddings in Cloud Storage
Next, you upload the training data to your Cloud Storage bucket.
End of explanation
DIMENSIONS = len(embeddings[0]["embedding"])
DISPLAY_NAME = "movies"
tree_ah_index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
display_name=DISPLAY_NAME,
contents_delta_uri=EMBEDDINGS_URI,
dimensions=DIMENSIONS,
approximate_neighbors_count=50,
distance_measure_type="DOT_PRODUCT_DISTANCE",
description="Two tower generated embeddings",
labels={"label_name": "label_value"},
# TreeAH specific parameters
leaf_node_embedding_count=100,
leaf_nodes_to_search_percent=7,
)
INDEX_RESOURCE_NAME = tree_ah_index.resource_name
print(INDEX_RESOURCE_NAME)
Explanation: Create Matching Engine Index
Next, you create the index for your embeddings. Currently, two indexing algorithms are supported:
create_tree_ah_index(): Shallow tree + Asymmetric hashing.
create_brute_force_index(): Linear search.
In this tutorial, you use the create_tree_ah_index() for production scale. The method is called with the following parameters:
display_name: A human readable name for the index.
contents_delta_uri: A Cloud Storage location for the embeddings, which are either to be inserted, updated or deleted.
dimensions: The number of dimensions of the input vector
approximate_neighbors_count: (for Tree AH) The default number of neighbors to find via approximate search before exact reordering is performed. Exact reordering is a procedure where results returned by an approximate search algorithm are reordered via a more expensive distance computation.
distance_measure_type: The distance measure used in nearest neighbor search.
SQUARED_L2_DISTANCE: Euclidean (L2) Distance
L1_DISTANCE: Manhattan (L1) Distance
COSINE_DISTANCE: Cosine Distance. Defined as 1 - cosine similarity.
DOT_PRODUCT_DISTANCE: Default value. Defined as a negative of the dot product.
description: A human readable description of the index.
labels: User metadata in the form of a dictionary.
leaf_node_embedding_count: Number of embeddings on each leaf node. The default value is 1000 if not set.
leaf_nodes_to_search_percent: The default percentage of leaf nodes that any query may be searched. Must be in range 1-100, inclusive. The default value is 10 (means 10%) if not set.
This may take up to 30 minutes.
Learn more about Configuring Matching Engine Indexes.
End of explanation
# This is for display only; you can name the range anything.
PEERING_RANGE_NAME = "vertex-ai-prediction-peering-range"
NETWORK = "default"
# NOTE: `prefix-length=16` means a CIDR block with mask /16 will be
# reserved for use by Google services, such as Vertex AI.
! gcloud compute addresses create $PEERING_RANGE_NAME \
--global \
--prefix-length=16 \
--description="peering range for Google service" \
--network=$NETWORK \
--purpose=VPC_PEERING
Explanation: Setup VPC peering network
To use a Matching Engine Index, you set up a VPC peering network between your project and the Vertex AI Matching Engine service project. This eliminates additional hops in network traffic and allows using the efficient gRPC protocol.
Learn more about VPC peering.
IMPORTANT: you can only set up one VPC peering to servicenetworking.googleapis.com per project.
Create VPC peering for default network
For simplicity, we set up VPC peering to the default network. You can create a different network for your project.
If you set up VPC peering with any other network, make sure that the network already exists and that your VM is running on that network.
End of explanation
! gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--network=$NETWORK \
--ranges=$PEERING_RANGE_NAME \
--project=$PROJECT_ID
Explanation: Create the VPC connection
Next, create the connection for VPC peering.
Note: If you get a PERMISSION DENIED error, you may not have the necessary role 'Compute Network Admin' set for your default service account. In the Cloud Console, do the following steps.
Go to IAM & Admin
Find your service account.
Click edit icon.
Select Add Another Role.
Enter 'Compute Network Admin'.
Select Save
End of explanation
! gcloud compute networks peerings list --network $NETWORK
Explanation: Check the status of your peering connections.
End of explanation
full_network_name = f"projects/{PROJECT_NUMBER}/global/networks/{NETWORK}"
Explanation: Construct the full network name
You need to have the full network resource name when you subsequently create an Matching Engine Index Endpoint resource for VPC peering.
End of explanation
index_endpoint = aiplatform.MatchingEngineIndexEndpoint.create(
display_name="index_endpoint_for_demo",
description="index endpoint description",
network=full_network_name,
)
INDEX_ENDPOINT_NAME = index_endpoint.resource_name
print(INDEX_ENDPOINT_NAME)
Explanation: Create an IndexEndpoint with VPC Network
Next, you create a Matching Engine Index Endpoint, similar to the concept of creating a Private Endpoint for prediction with a peer-to-peer network.
To create the Index Endpoint resource, you call the method create() with the following parameters:
display_name: A human readable name for the Index Endpoint.
description: A description for the Index Endpoint.
network: The VPC network resource name.
End of explanation
DEPLOYED_INDEX_ID = "tree_ah_twotower_deployed_" + TIMESTAMP
MIN_NODES = 1
MAX_NODES = 2
DEPLOY_COMPUTE = "n1-standard-16"
index_endpoint.deploy_index(
display_name="deployed_index_for_demo",
index=tree_ah_index,
deployed_index_id=DEPLOYED_INDEX_ID,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
print(index_endpoint.deployed_indexes)
Explanation: Deploy the Matching Engine Index to the Index Endpoint resource
Next, deploy your index to the Index Endpoint using the method deploy_index() with the following parameters:
display_name: A human readable name for the deployed index.
index: Your index.
deployed_index_id: A user assigned identifier for the deployed index.
machine_type: (optional) The VM instance type.
min_replica_count: (optional) Minimum number of VM instances for auto-scaling.
max_replica_count: (optional) Maximum number of VM instances for auto-scaling.
Learn more about Machine resources for Index Endpoint
End of explanation
# The number of nearest neighbors to be retrieved from database for each query.
NUM_NEIGHBOURS = 10
# Test query
queries = [embeddings[0]["embedding"], embeddings[1]["embedding"]]
matches = index_endpoint.match(
deployed_index_id=DEPLOYED_INDEX_ID, queries=queries, num_neighbors=NUM_NEIGHBOURS
)
for instance in matches:
print("INSTANCE")
for match in instance:
print(match)
Explanation: Create and execute an online query
Now that your index is deployed, you can make queries.
First, you construct example query vectors by reusing two of the embeddings generated earlier, to use as the queries to return matches for.
Next, you make the matching request using the method match(), with the following parameters:
deployed_index_id: The identifier of the deployed index.
queries: A list of queries (instances).
num_neighbors: The number of closest matches to return.
End of explanation
# Delete endpoint resource
endpoint.delete(force=True)
# Delete model resource
model.delete()
# Force undeployment of indexes and delete endpoint
try:
index_endpoint.delete(force=True)
except Exception as e:
print(e)
# Delete indexes
try:
tree_ah_index.delete()
brute_force_index.delete()
except Exception as e:
print(e)
# Delete Cloud Storage objects that were created
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil -m rm -r $OUTPUT_DIR
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation |
3,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Using DEAP to do multiobjective optimization with NSGA2</center>
<center>Yannis Merlet, sept 2018</center>
1st release of this notebook, still in progress though. If you have any suggestion to improve it, please let me know or PR it on github !
Context
At LOCIE Lab, the Réha-Parcs project aims to apply multi-objective optimization to building stock. The literature tends to favor genetic algorithms for this kind of problem.
In the field of building physics, the NSGA2 algorithm is widely used because it is known to be stable and gives good results in terms of convergence and diversity.
A good implementation of NSGA2 is available in the DEAP library. This library makes it possible to run complex many-objective optimizations and to use many different algorithms quite easily.
The aim of this notebook is to provide PhD students at LOCIE with a cheatsheet to quickly launch an optimization with this algorithm, with some explanation of the code. It is freely based on the NSGA2 example provided by the DEAP developers, since their code is short and efficient. I tried to add more explanations and my own small experience with this example here.
Some prior knowledge about how NSGA-II works is needed, as this notebook focuses on its implementation.
Initialization of the Algorithm
Step1: Let's import stuff...
Here 3 main components of DEAP are imported
Step2: Toolboxes, creators... it's quite DEAP-specific, but powerful !
Step3: DEAP is based on toolbox
Step4: You will need to initialize the population. The individual toolbox features an individual creator that transforms a list into an individual object
Step5: There are quite a few lines of code here, but nothing really hard to understand.
On the first line, a Pareto front object is created to store the individuals that belong to the first-order front. Then 6 lines of code check whether the fitness associated with each individual is valid and, if not, evaluate the individual and assign its fitness.
Then the algorithm goes through the selected individuals in pairs, keeps the better ones through tournament comparison, and applies crossover and mutation to produce the offspring.
The rest of the code is not that DEAP-specific and is self-sufficient for the remainder of the process.
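For reference, the generational loop described above follows the canonical DEAP NSGA-II example and looks roughly like the sketch below; the notebook's own main() may differ in details such as statistics or logging, and the sketch assumes pop has already been evaluated and passed once through toolbox.select so crowding distances exist.
for gen in range(1, NGEN):
    # Binary tournament based on dominance and crowding distance
    offspring = tools.selTournamentDCD(pop, len(pop))
    offspring = [toolbox.clone(ind) for ind in offspring]
    for ind1, ind2 in zip(offspring[::2], offspring[1::2]):
        if random.random() <= CXPB:
            toolbox.mate(ind1, ind2)
        toolbox.mutate(ind1)
        toolbox.mutate(ind2)
        # Invalidate fitnesses so the modified individuals are re-evaluated
        del ind1.fitness.values, ind2.fitness.values
    invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
    fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
    for ind, fit in zip(invalid_ind, fitnesses):
        ind.fitness.values = fit
    # Environmental selection: keep the MU best according to NSGA-II ranking
    pop = toolbox.select(pop + offspring, MU)
    pareto.update(pop)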
Evaluation function
This is wrong, but here is a way to test the code
Step6: Please replace that with the evaluation function you want !
In the Réha-Parcs project, we use a thermal simulation function to get the energy demand of a building and a comfort criterion calculated by the simulation.
Last but not least
Step7: If you would like to plot your pareto front it is possible as well pretty easily with the datas provided. | Python Code:
import random
import datetime
import multiprocessing
import numpy as np
from deap import base
from deap import creator
from deap import tools
### If your evaluation function is external...
# import YourEvaluationFunction as evaluation
Explanation: <center> Using DEAP to do multiobjective optimization with NSGA2</center>
<center>Yannis Merlet, sept 2018</center>
1st release of this notebook, still in progress though. If you have any suggestion to improve it, please let me know or PR it on github !
Context
At LOCIE Lab, the Réha-Parcs project aims to apply multi-objective optimization to building stock. The literature tends to favor genetic algorithms for this kind of problem.
In the field of building physics, the NSGA2 algorithm is widely used because it is known to be stable and gives good results in terms of convergence and diversity.
A good implementation of NSGA2 is available in the DEAP library. This library makes it possible to run complex many-objective optimizations and to use many different algorithms quite easily.
The aim of this notebook is to provide PhD students at LOCIE with a cheatsheet to quickly launch an optimization with this algorithm, with some explanation of the code. It is freely based on the NSGA2 example provided by the DEAP developers, since their code is short and efficient. I tried to add more explanations and my own small experience with this example here.
Some prior knowledge about how NSGA-II works is needed, as this notebook focuses on its implementation.
Initialization of the Algorithm
End of explanation
NGEN = 50 # Number of generations
MU = 100 # Number of individuals in the population
CXPB = 0.8 # Crossover probability
NDIM = 4 # Number of dimensions of the individual (= number of genes)
# Bounds on the first 3 genes
LOW1, UP1 = 0, 28
# Bounds on the last gene
LOW2, UP2 = 0, 5
BOUNDS = [(LOW1, UP1) for i in range(NDIM-1)] + [(LOW2, UP2)]
Explanation: Let's import stuff...
Here 3 main components of DEAP are imported:
- base contains all of the base classes and the toolbox generator
- the creator enables generating objects such as individuals or populations
- tools contains built-in functions related to optimisation, such as selection tournaments for the best individuals, and crossover and mutation operators
We will need constants as parameters of the algorithm:
End of explanation
toolbox = base.Toolbox()
def init_opti():
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0, -1.0))
creator.create("Individual", list, typecode='d', fitness=creator.FitnessMin)
toolbox.register("individual", init_ind, icls=creator.Individual, ranges=BOUNDS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluation)
toolbox.register("mate", tools.cxOnePoint)
toolbox.register("mutate", tools.mutUniformInt, low=[x[0] for x in BOUNDS],
up=[x[1] for x in BOUNDS], indpb=0.1 / NDIM)
toolbox.register("select", tools.selNSGA2)
Explanation: Toolboxes, creators... it's quite DEAP-specific, but powerful !
End of explanation
def init_ind(icls, ranges):
genome = list()
for p in ranges:
genome.append(np.random.randint(*p))
return icls(genome)
Explanation: DEAP is based on toolboxes: each toolbox entry is a component of the algorithm. Here 2 creators are used, and the first one is important because it creates the fitness attribute for each individual. Moreover, the weights tuple has as many elements as the evaluation function has objectives. Here, it means that we will minimize (because the weights are negative) 3 objectives. Should you wish to add an objective, you must add a term to this tuple, otherwise it won't be taken into account (it happened to me!)
Afterwards, the code registers toolbox entries that define the characteristics of the optimization, such as the definition of the individual, the evaluation function, the mutation and crossover operators, and the selection operator.
NSGA2 is used for the selection tournament described by Deb. I used a classic one-point crossover operator in this example. The mutation operator takes the bounds of each gene of the individual as parameters, otherwise it could mutate out of bounds and cause an error in the evaluation function.
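As a quick illustration (a minimal sketch that is not part of the original notebook), adding a fourth minimized objective would mean extending the weights tuple and returning a fourth value from the evaluation function:
# Minimal sketch (assumption): a 4-objective set-up reusing the DEAP imports above.
# The weights tuple must have exactly as many entries as the evaluation function returns values.
creator.create("FitnessMin4", base.Fitness, weights=(-1.0, -1.0, -1.0, -1.0))
creator.create("Individual4", list, typecode='d', fitness=creator.FitnessMin4)
def evaluation_4obj(ind):
    # Dummy values, one per objective; replace them with real objective computations
    return 1.0, 2.0, 3.0, 4.0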
End of explanation
def main():
pareto = tools.ParetoFront()
pop = toolbox.population(n=MU)
graph = []
data = []
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in pop if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
data.append(fitnesses)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
graph.append(ind.fitness.values)
# This is just to assign the crowding distance to the individuals
# no actual selection is done
pop = toolbox.select(pop, len(pop))
# Begin the generational process
for gen in range(1, NGEN):
# Vary the population
offspring = tools.selTournamentDCD(pop, len(pop))
offspring = [toolbox.clone(ind) for ind in offspring]
for ind1, ind2 in zip(offspring[::2], offspring[1::2]):
if random.random() <= CXPB:
toolbox.mate(ind1, ind2)
toolbox.mutate(ind1)
toolbox.mutate(ind2)
del ind1.fitness.values, ind2.fitness.values
# Evaluate the individuals with an invalid fitness
invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = toolbox.map(toolbox.evaluate, invalid_ind)
data.append(fitnesses)
for ind, fit in zip(invalid_ind, fitnesses):
ind.fitness.values = fit
graph.append(ind.fitness.values)
# Select the next generation population
pop = toolbox.select(pop + offspring, MU)
pareto.update(pop)
return pop, pareto, graph, data
Explanation: You will need to initialize the population. The individual toolbox entry uses an individual creator that transforms a list into an Individual object: this is the icls function. The initialization can be done in various ways depending on what the evaluation function needs as input. Here it needs integers within the bounds predefined earlier.
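For example (a small usage sketch, not in the original code), once init_opti() has been run you can inspect what the toolbox produces:
# Usage sketch (assumption): run this only after init_opti() has been called
ind = toolbox.individual()      # an Individual holding NDIM random integers within BOUNDS
pop = toolbox.population(n=5)   # a list of 5 such individuals
print(ind, ind.fitness.valid)   # the fitness is not valid until the individual is evaluated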
Launching the optimization: where the magic happens
My main function is long, I should factorize it but for now here it is. Comments and explanation are coming just after !
End of explanation
def evaluation(ind):
objective1 = random.randint(10,1000)
objective2 = random.randint(10,50)
objective3 = random.randint(200,500)
return objective1, objective2, objective3
Explanation: A lot of code lines here, but nothing really hard to understand.
On the first line, a Pareto front object is created to store the individuals that belong to the first-order front. Then six lines of code check whether the fitness associated with each individual is valid and, if not, evaluate the individual and assign its fitness.
Then the algorithm selects pairs of individuals through a tournament (comparing them and keeping the best ones) and applies crossover and mutation to each pair.
The code is not that DEAP specific and is self-sufficient for the rest of the process.
Evaluation function
This is wrong, but here is a way to test the code: I created a fast-computing random evaluation function. It works, but there will be no real evolutionary process involved as the function is not consistent.
End of explanation
if __name__ == "__main__":
init_opti()
# Multiprocessing pool: I want to compute faster
pool = multiprocessing.Pool()
toolbox.register("map", pool.map)
pop, optimal_front, graph_data, data = main()
# Saving the Pareto Front, for further exploitation
with open('./pareto_front.txt', 'w') as front:
for ind in optimal_front:
front.write(str(ind.fitness) + '\n')
Explanation: Please replace that with the evaluation function you want !
In the Réha-Parcs project, we use a thermal simulation function to get the energy demand of a building and a comfort criterion calculated by the simulation.
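In practice the evaluation is just a Python function returning one value per objective; a hypothetical sketch (the simulation call and attribute names below are assumptions, not the actual Réha-Parcs code) could look like:
# Hypothetical sketch only: wrap an external simulation in an evaluation function
def evaluation_simulation(ind):
    result = run_building_simulation(ind)   # hypothetical external model call
    return result.energy_demand, result.discomfort_hours, result.investment_cost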
Last but not least: let's launch it
I set it up to use multiprocessing for the example, mainly to show that it is integrated as a toolbox entry in DEAP: thanks to the developers for making it so easy! Be careful to start your pool after the creation of your creator objects, otherwise they won't be taken into account in the pool.
For more on the multiprocessing in Python, please refer to this notebook:
Unlock the power of your computer with multiprocessing computation
End of explanation
import matplotlib.pyplot as plt
x, y, z = zip(*[ind.fitness.values for ind in optimal_front])
fig = plt.figure()
fig.set_size_inches(15,10)
axe = plt.subplot2grid((2,2),(0,0))
axe.set_ylabel('Objective 2', fontsize=15)
axe.scatter(x, y, c='b', marker='+')
axe = plt.subplot2grid((2,2),(1,0))
axe.set_ylabel('Objective 3', fontsize=15)
axe.set_xlabel('Objective 1', fontsize=15)
axe.scatter(x, z, c='b', marker='+')
axe = plt.subplot2grid((2,2),(1,1))
axe.set_xlabel('Objective 2', fontsize=15)
scat = axe.scatter(y, z, c='b', marker='+')
plt.show()
Explanation: If you would like to plot your Pareto front, it is also quite easy with the data provided.
End of explanation |
3,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
follow-trend
1. S&P 500 index closes above its 200 day moving average
2. The stock closes above its upper band, buy
1. S&P 500 index closes below its 200 day moving average
2. The stock closes below its lower band, sell your long position.
(Compare the results of applying the same strategy to multiple securities.)
Step1: Some global data
Step2: Define symbols
Step3: Run Strategy
Step4: Summarize results | Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
import strategy
# Format price data
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
Explanation: follow-trend
1. S&P 500 index closes above its 200 day moving average
2. The stock closes above its upper band, buy
1. S&P 500 index closes below its 200 day moving average
2. The stock closes below its lower band, sell your long position.
(Compare the results of applying the same strategy to multiple securities.)
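A rough sketch of how these rules might be expressed with pandas is shown below (an illustration only; the real logic lives in the accompanying strategy module, and the column names, parameters and exact combination of conditions are assumptions):
# Illustrative sketch (assumptions): daily closes for the stock and for SPY as pandas Series
import pandas as pd

def trend_signals(close: pd.Series, spy_close: pd.Series, sma_period=200, percent_band=0.0):
    sma = close.rolling(sma_period).mean()
    upper_band = sma * (1 + percent_band / 100)
    lower_band = sma * (1 - percent_band / 100)
    regime_up = spy_close > spy_close.rolling(sma_period).mean()   # S&P 500 regime filter
    buy = regime_up & (close > upper_band)
    # one possible reading of the sell rules: exit if either condition triggers
    sell = (~regime_up) | (close < lower_band)
    return buy, sell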
End of explanation
capital = 10000
start = datetime.datetime(2000, 1, 1)
end = datetime.datetime.now()
Explanation: Some global data
End of explanation
SP500_Sectors = ['SPY', 'XLB', 'XLE', 'XLF', 'XLI', 'XLK', 'XLP', 'XLU', 'XLV', 'XLY']
Other_Sectors = ['RSP', 'DIA', 'IWM', 'QQQ', 'DAX', 'EEM', 'TLT', 'GLD', 'XHB']
Elite_Stocks = ['ADP', 'BMY', 'BRK-B', 'BTI', 'BUD', 'CL', 'CLX', 'CMCSA', 'DIS', 'DOV']
Elite_Stocks += ['GIS', 'HD', 'HRL', 'HSY', 'INTC', 'JNJ', 'K', 'KMB', 'KMI', 'KO']
Elite_Stocks += ['LLY', 'LMT', 'MCD', 'MO', 'MRK', 'MSFT', 'NUE', 'PG', 'PM', 'RDS-B']
Elite_Stocks += ['SO', 'T', 'UL', 'V', 'VZ', 'XOM']
# Pick one of the above
symbols = SP500_Sectors
options = {
'use_adj' : False,
'use_cache' : True,
'sma_period': 200,
'percent_band' : 0,
'use_regime_filter' : True
}
Explanation: Define symbols
End of explanation
strategies = pd.Series(dtype=object)
for symbol in symbols:
print(symbol, end=" ")
strategies[symbol] = strategy.Strategy(symbol, capital, start, end, options)
strategies[symbol].run()
Explanation: Run Strategy
End of explanation
metrics = ('start',
'annual_return_rate',
'max_closed_out_drawdown',
'sharpe_ratio',
'sortino_ratio',
'monthly_std',
'annual_std',
'pct_time_in_market',
'total_num_trades',
'pct_profitable_trades',
'avg_points')
df = pf.optimizer_summary(strategies, metrics)
pd.set_option('display.max_columns', len(df.columns))
df
# Averages
avg_annual_return_rate = df.loc['annual_return_rate'].mean()
avg_sharpe_ratio = df.loc['sharpe_ratio'].mean()
print('avg_annual_return_rate: {:.2f}'.format(avg_annual_return_rate))
print('avg_sharpe_ratio: {:.2f}'.format(avg_sharpe_ratio))
pf.plot_equity_curves(strategies)
Explanation: Summarize results
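As a small optional follow-up (not part of the original notebook), the summary DataFrame built above can also be used to rank the symbols, e.g. by Sharpe ratio:
# Optional follow-up (assumption: metrics are rows and symbols are columns, as in df above)
ranked = df.loc['sharpe_ratio'].astype(float).sort_values(ascending=False)
print(ranked)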
End of explanation |
3,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectrometer accuracy assessment using validation tarps
Background
In this lesson we will be examining the accuracy of the NEON Imaging Spectrometer (NIS) against targets with known reflectance. The targets consist of two 10 x 10 m tarps which have been specially designed to have 3% reflectance (black tarp) and 48% reflectance (white tarp) across all of the wavelengths collected by the NIS (see images below). During the Sept. 12 2016 flight over the Chequamegon-Nicolet National Forest, an area in D05 which is part of the Steigerwaldt (STEI) site, these tarps were deployed in a gravel pit. During the airborne overflight, observations were also taken over the tarps with an ASD field spectrometer. The ASD measurements provide a validation source against the airborne measurements.
To test the accuracy, we will utilize reflectance curves from the tarps as well as from the associated flight line and execute absolute and relative comparisons. The major error sources in the NIS can be generally categorized into the following sources
1) Calibration of the sensor
2) Quality of ortho-rectification
3) Accuracy of radiative transfer code and subsequent ATCOR interpolation
4) Selection of atmospheric input parameters
5) Terrain relief
6) Terrain cover
Note that the manual for ATCOR, the atmospheric correction software used by AOP, specifies the accuracy of reflectance retrievals to be between 3 and 5% of total reflectance. The tarps are located in a flat area; therefore, influences from terrain relief should be minimal. We will have to keep the remaining errors in mind as we analyze the data.
Objective
In this lesson we will learn how to retrieve reflectance curves from a pre-specified coordinate in a NEON AOP HDF5 file, learn how to read a tab-delimited text file, retrieve bad band window indexes and mask portions of a reflectance curve, plot reflectance curves on a graph and save the figure, and gain an understanding of some sources of uncertainty in NIS data.
Suggested pre-requisites
Working with NEON AOP Hyperspectral Data in Python Jupyter Notebooks
Learn to Efficiently Process NEON Hyperspectral Data
We'll start by adding all of the necessary libraries to our python script
Step1: As well as our function to read the hdf5 reflectance files and associated metadata
Step2: Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.
Step3: We want to pull the spectra from the airborne data at the center of each tarp to minimize any errors introduced by infiltrating light from adjacent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp, which are as follows
Step4: Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
Step5: Within the reflectance curves there are areas with noisy data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure details in the plots due to their anomalous values. The metadata associated with these band locations is contained in the metadata gathered by our function. We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands
Step6: Now join the list of indexes together into a single variable
Step7: The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD, using the '\t' delimiter to indicate tabs are used.
Step8: Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
Step9: The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordinates gives us the x and y index of the reflectance curve.
Step10: Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
Step11: This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, so a standard parameter is input for all sites.
Given that the 3% reflectance tarp has much lower overall reflectance, it may be more informative to determine what the absolute difference between the two curves is and plot that as well.
Step12: From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).
Let's now determine the result of the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between curves to the total reflectance | Python Code:
import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
Explanation: Spectrometer accuracy assessment using validation tarps
Background
In this lesson we will be examining the accuracy of the NEON Imaging Spectrometer (NIS) against targets with known reflectance. The targets consist of two 10 x 10 m tarps which have been specially designed to have 3% reflectance (black tarp) and 48% reflectance (white tarp) across all of the wavelengths collected by the NIS (see images below). During the Sept. 12 2016 flight over the Chequamegon-Nicolet National Forest, an area in D05 which is part of the Steigerwaldt (STEI) site, these tarps were deployed in a gravel pit. During the airborne overflight, observations were also taken over the tarps with an ASD field spectrometer. The ASD measurements provide a validation source against the airborne measurements.
To test the accuracy, we will utilize reflectance curves from the tarps as well as from the associated flight line and execute absolute and relative comparisons. The major error sources in the NIS can be generally categorized into the following sources
1) Calibration of the sensor
2) Quality of ortho-rectification
3) Accuracy of radiative transfer code and subsequent ATCOR interpolation
4) Selection of atmospheric input parameters
5) Terrain relief
6) Terrain cover
Note that the manual for ATCOR, the atmospheric correction software used by AOP, specifies the accuracy of reflectance retrievals to be between 3 and 5% of total reflectance. The tarps are located in a flat area; therefore, influences from terrain relief should be minimal. We will have to keep the remaining errors in mind as we analyze the data.
Objective
In this lesson we will learn how to retrieve reflectance curves from a pre-specified coordinate in a NEON AOP HDF5 file, learn how to read a tab-delimited text file, retrieve bad band window indexes and mask portions of a reflectance curve, plot reflectance curves on a graph and save the figure, and gain an understanding of some sources of uncertainty in NIS data.
Suggested pre-requisites
Working with NEON AOP Hyperspectral Data in Python Jupyter Notebooks
Learn to Efficiently Process NEON Hyperspectral Data
We'll start by adding all of the necessary libraries to our python script
End of explanation
def h5refl2array(h5_filename):
hdf5_file = h5py.File(h5_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
refl = hdf5_file[sitename]['Reflectance']
reflArray = refl['Reflectance_Data']
refl_shape = reflArray.shape
wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
#Create dictionary containing relevant metadata information
metadata = {}
metadata['shape'] = reflArray.shape
metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
#Extract no data value & set no data value to NaN\n",
metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
metadata['bad_band_window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad_band_window2'] = (refl.attrs['Band_Window_2_Nanometers'])
metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
mapInfo_string = str(mapInfo); #print('Map Info:',mapInfo_string)\n",
mapInfo_split = mapInfo_string.split(",")
#Extract the resolution & convert to floating decimal number
metadata['res'] = {}
metadata['res']['pixelWidth'] = mapInfo_split[5]
metadata['res']['pixelHeight'] = mapInfo_split[6]
#Extract the upper left-hand corner coordinates from mapInfo\n",
xMin = float(mapInfo_split[3]) #convert from string to floating point number\n",
yMax = float(mapInfo_split[4])
#Calculate the xMax and yMin values from the dimensions\n",
xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)\n",
yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)\n",
metadata['extent'] = (xMin,xMax,yMin,yMax),
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = xMin
metadata['ext_dict']['xMax'] = xMax
metadata['ext_dict']['yMin'] = yMin
metadata['ext_dict']['yMax'] = yMax
hdf5_file.close
return reflArray, metadata, wavelengths
Explanation: As well as our function to read the hdf5 reflectance files and associated metadata
End of explanation
print('Start CHEQ tarp uncertainty script')
h5_filename = 'C:/RSDI_2017/data/CHEQ/H5/NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5'
tarp_48_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_48_01_refl_bavg.txt'
tarp_03_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_03_02_refl_bavg.txt'
Explanation: Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.
End of explanation
tarp_48_center = np.array([727487,5078970])
tarp_03_center = np.array([727497,5078970])
Explanation: We want to pull the spectra from the airborne data at the center of each tarp to minimize any errors introduced by infiltrating light from adjacent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp, which are as follows:
48% reflectance tarp UTMx: 727487, UTMy: 5078970
3% reflectance tarp UTMx: 727497, UTMy: 5078970
Let's define these coordinates
End of explanation
[reflArray,metadata,wavelengths] = h5refl2array(h5_filename)
Explanation: Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
End of explanation
bad_band_window1 = (metadata['bad_band_window1'])
bad_band_window2 = (metadata['bad_band_window2'])
index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]
Explanation: Within the reflectance curves there are areas with noisy data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure details in the plots due to their anomalous values. The metadata associated with these band locations is contained in the metadata gathered by our function. We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands
End of explanation
index_bad_windows = index_bad_window1+index_bad_window2
Explanation: Now join the list of indexes together into a single variable
End of explanation
tarp_48_data = np.genfromtxt(tarp_48_filename, delimiter = '\t')
tarp_03_data = np.genfromtxt(tarp_03_filename, delimiter = '\t')
Explanation: The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD, using the '\t' delimiter to indicate tabs are used.
End of explanation
tarp_48_data[index_bad_windows] = np.nan
tarp_03_data[index_bad_windows] = np.nan
Explanation: Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
End of explanation
x_tarp_48_index = int((tarp_48_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_48_index = int((metadata['ext_dict']['yMax'] - tarp_48_center[1])/float(metadata['res']['pixelHeight']))
x_tarp_03_index = int((tarp_03_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_03_index = int((metadata['ext_dict']['yMax'] - tarp_03_center[1])/float(metadata['res']['pixelHeight']))
Explanation: The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordinates gives us the x and y index of the reflectance curve.
End of explanation
plt.figure(1)
tarp_48_reflectance = np.asarray(reflArray[y_tarp_48_index,x_tarp_48_index,:], dtype=np.float32)/metadata['scaleFactor']
tarp_48_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_48_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_48_data[:,1], label = 'ASD Reflectance')
plt.title('CHEQ 20160912 48% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Refelctance (%)')
plt.legend()
plt.savefig('CHEQ_20160912_48_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(2)
tarp_03_reflectance = np.asarray(reflArray[y_tarp_03_index,x_tarp_03_index,:], dtype=np.float32)/ metadata['scaleFactor']
tarp_03_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_03_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_03_data[:,1],label = 'ASD Reflectance')
plt.title('CHEQ 20160912 3% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Refelctance (%)')
plt.legend()
plt.savefig('CHEQ_20160912_3_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
End of explanation
plt.figure(3)
plt.plot(wavelengths,tarp_48_reflectance-tarp_48_data[:,1])
plt.title('CHEQ 20160912 48% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Refelctance Difference (%)')
plt.savefig('CHEQ_20160912_48_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(4)
plt.plot(wavelengths,tarp_03_reflectance-tarp_03_data[:,1])
plt.title('CHEQ 20160912 3% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Refelctance Difference (%)')
plt.savefig('CHEQ_20160912_3_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, so a standard parameter is input for all sites.
Given that the 3% reflectance tarp has much lower overall reflectance, it may be more informative to determine what the absolute difference between the two curves is and plot that as well.
End of explanation
plt.figure(5)
plt.plot(wavelengths,100*np.divide(tarp_48_reflectance-tarp_48_data[:,1],tarp_48_data[:,1]))
plt.title('CHEQ 20160912 48% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Refelctance Difference')
plt.ylim((-100,100))
plt.savefig('CHEQ_20160912_48_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(6)
plt.plot(wavelengths,100*np.divide(tarp_03_reflectance-tarp_03_data[:,1],tarp_03_data[:,1]))
plt.title('CHEQ 20160912 3% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Refelctance Difference')
plt.ylim((-100,150))
plt.savefig('CHEQ_20160912_3_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).
Let's now determine the result of the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between curves to the total reflectance
End of explanation |
3,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr4', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-HR4
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:49
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
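For cardinality 1.N properties the header reads PROPERTY VALUE(S), which suggests that DOC.set_value is called once per selected item; that repeated-call pattern is an assumption about the notebook's DOC object rather than something stated here, and the particular selection below is hypothetical:
# Hypothetical illustration only -- one set_value call per selected choice (assumed pattern)
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
DOC.set_value("surface pressure")
DOC.set_value("wind components")
DOC.set_value("temperature")
DOC.set_value("water vapour")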
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
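Overview properties are required free text, so a single descriptive sentence is enough; the wording below is invented from the vocabulary of this section only to show the pattern:
# Hypothetical illustration only -- free-text overview string
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
DOC.set_value("Two-stream shortwave scheme with correlated-k spectral integration.")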
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
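INTEGER properties are set with an unquoted number, as the DOC.set_value(value) comment (no quotation marks) indicates; the interval count below is hypothetical:
# Hypothetical illustration only -- unquoted integer
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
DOC.set_value(6)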
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
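BOOLEAN properties take the unquoted Python literals True or False; whether a real boundary layer scheme includes a counter-gradient term is model specific, so the value below is illustrative only:
# Hypothetical illustration only -- unquoted boolean literal
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
DOC.set_value(True)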
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
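Because this property is optional (cardinality 0.N), the cell can be left as a TODO when the deep convection scheme carries no interactive microphysics; if it does, the value is set like any other ENUM, e.g. (hypothetically):
# Hypothetical illustration only -- optional property, set only if applicable
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
DOC.set_value("single moment")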
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
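FLOAT properties are also unquoted, and the units requested here are Hz, so a 94 GHz cloud radar (a frequency widely used in CloudSat-like simulators, quoted here only as an illustration) would be entered in Hz rather than GHz:
# Hypothetical illustration only -- radar frequency in Hz, not GHz
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
DOC.set_value(94.0e9)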
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
3,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
APScheduler and PyMongo
The following is a simple example using APScheduler and PyMongo to pull down the price of bitcoin every minute using the <a href="http
Step1: First we created a function, get_price(), which APScheduler will call every minute. The function gets JSON from the CoinDesk API page and from this we extract the current price in Euro. We insert this value and the current time into our bitcoin collection
Step2: There are many different types of schedulers, with <a href="http
Step3: The following can be run while the kernal is still running and adding prices to the collection but will need to be done outside of ipython notebook. I also have spyder installed and was able to run "for price in bitcoin.find() | Python Code:
from pymongo import MongoClient
client = MongoClient()
bitcoin = client.test_database.bitcoin
import requests
response = requests.get("http://api.coindesk.com/v1/bpi/currentprice.json")
bitcoin_response = response.json()
print bitcoin_response['bpi']['EUR']['rate_float']
Explanation: APScheduler and PyMongo
The following is a simple example using APScheduler and PyMongo to pull down the price of bitcoin every minute using the <a href="http://www.coindesk.com/api/" target="_blank">CoinDesk API</a>, storing the prices in a MongoDB database and then importing them into pandas for plotting.
If running these notebooks make sure to have a mongod instance running.
End of explanation
from apscheduler.schedulers.blocking import BlockingScheduler
import datetime
def get_price():
response = requests.get("http://api.coindesk.com/v1/bpi/currentprice.json")
bitcoin_response = response.json()
price = bitcoin_response['bpi']['EUR']['rate_float']
time = datetime.datetime.now()
bitcoin.insert({"time" : time, "price" : price})
Explanation: First we created a function, get_price(), which APScheduler will call every minute. The function gets JSON from the CoinDesk API page and from this we extract the current price in Euro. We insert this value and the current time into our bitcoin collection
End of explanation
bitcoin.remove({}) #Added to empty out the collection the first time the code is run
scheduler = BlockingScheduler()
scheduler.add_job(get_price, 'interval', minutes=1)
try:
scheduler.start()
except (KeyboardInterrupt, SystemExit):
pass
Explanation: There are many different types of schedulers, with <a href="http://apscheduler.readthedocs.org/en/latest/modules/schedulers/blocking.html" target="_blank">BlockingScheduler</a> being described as the easiest to use. Using add_job() we add the function that we want called and tell it that it will be called at intervals of 1 minute. There are many other ways the function can be called and can be found in the <a href="http://apscheduler.readthedocs.org/en/latest/index.html" target="_blank">documents.</a>
I let this block run for a few minutes and then pressed the stop button to interrupt the kernel.
End of explanation
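As an added illustration (not part of the original example), the same job can be scheduled with APScheduler's other scheduler and trigger types; the sketch below uses a non-blocking BackgroundScheduler with a cron-style trigger and reuses the get_price function defined above.
from apscheduler.schedulers.background import BackgroundScheduler
# Run get_price at minute 0 of every hour without blocking the notebook.
background_scheduler = BackgroundScheduler()
background_scheduler.add_job(get_price, 'cron', minute=0)
background_scheduler.start()
# ... and call background_scheduler.shutdown() when finished.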
import pandas as pd
import matplotlib.pyplot as plt
%pylab inline
bitcoin_df = pd.DataFrame(list(bitcoin.find()))
bitcoin_df.head()
len(bitcoin_df)
plt.plot(bitcoin_df['price'])
Explanation: The following can be run while the kernel is still running and adding prices to the collection, but will need to be done outside of the IPython notebook. I also have Spyder installed and was able to run "for price in bitcoin.find(): print price"
Below we convert the collection to a pandas dataframe and then make a plot of the price movement.
End of explanation |
3,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enter State Farm
Step1: Setup batches
Step2: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)
Step3: Re-run sample experiments on full dataset
We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.
Single conv layer
Step4: Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.
Data augmentation
Step5: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.
Four conv/pooling pairs + dropout
Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.
Step6: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...
Imagenet conv features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
Step7: Batchnorm dense layers on pretrained conv layers
Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.
Step8: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.
Pre-computed data augmentation + dropout
We'll use our usual data augmentation parameters
Step9: We use those to create a dataset of convolutional features 5x bigger than the training set.
Step10: Let's include the real training data as well in its non-augmented form.
Step11: Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.
Step12: Based on some experiments the previous model works well, with bigger dense layers.
Step13: Now we can train the model as usual, with pre-computed augmented data.
Step14: Looks good - let's save those weights.
Step15: Pseudo labeling
We're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set.
To do this, we simply calculate the predictions of our model...
Step16: ...concatenate them with our training labels...
Step17: ...and fine-tune our model using that data.
Step18: That's a distinct improvement - even although the validation set isn't very big. This looks encouraging for when we try this on the test set.
Step19: Submit
We'll find a good clipping amount using the validation set, prior to submitting.
Step20: This gets 0.534 on the leaderboard.
The "things that didn't really work" section
You can safely ignore everything from here on, because they didn't really help.
Finetune some conv layers too
Step21: Ensembling | Python Code:
from __future__ import division, print_function
%matplotlib inline
#path = "data/state/"
path = "data/state/sample/"
from importlib import reload # Python 3
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
batch_size=64
Explanation: Enter State Farm
End of explanation
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
steps_per_epoch = int(np.ceil(batches.samples/batch_size))
validation_steps = int(np.ceil(val_batches.samples/(batch_size*2)))
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
Explanation: Setup batches
End of explanation
trn = get_data(path+'train')
val = get_data(path+'valid')
save_array(path+'results/val.dat', val)
save_array(path+'results/trn.dat', trn)
val = load_array(path+'results/val.dat')
trn = load_array(path+'results/trn.dat')
Explanation: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)
End of explanation
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Conv2D(32,(3,3), activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Conv2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches,
validation_steps=validation_steps)
model.optimizer.lr = 0.001
model.fit_generator(batches, steps_per_epoch, epochs=4, validation_data=val_batches,
validation_steps=validation_steps)
return model
model = conv1(batches)
Explanation: Re-run sample experiments on full dataset
We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.
Single conv layer
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
model.optimizer.lr = 0.0001
model.fit_generator(batches, steps_per_epoch, epochs=15, validation_data=val_batches,
validation_steps=validation_steps)
Explanation: Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.
Data augmentation
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Conv2D(32,(3,3), activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Conv2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Conv2D(128,(3,3), activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, steps_per_epoch, epochs=2, validation_data=val_batches,
validation_steps=validation_steps)
model.optimizer.lr=0.001
model.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches,
validation_steps=validation_steps)
model.optimizer.lr=0.00001
model.fit_generator(batches, steps_per_epoch, epochs=10, validation_data=val_batches,
validation_steps=validation_steps)
Explanation: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.
Four conv/pooling pairs + dropout
Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.
End of explanation
vgg = Vgg16()
model=vgg.model
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
test_batches = get_batches(path+'test', batch_size=batch_size*2, shuffle=False)
conv_feat = conv_model.predict_generator(batches, int(np.ceil(batches.samples/batch_size)))
conv_val_feat = conv_model.predict_generator(val_batches, int(np.ceil(val_batches.samples/(batch_size*2))))
conv_test_feat = conv_model.predict_generator(test_batches, int(np.ceil(test_batches.samples/(batch_size*2))))
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_val_feat.shape
Explanation: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...
Imagenet conv features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
End of explanation
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=2,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/conv8.h5')
Explanation: Batchnorm dense layers on pretrained conv layers
Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
da_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)
Explanation: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.
Pre-computed data augmentation + dropout
We'll use our usual data augmentation parameters:
End of explanation
da_conv_feat = conv_model.predict_generator(da_batches, 5*int(np.ceil((da_batches.samples)/(batch_size))), workers=3)
save_array(path+'results/da_conv_feat2.dat', da_conv_feat)
da_conv_feat = load_array(path+'results/da_conv_feat2.dat')
Explanation: We use those to create a dataset of convolutional features 5x bigger than the training set.
End of explanation
da_conv_feat = np.concatenate([da_conv_feat, conv_feat])
Explanation: Let's include the real training data as well in its non-augmented form.
End of explanation
da_trn_labels = np.concatenate([trn_labels]*6)
Explanation: Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.
End of explanation
def get_bn_da_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_da_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
Explanation: Based on some experiments the previous model works well, with bigger dense layers.
End of explanation
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.0001
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, epochs=4,
validation_data=(conv_val_feat, val_labels))
Explanation: Now we can train the model as usual, with pre-computed augmented data.
End of explanation
bn_model.save_weights(path+'models/da_conv8_1.h5')
Explanation: Looks good - let's save those weights.
End of explanation
val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)
Explanation: Pseudo labeling
We're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set.
To do this, we simply calculate the predictions of our model...
End of explanation
comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])
comb_feat = np.concatenate([da_conv_feat, conv_val_feat])
Explanation: ...concatenate them with our training labels...
End of explanation
bn_model.load_weights(path+'models/da_conv8_1.h5')
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=1,
validation_data=(conv_val_feat, val_labels))
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.00001
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, epochs=4,
validation_data=(conv_val_feat, val_labels))
Explanation: ...and fine-tune our model using that data.
End of explanation
bn_model.save_weights(path+'models/bn-ps8.h5')
Explanation: That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set.
End of explanation
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)
val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size*2)
np.mean(keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval())
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)
subm = do_clip(preds,0.93)
subm_name = path+'results/subm.gz'
classes = sorted(batches.class_indices, key=batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
Explanation: Submit
We'll find a good clipping amount using the validation set, prior to submitting.
End of explanation
#for l in get_bn_layers(p): conv_model.add(l) # this choice would give a weight shape error
for l in get_bn_da_layers(p): conv_model.add(l) # ... so probably this is the right one
for l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]):
l2.set_weights(l1.get_weights())
for l in conv_model.layers: l.trainable =False
for l in conv_model.layers[last_conv_idx+1:]: l.trainable =True
comb = np.concatenate([trn, val])
# not knowing what the experiment was about, added this to avoid a shape match error with comb using gen_t.flow
comb_pseudo = np.concatenate([trn_labels, val_pseudo])
gen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04,
shear_range=0.03, channel_shift_range=10, width_shift_range=0.08)
batches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
conv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, steps_per_epoch, epochs=1, validation_data=val_batches,
validation_steps=validation_steps)
conv_model.optimizer.lr = 0.0001
conv_model.fit_generator(batches, steps_per_epoch, epochs=3, validation_data=val_batches,
validation_steps=validation_steps)
for l in conv_model.layers[16:]: l.trainable =True
#- added compile instruction in order to avoid Keras 2.1 warning message
conv_model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.optimizer.lr = 0.00001
conv_model.fit_generator(batches, steps_per_epoch, epochs=8, validation_data=val_batches,
validation_steps=validation_steps)
conv_model.save_weights(path+'models/conv8_ps.h5')
#conv_model.load_weights(path+'models/conv8_da.h5') # conv8_da.h5 was not saved in this notebook
val_pseudo = conv_model.predict(val, batch_size=batch_size*2)
save_array(path+'models/pseudo8_da.dat', val_pseudo)
Explanation: This gets 0.534 on the leaderboard.
The "things that didn't really work" section
You can safely ignore everything from here on, because they didn't really help.
Finetune some conv layers too
End of explanation
drivers_ds = pd.read_csv(path+'driver_imgs_list.csv')
drivers_ds.head()
img2driver = drivers_ds.set_index('img')['subject'].to_dict()
driver2imgs = {k: g["img"].tolist()
for k,g in drivers_ds[['subject', 'img']].groupby("subject")}
# It seems this function is not used in this notebook
def get_idx(driver_list):
return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list]
# drivers = driver2imgs.keys() # Python 2
drivers = list(driver2imgs) # Python 3
rnd_drivers = np.random.permutation(drivers)
ds1 = rnd_drivers[:len(rnd_drivers)//2]
ds2 = rnd_drivers[len(rnd_drivers)//2:]
# The following cells seem to require some preparation code not included in this notebook
models=[fit_conv([d]) for d in drivers]
models=[m for m in models if m is not None]
all_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models])
avg_preds = all_preds.mean(axis=0)
avg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1)
keras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()
keras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()
Explanation: Ensembling
End of explanation |
3,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neutron Diffusion Equation Criticality Eigenvalue Calculation
Description
Step1: Material Properties
Step2: Slab Geometry Width and Discretization
Step3: Generation of Leakage and Absorption Matrices
Step4: Boundary Conditions $(\phi(0) = \phi(L) = 0)$
Step5: Power Iteration Scheme for k-eigenvalue and Flux
Algorithm | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Neutron Diffusion Equation Criticality Eigenvalue Calculation
Description: Solves neutron diffusion equation (NDE) in slab geometry. Finds width of critical slab using one-speed diffusion theory with zero flux boundary conditions on the edges.
Neutron Diffusion Equation in Slab with Fission Source
The NDE in a slab is given by
$$ -\frac{d}{dx}D(x)\frac{d\phi(x)}{dx} + \Sigma_a \phi(x) = \frac{1}{k}\nu
\Sigma_f \phi(x) $$
where $D(x)$ is the diffusion coefficient, $\Sigma_a$ and $\Sigma_f$ are
the absorption and fission macroscopic cross sections, $\nu$ is the
average number of neutrons emitted in fission, and $k$ is k-effective.
Import Python Libraries
End of explanation
D = 0.9
nusigf = 0.70
siga = 0.066
Explanation: Material Properties
End of explanation
#Lx = np.pi*((nusigf-siga)/D)**(-0.5)
Lx = 15.0
N = 50;
h = Lx/(N-1)
x = np.zeros(N)
for i in range(N-1):
x[i+1] = x[i] + h
Explanation: Slab Geometry Width and Discretization
End of explanation
L = np.zeros((N,N))
A = np.zeros((N,N))
M = np.zeros((N,N))
for i in range(N):
L[i][i] = L[i][i] + (-2*(-D/(h**2)))
for i in range(1,N):
L[i][i-1] = L[i][i-1] + (1*(-D/h**2))
for i in range(N-1):
L[i][i+1] = L[i][i+1] + (1*(-D/h**2))
for i in range(N):
A[i][i] = A[i][i] + siga
M = L + A
Explanation: Generation of Leakage and Absorption Matrices
End of explanation
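For clarity (an added note, not in the original): L and A come from the standard central-difference discretisation of the slab diffusion equation on the uniform grid,
$$ -D\,\frac{\phi_{i+1} - 2\phi_i + \phi_{i-1}}{h^2} + \Sigma_a\,\phi_i = \frac{1}{k}\,\nu\Sigma_f\,\phi_i, $$
so L carries the tridiagonal stencil with $2D/h^2$ on the diagonal and $-D/h^2$ off the diagonal, A holds the diagonal $\Sigma_a$ term, and $M = L + A$.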
M[0][0] = 1
M[0][1] = 0
M[N-1][N-1] = 1
M[N-1][N-2] = 0
phi0 = np.ones((N,1))
phi0[0] = 0
phi0[N-1] = 0
Explanation: Boundary Conditions $(\phi(0) = \phi(L) = 0)$
End of explanation
tol = 1e-15
k = 1.00
for i in range(100):
kold = k
psi = np.linalg.solve(M,nusigf*phi0)
k = sum(nusigf*psi)/sum(nusigf*phi0)
phi0 = (1/k)*psi
phi0[0] = 0
phi0[N-1] = 0
residual = np.abs(k-kold)
if residual <= tol:
break
plt.plot(x,phi0)
plt.xlabel('Slab (cm)')
plt.ylabel('Neutron Flux')
plt.grid()
print "k-effective = ", k
print " approx alpha = ", (k-1)/k * sum(nusigf*phi0)/sum(phi0)
Explanation: Power Iteration Scheme for k-eigenvalue and Flux
Algorithm: We input an initial flux $\phi^{(0)}(x)$ and k-effective value $k_0$ and solve the equation:
$$ M \psi^{(0)}(x) = \frac{1}{k} F \phi^{(0)}(x) $$
for $\psi^{(0)}(x)$. Using this function, we calculate the next k-effective iterate using
$$ k^{n+1} = \frac{\sum \nu \Sigma_f \psi^{(n)}(x)}{\sum \nu \Sigma_f \phi^{(n)}(x)} $$
The new flux $\phi^{(n+1)}(x)$ is calculated
$$ \phi^{(n+1)}(x) = \frac{1}{k} \psi^{(n)}(x) $$.
This is done until the two-norm difference between k-effective iterations is less than some tolerance.
End of explanation |
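As an added sanity check (not in the original notebook), the converged k can be compared with the dominant eigenvalue of $M^{-1}F$ computed directly, where $F = \nu\Sigma_f I$ with the zero-flux boundary rows excluded from the fission source; the snippet below assumes M, nusigf and N from the cells above.
F = nusigf * np.eye(N)
F[0, 0] = 0.0      # zero-flux boundary rows carry no fission source
F[N-1, N-1] = 0.0
eigenvalues = np.linalg.eigvals(np.linalg.solve(M, F))   # eigenvalues of M^-1 F
print("largest eigenvalue of M^-1 F (should be close to k):")
print(np.max(eigenvalues.real))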
3,891 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm looking for a fast solution to MATLAB's accumarray in numpy. The accumarray accumulates the elements of an array which belong to the same index. An example: | Problem:
import numpy as np
a = np.arange(1,11)
accmap = np.array([0,1,0,0,0,1,1,2,2,1])
result = np.bincount(accmap, weights = a) |
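# Added illustration: np.bincount sums the weights in `a` per index in `accmap`,
# so for the arrays above result == array([13., 25., 17.])  (1+3+4+5, 2+6+7+10, 8+9).
print(result)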
3,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problems on Arrays and Strings
P1. Is Unique
Step1: P2. Check Permutation
Step2: P3. URLify
Step3: P4. Palindrome Permutation
Step4: P5. One Away
Step5: P6. String Compression
Step6: P7. Rotate Matrix
Step7: P8. Zero Matrix | Python Code:
# With Hashmap.
# Time Complexity: O(n)
def if_unique(string):
chr_dict = {}
for char in string:
if char not in chr_dict:
chr_dict[char] = 1
else:
return False
return True
# Without additional memory.
# Time Complexity: O(n^2)
def if_unique_m(string):
for idx, char in enumerate(string):
for j in range(idx + 1, len(string)):
if char == string[j]:
return False
return True
# Test cases.
print(if_unique("1234"), if_unique_m("1234"))
print(if_unique("12344"), if_unique_m("12344"))
print(if_unique("1214"), if_unique_m("1214"))
print(if_unique("1"), if_unique_m("1"))
print(if_unique(""), if_unique_m(""))
Explanation: Problems on Arrays and Strings
P1. Is Unique: Implement an algorithm to determine if a string has all unique characters. What if you cannot use additional data structures?
End of explanation
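As an added middle ground (not in the original notebook): if auxiliary structures such as a hash map are off-limits but sorting is allowed, sorting the characters first lets duplicates be found by comparing neighbours.
# Sort, then compare adjacent characters.
# Time Complexity: O(n log n)
def if_unique_s(string):
    chars = sorted(string)
    for i in range(len(chars) - 1):
        if chars[i] == chars[i + 1]:
            return False
    return True

print(if_unique_s("1234"), if_unique_s("12344"))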
# Using a hashmap.
# Checking if str1 is a permutation of str2.
# Assumptions: The strings can have repeated characters.
# Time Complexity: O(n)
def if_permute(str1, str2):
if len(str1) != len(str2):
return False
def get_chr_dict(string):
chr_dict = {}
for char in string:
if char not in chr_dict:
chr_dict[char] = 1
else:
chr_dict[char] += 1
return chr_dict
str1_d = get_chr_dict(str1) # String 1.
str2_d = get_chr_dict(str2) # String 2.
# Compare dictionaries.
for char in str1_d:
if char not in str2_d or str2_d[char] != str1_d[char]:
return False
return True
# Test Cases
print(if_permute("", ""))
print(if_permute("abc", "abc"))
print(if_permute("abbc", "abbc"))
print(if_permute("abcc", "abcc"))
print(if_permute("aaa", "aaa"))
print(if_permute("aaad", "aaac"))
Explanation: P2. Check Permutation: Given two strings, write a method to decide if one is the permutation of the other.
End of explanation
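For comparison, a shorter O(n log n) sketch (an addition, not the original solution) simply compares sorted copies of the two strings.
# Sort both strings and compare them directly.
def if_permute_sorted(str1, str2):
    return sorted(str1) == sorted(str2)

print(if_permute_sorted("abc", "cab"), if_permute_sorted("abc", "abd"))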
# Replace spaces with %20 characters.
# Time Complexity: O(n)
def replace_space(string):
parts = string.split(" ")
url = ""
for p in parts:
if p != "":
url += p + "%20"
return url[:-3]
print(replace_space("Mr John Smith "))
print(replace_space(""))
print(replace_space(" John Smith"))
Explanation: P3. URLify: Write a method to replace all spaces in a string with '%20'. You may assume that the string has sufficient space at the end to hold the additional characters, and that you are given the "true" length of the string.
End of explanation
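The problem statement assumes the buffer already has the extra space at the end; the sketch below (an addition, with a hypothetical urlify_in_place helper) follows that in-place style by writing backwards over a character list from the true length.
# Fill from the back so characters that still need to be read are not overwritten.
def urlify_in_place(char_list, true_length):
    write = len(char_list) - 1
    for read in range(true_length - 1, -1, -1):
        if char_list[read] == " ":
            char_list[write - 2:write + 1] = ["%", "2", "0"]
            write -= 3
        else:
            char_list[write] = char_list[read]
            write -= 1
    return "".join(char_list)

buffer = list("Mr John Smith") + [" "] * 4  # two spaces -> four extra slots needed
print(urlify_in_place(buffer, 13))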
# Build dictionary of all characters in the string and check if all even.
def check_palin_permute(string):
c_dict = {}
for char in string:
        if char != " ":
if char in c_dict:
c_dict[char] += 1
else:
c_dict[char] = 1
num_1 = 0
for char in c_dict:
if c_dict[char]%2 == 1:
num_1 += 1
if num_1 > 1:
return False
return True
print(check_palin_permute("tact coa"))
Explanation: P4. Palindrome Permutation: Given a string, write a function to check if it is a permutation of a palindrome. A palindrome is a word or phrase that is the same forwards and backwards. A permutation is a rearrangement of letters. The palidrome does not need to be limited to just dictionary words.
End of explanation
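A common follow-up, sketched here as an addition: track character parities in a single bit vector; a palindrome permutation leaves at most one bit set.
# Toggle one bit per character and check that at most one bit remains set.
def check_palin_permute_bits(string):
    bit_vector = 0
    for char in string:
        if char != " ":
            bit_vector ^= 1 << ord(char)
    return bit_vector & (bit_vector - 1) == 0

print(check_palin_permute_bits("tact coa"))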
# Check the length of the strings to find which operation needs to be performed i.e. insert, delete or replace.
# Time Complexity: O(n)
def edit_distance(str1, str2) -> bool:
    if abs(len(str1) - len(str2)) > 1:
        return False
    i = 0; j = 0; edits = 0
    while(i < len(str1) and j < len(str2)):
        if str1[i] != str2[j]:  # Either replace, or skip the extra character in the longer string.
            edits += 1
            if edits > 1:
                return False
            if len(str1) > len(str2):
                i += 1  # Deletion from str1: advance only i.
                continue
            elif len(str1) < len(str2):
                j += 1  # Insertion into str2: advance only j.
                continue
        i += 1; j += 1
    return True
print(edit_distance("pale", "ple"))
print(edit_distance("pale", "bake"))
print(edit_distance("pales", "pale"))
print(edit_distance("pale", "bale"))
print(edit_distance("pales", "bale"))
Explanation: P5. One Away: There are three types of edits that can be performed on strings: insert a character, remove a character or replace a character. Given two strings, write a function to check if they are one edit (or zero edits) away.
End of explanation
# Perform running compression on a string.
# Time Complexity: O(n)
def compress(string: str) -> str:
com_str = ""
count = 0
for i in range(0, len(string) - 1):
if string[i] == string[i+1]:
count += 1
else:
com_str += string[i] + str(count + 1)
count = 0 # Reset count.
# Edge case for last character.
if string[i] == string[i+1]:
com_str += string[i] + str(count + 1)
else:
com_str += string[i+1] + str(1)
if len(com_str) > len(string):
return string
return com_str
print(compress("aabbbcdefgFFFFFFFFFc"))
print(compress("aabbbcdefgFFFFFFFFF"))
print(compress("aabbbcdefgFFFF"))
Explanation: P6. String Compression: Implement a method to perform basic string compression using the counts of repeated characters. For example, the string "aabcccccaaa" would become a2b1c5a3. If the "compressed" string would not become smaller than the original string, your method should return the original string. You can assume the string has only uppercase and lowercase letters (a-z)
End of explanation
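For reference, an idiomatic sketch (added here, not part of the original) performs the same run-length encoding with itertools.groupby.
from itertools import groupby

def compress_groupby(string: str) -> str:
    com_str = "".join("{}{}".format(char, len(list(group))) for char, group in groupby(string))
    return com_str if len(com_str) < len(string) else string

print(compress_groupby("aabcccccaaa"))  # expected: a2b1c5a3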
# Rotate a matrix by 90 degrees into a new matrix (not in-place; see the in-place sketch further below).
# Given: The matrix is a square matrix.
# Time Complexity: O(n^2)
# Space Complexity: O(n^2) for the new matrix.
from typing import List
Matrix = List[List[int]]
def rotate_matrix(mat: Matrix) -> Matrix:
mat_size = (len(mat), len(mat[0]))
# Create Matrix of equal size.
rot = [[] for i in range(0 , mat_size[0])]
for i in range(mat_size[0] - 1, -1, -1):
for j in range(0, mat_size[1]):
rot[j].append(mat[i][j])
return rot
rotate_matrix([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
# print(swap(2, 3))
Explanation: P7. Rotate Matrix: Given an image represented by an NxN matrix, where each pixel in the image is 4 bytes, write a method to rotate the image by 90 degrees. Can you do this in place?
End of explanation
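To answer the in-place follow-up, here is a sketch (an addition, reusing the Matrix alias defined above) that rotates layer by layer with a four-way swap, so no second matrix is allocated.
# In-place clockwise rotation using layer-by-layer four-way swaps.
def rotate_matrix_in_place(mat: Matrix) -> Matrix:
    n = len(mat)
    for layer in range(n // 2):
        first, last = layer, n - 1 - layer
        for i in range(first, last):
            offset = i - first
            top = mat[first][i]                                   # save top
            mat[first][i] = mat[last - offset][first]             # left -> top
            mat[last - offset][first] = mat[last][last - offset]  # bottom -> left
            mat[last][last - offset] = mat[i][last]               # right -> bottom
            mat[i][last] = top                                    # top -> right
    return mat

print(rotate_matrix_in_place([[0, 1, 2], [3, 4, 5], [6, 7, 8]]))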
# NOTE: Zeroing whole rows and columns as soon as a zero is found can cascade - the freshly written zeros trigger further zeroing - so large parts of the matrix can end up zero; recording the zero rows/columns first (see the two-pass sketch below) avoids this.
# Make a zero matrix from a given matrix.
# Time complexity: O(n^3)
Matrix = List[List[int]]
def zero_matrix(mat: Matrix) -> Matrix:
zero_c = {}
zero_r = {}
mat_size = (len(mat), len(mat[0]))
for i in range(0, mat_size[0]):
for j in range(0, mat_size[1]):
if mat[i][j] == 0 and i not in zero_r and j not in zero_c:
for k in range(0, mat_size[0]):
mat[k][j] = 0
for k in range(0, mat_size[1]):
mat[i][k] = 0
zero_r[k] = 1
zero_c[k] = 1
return mat
print(zero_matrix([[0, 1, 2], [3, 4, 5], [6, 7, 8]]))
Explanation: P8. Zero Matrix: Write an algorithm such that if an element in an NxN matrix is 0, its entire row and column are set to 0.
End of explanation |
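A two-pass sketch (an addition, not the original cell): record which rows and columns of the original matrix contain a zero before modifying anything, which avoids the cascading zeros mentioned in the note above.
# First pass records zero rows/columns, second pass clears them.
def zero_matrix_two_pass(mat: Matrix) -> Matrix:
    rows = {i for i, row in enumerate(mat) for val in row if val == 0}
    cols = {j for row in mat for j, val in enumerate(row) if val == 0}
    for i, row in enumerate(mat):
        for j in range(len(row)):
            if i in rows or j in cols:
                mat[i][j] = 0
    return mat

print(zero_matrix_two_pass([[1, 0, 1], [1, 1, 1], [1, 1, 0]]))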
3,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Latent Function Inference with Pyro + GPyTorch (Low-Level Interface)
Overview
In this example, we will give an overview of the low-level Pyro-GPyTorch integration.
The low-level interface makes it possible to write GP models in a Pyro-style -- i.e. defining your own model and guide functions.
These are the key differences between the high-level and low-level interface
Step1: This example uses a GP to infer a latent function $\lambda(x)$, which parameterises the exponential distribution
Step2: Using the low-level Pyro/GPyTorch interface
The low-level iterface should look familiar if you've written Pyro models/guides before. We'll use a gpytorch.models.ApproximateGP object to model the GP. To use the low-level interface, this object needs to define 3 functions
Step3: Performing inference with Pyro
Unlike all the other examples in this library, PyroGP models use Pyro's inference and optimization classes (rather than the classes provided by PyTorch).
If you are unfamiliar with Pyro's inference tools, we recommend checking out the Pyro SVI tutorial.
Step4: In this example, we are only performing inference over the GP latent function (and its associated hyperparameters). In later examples, we will see that this basic loop also performs inference over any additional latent variables that we define.
Making predictions
For some problems, we simply want to use Pyro to perform inference over latent variables. However, we can also use the models' (approximate) predictive posterior distribution. Making predictions with a PyroGP model is exactly the same as for standard GPyTorch models. | Python Code:
import math
import torch
import pyro
import tqdm
import gpytorch
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Latent Function Inference with Pyro + GPyTorch (Low-Level Interface)
Overview
In this example, we will give an overview of the low-level Pyro-GPyTorch integration.
The low-level interface makes it possible to write GP models in a Pyro-style -- i.e. defining your own model and guide functions.
These are the key differences between the high-level and low-level interface:
High level interface
Base class is gpytorch.models.PyroGP.
GPyTorch automatically defines the model and guide functions for Pyro.
Best used when prediction is the primary goal
Low level interface
Base class is gpytorch.models.ApproximateGP.
User defines the model and guide functions for Pyro.
Best used when inference is the primary goal
End of explanation
# Here we specify a 'true' latent function lambda
scale = lambda x: np.sin(2 * math.pi * x) + 1
# Generate synthetic data
# here we generate some synthetic samples
NSamp = 100
X = np.linspace(0, 1, 100)
fig, (lambdaf, samples) = plt.subplots(1, 2, figsize=(10, 3))
lambdaf.plot(X,scale(X))
lambdaf.set_xlabel('x')
lambdaf.set_ylabel('$\lambda$')
lambdaf.set_title('Latent function')
Y = np.zeros_like(X)
for i,x in enumerate(X):
Y[i] = np.random.exponential(scale(x), 1)
samples.scatter(X,Y)
samples.set_xlabel('x')
samples.set_ylabel('y')
samples.set_title('Samples from exp. distrib.')
train_x = torch.tensor(X).float()
train_y = torch.tensor(Y).float()
Explanation: This example uses a GP to infer a latent function $\lambda(x)$, which parameterises the exponential distribution:
$$y \sim \text{Exponential} (\lambda),$$
where:
$$\lambda = \exp(f) \in (0,+\infty)$$
is a GP link function, which transforms the latent gaussian process variable:
$$f \sim GP \in (-\infty,+\infty).$$
In other words, given inputs $X$ and observations $Y$ drawn from exponential distribution with $\lambda = \lambda(X)$, we want to find $\lambda(X)$.
End of explanation
class PVGPRegressionModel(gpytorch.models.ApproximateGP):
def __init__(self, num_inducing=64, name_prefix="mixture_gp"):
self.name_prefix = name_prefix
# Define all the variational stuff
inducing_points = torch.linspace(0, 1, num_inducing)
variational_strategy = gpytorch.variational.VariationalStrategy(
self, inducing_points,
gpytorch.variational.CholeskyVariationalDistribution(num_inducing_points=num_inducing)
)
        # Standard initialization
super().__init__(variational_strategy)
# Mean, covar, likelihood
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean = self.mean_module(x)
covar = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean, covar)
def guide(self, x, y):
# Get q(f) - variational (guide) distribution of latent function
function_dist = self.pyro_guide(x)
# Use a plate here to mark conditional independencies
with pyro.plate(self.name_prefix + ".data_plate", dim=-1):
# Sample from latent function distribution
pyro.sample(self.name_prefix + ".f(x)", function_dist)
def model(self, x, y):
pyro.module(self.name_prefix + ".gp", self)
# Get p(f) - prior distribution of latent function
function_dist = self.pyro_model(x)
# Use a plate here to mark conditional independencies
with pyro.plate(self.name_prefix + ".data_plate", dim=-1):
# Sample from latent function distribution
function_samples = pyro.sample(self.name_prefix + ".f(x)", function_dist)
# Use the link function to convert GP samples into scale samples
scale_samples = function_samples.exp()
# Sample from observed distribution
return pyro.sample(
self.name_prefix + ".y",
pyro.distributions.Exponential(scale_samples.reciprocal()), # rate = 1 / scale
obs=y
)
model = PVGPRegressionModel()
Explanation: Using the low-level Pyro/GPyTorch interface
The low-level interface should look familiar if you've written Pyro models/guides before. We'll use a gpytorch.models.ApproximateGP object to model the GP. To use the low-level interface, this object needs to define 3 functions:
forward(x) - which computes the prior GP mean and covariance at the supplied times.
guide(x) - which defines the approximate GP posterior.
model(x) - which does the following 3 things
Computes the GP prior at x
Converts GP function samples into scale function samples, using the link function defined above.
Sample from the observed distribution p(y | f). (This takes the place of a gpytorch Likelihood that we would've used in the high-level interface).
End of explanation
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
num_iter = 2 if smoke_test else 200
num_particles = 1 if smoke_test else 256
def train():
optimizer = pyro.optim.Adam({"lr": 0.1})
elbo = pyro.infer.Trace_ELBO(num_particles=num_particles, vectorize_particles=True, retain_graph=True)
svi = pyro.infer.SVI(model.model, model.guide, optimizer, elbo)
model.train()
iterator = tqdm.notebook.tqdm(range(num_iter))
for i in iterator:
model.zero_grad()
loss = svi.step(train_x, train_y)
iterator.set_postfix(loss=loss, lengthscale=model.covar_module.base_kernel.lengthscale.item())
%time train()
Explanation: Performing inference with Pyro
Unlike all the other examples in this library, PyroGP models use Pyro's inference and optimization classes (rather than the classes provided by PyTorch).
If you are unfamiliar with Pyro's inference tools, we recommend checking out the Pyro SVI tutorial.
End of explanation
# Here's a quick helper function for getting smoothed percentile values from samples
def percentiles_from_samples(samples, percentiles=[0.05, 0.5, 0.95]):
num_samples = samples.size(0)
samples = samples.sort(dim=0)[0]
# Get samples corresponding to percentile
percentile_samples = [samples[int(num_samples * percentile)] for percentile in percentiles]
# Smooth the samples
kernel = torch.full((1, 1, 5), fill_value=0.2)
    percentile_samples = [
torch.nn.functional.conv1d(percentile_sample.view(1, 1, -1), kernel, padding=2).view(-1)
for percentile_sample in percentile_samples
]
return percentile_samples
# define test set (optionally on GPU)
denser = 2 # make test set 2 times denser than the training set
test_x = torch.linspace(0, 1, denser * NSamp).float()#.cuda()
model.eval()
with torch.no_grad():
output = model(test_x)
# Get E[exp(f)] via f_i ~ GP, 1/n \sum_{i=1}^{n} exp(f_i).
# Similarly get the 5th and 95th percentiles
samples = output(torch.Size([1000])).exp()
lower, mean, upper = percentiles_from_samples(samples)
# Draw some simulated y values
scale_sim = model(train_x)().exp()
y_sim = pyro.distributions.Exponential(scale_sim.reciprocal())()
# visualize the result
fig, (func, samp) = plt.subplots(1, 2, figsize=(12, 3))
line, = func.plot(test_x, mean.detach().cpu().numpy(), label='GP prediction')
func.fill_between(
test_x, lower.detach().cpu().numpy(),
upper.detach().cpu().numpy(), color=line.get_color(), alpha=0.5
)
func.plot(test_x, scale(test_x), label='True latent function')
func.legend()
# sample from p(y|D,x) = \int p(y|f) p(f|D,x) df (doubly stochastic)
samp.scatter(train_x, train_y, alpha = 0.5, label='True train data')
samp.scatter(train_x, y_sim.cpu().detach().numpy(), alpha=0.5, label='Sample from the model')
samp.legend()
Explanation: In this example, we are only performing inference over the GP latent function (and its associated hyperparameters). In later examples, we will see that this basic loop also performs inference over any additional latent variables that we define.
Making predictions
For some problems, we simply want to use Pyro to perform inference over latent variables. However, we can also use the models' (approximate) predictive posterior distribution. Making predictions with a PyroGP model is exactly the same as for standard GPyTorch models.
End of explanation |
3,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
Introduction to GIS scripting
May, 2017
© 2017, Stijn Van Hoey (stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
Step1: <big><center>To run a cell
Step2: Writing code is what you will do most during this course!
Markdown
Text cells, using Markdown syntax. With the syntax, you can make text bold or italic, amongst many other things...
list
with
items
Link to interesting resources or images
Step3: Help
Step4: <div class="alert alert-success">
<b>EXERCISE</b>
Step5: edit mode to command mode
edit mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
command mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
To start editing, click inside a cell or
<img src="../img/enterbutton.png" alt="Key enter" style="width
Step6: %%timeit
Step7: %lsmagic
Step8: Let's get started! | Python Code:
from IPython.display import Image
Image(url='http://python.org/images/python-logo.gif')
Explanation: <p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
Introduction to GIS scripting
May, 2017
© 2017, Stijn Van Hoey (stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
# Code cell, then we are using python
print('Hello DS')
DS = 10
print(DS + 5) # Yes, we advise to use Python 3 (!)
Explanation: <big><center>To run a cell: push the start triangle in the menu or type SHIFT + ENTER/RETURN
Notebook cell types
We will work in Jupyter notebooks during this course. A notebook is a collection of cells, that can contain different content:
Code
End of explanation
import os
os.mkdir
my_very_long_variable_name = 3
Explanation: Writing code is what you will do most during this course!
Markdown
Text cells, using Markdown syntax. With the syntax, you can make text bold or italic, amongst many other things...
list
with
items
Link to interesting resources or images:
Blockquotes if you like them
This line is part of the same blockquote.
Mathematical formulas can also be incorporated (LaTeX it is...)
$$\frac{dBZV}{dt}=BZV_{in} - k_1 .BZV$$
$$\frac{dOZ}{dt}=k_2 .(OZ_{sat}-OZ) - k_1 .BZV$$
Or tables:
course | points
--- | ---
Math | 8
Chemistry | 4
or tables with Latex..
Symbol | explanation
--- | ---
$BZV_{(t=0)}$ | initial biochemical oxygen demand (7.33 mg.l-1)
$OZ_{(t=0)}$ | initial dissolved oxygen (8.5 mg.l-1)
$BZV_{in}$ | BZV input (1 mg.l-1.min-1)
$OZ_{sat}$ | dissolved oxygen saturation concentration (11 mg.l-1)
$k_1$ | bacterial degradation rate (0.3 min-1)
$k_2$ | reaeration constant (0.4 min-1)
Code can also be incorporated, but than just to illustrate:
python
BOT = 12
print(BOT)
In other words, it is markdown, just as you've written in Rmarkdown (!)
See also: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
HTML
You can also use HTML commands, just check this cell:
<h3> html-adapted title with <h3> </h3>
<p></p>
<b> Bold text </b> or <i> italic </i>
Headings of different sizes: section
subsection
subsubsection
Raw Text
Notebook handling ESSENTIALS
Completion: TAB
The TAB button is essential: it shows you all the actions available after loading a library AND it is used for autocompletion:
End of explanation
round(3.2)
os.mkdir
# An alternative is to put a question mark behind the command
os.mkdir?
Explanation: Help: SHIFT + TAB
The SHIFT-TAB combination is ultra essential to get information/help about the current operation
End of explanation
# %load ../notebooks/_solutions/00-jupyter_introduction26.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>What happens if you put two question marks behind the command?</li>
</ul>
</div>
End of explanation
%psearch os.*dir
Explanation: edit mode to command mode
edit mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
command mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
To start editing, click inside a cell or
<img src="../img/enterbutton.png" alt="Key enter" style="width:150px">
To stop editing,
<img src="../img/keyescape.png" alt="Key A" style="width:150px">
new cell A-bove
<img src="../img/keya.png" alt="Key A" style="width:150px">
Create a new cell above with the key A... when in command mode
new cell B-elow
<img src="../img/keyb.png" alt="Key B" style="width:150px">
Create a new cell below with the key B... when in command mode
CTRL + SHIFT + P
Just do it!
Trouble...
<div class="alert alert-danger">
<b>NOTE</b>: When you're stuck, or things crash:
<ul>
<li> first try **Kernel** > **Interrupt** -> your cell should stop running
<li> if no success -> **Kernel** > **Restart** -> restart your notebook
</ul>
</div>
Overload?!?
<img src="../img/toomuch.jpg" alt="Key A" style="width:500px">
<br><br>
<center>No stress, just go to </center>
<br>
<center><p style="font-size: 200%;text-align: center;margin:500">Help > Keyboard shortcuts</p></center>
Stackoverflow is really, really, really nice!
http://stackoverflow.com/questions/tagged/python
Google search is with you!
<big><center>REMEMBER: To run a cell: <strike>push the start triangle in the menu or</strike> type SHIFT + ENTER
some MAGIC...
%psearch
End of explanation
%%timeit
mylist = range(1000)
for i in mylist:
i = i**2
import numpy as np
%%timeit
np.arange(1000)**2
Explanation: %%timeit
End of explanation
%lsmagic
Explanation: %lsmagic
End of explanation
from IPython.display import FileLink, FileLinks
FileLinks('.', recursive=False)
Explanation: Let's get started!
End of explanation |
3,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Backpropagation in Multilayer Neural Networks
Goals
Step1: Preprocessing
Normalization
Train / test split
Step2: Numpy Implementation
a) Logistic Regression
In this section we will implement a logistic regression model trainable with SGD using numpy. Here are the objectives
Step3: The softmax function
Now let's implement the softmax vector function
Step4: Make sure that this works one vector at a time (and check that the components sum to one)
Step5: Note that a naive implementation of softmax might not be able process a batch of activations in a single call
Step6: Here is a way to implement softmax that works both for an individual vector of activations and for a batch of activation vectors at once
Step7: Probabilities should sum to 1
Step8: The sum of probabilities for each input vector of logits should some to 1
Step9: Implement a function that given the true one-hot encoded class Y_true and and some predicted probabilities Y_pred returns the negative log likelihood.
Step10: Check that the nll of a very confident yet bad prediction is a much higher positive number
Step11: Make sure that your implementation can compute the average negative log likelihood of a group of predictions
Step12: Let us now study the following linear model trainable by SGD, one sample at a time.
Step13: Evaluate the randomly initialized model on the first example
Step14: Evaluate the trained model on the first example
Step15: b) Feedforward Multilayer
The objective of this section is to implement the backpropagation algorithm (SGD with the chain rule) on a single layer neural network using the sigmoid activation function.
Implement the sigmoid and its element-wise derivative dsigmoid functions
Step17: Implement forward and forward_keep_all functions for a model with a hidden layer with a sigmoid activation function
Step18: c) Exercises
Look at worst prediction errors
Use numpy to find test samples for which the model made the worst predictions,
Use the plot_prediction to look at the model predictions on those,
Would you have done any better?
Step19: Hyper parameters settings
Experiment with different hyper parameters
Step20: Homework assignments
Watch the following video on how to code a minimal deep learning framework that feels like a simplified version
of Keras but using numpy instead of tensorflow | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
digits = load_digits()
sample_index = 45
plt.figure(figsize=(3, 3))
plt.imshow(digits.images[sample_index], cmap=plt.cm.gray_r,
interpolation='nearest')
plt.title("image label: %d" % digits.target[sample_index]);
Explanation: Backpropagation in Multilayer Neural Networks
Goals:
implementING a real gradient descent in Numpy
Dataset:
Similar as first Lab - Digits: 10 class handwritten digits
sklearn.datasets.load_digits
End of explanation
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
data = np.asarray(digits.data, dtype='float32')
target = np.asarray(digits.target, dtype='int32')
X_train, X_test, y_train, y_test = train_test_split(
data, target, test_size=0.15, random_state=37)
# mean = 0 ; standard deviation = 1.0
scaler = preprocessing.StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# print(scaler.mean_)
# print(scaler.scale_)
X_train.shape
X_train.dtype
X_test.shape
y_train.shape
y_train.dtype
Explanation: Preprocessing
Normalization
Train / test split
End of explanation
def one_hot(n_classes, y):
return np.eye(n_classes)[y]
one_hot(n_classes=10, y=3)
one_hot(n_classes=10, y=[0, 4, 9, 1])
Explanation: Numpy Implementation
a) Logistic Regression
In this section we will implement a logistic regression model trainable with SGD using numpy. Here are the objectives:
Implement a simple forward model with no hidden layer (equivalent to a logistic regression):
note: shape, transpose of W with regards to course
$y = softmax(\mathbf{W} \dot x + b)$
Build a predict function which returns the most probable class given an input $x$
Build an accuracy function for a batch of inputs $X$ and the corresponding expected outputs $y_{true}$
Build a grad function which computes $\frac{d}{dW} -\log(softmax(W \dot x + b))$ for an $x$ and its corresponding expected output $y_{true}$ ; check that the gradients are well defined
Build a train function which uses the grad function output to update $\mathbf{W}$ and $b$
One-hot encoding for class label data
First let's define a helper function to compute the one hot encoding of an integer array for a fixed number of classes (similar to keras' to_categorical):
End of explanation
def softmax(X):
# TODO:
return None
Explanation: The softmax function
Now let's implement the softmax vector function:
$$
softmax(\mathbf{x}) = \frac{1}{\sum_{i=1}^{n}{e^{x_i}}}
\cdot
\begin{bmatrix}
e^{x_1}\\
e^{x_2}\\
\vdots\\
e^{x_n}
\end{bmatrix}
$$
End of explanation
print(softmax([10, 2, -3]))
Explanation: Make sure that this works one vector at a time (and check that the components sum to one):
End of explanation
X = np.array([[10, 2, -3],
[-1, 5, -20]])
print(softmax(X))
Explanation: Note that a naive implementation of softmax might not be able process a batch of activations in a single call:
End of explanation
def softmax(X):
exp = np.exp(X)
return exp / np.sum(exp, axis=-1, keepdims=True)
print("softmax of a single vector:")
print(softmax([10, 2, -3]))
Explanation: Here is a way to implement softmax that works both for an individual vector of activations and for a batch of activation vectors at once:
End of explanation
print(np.sum(softmax([10, 2, -3])))
print("sotfmax of 2 vectors:")
X = np.array([[10, 2, -3],
[-1, 5, -20]])
print(softmax(X))
Explanation: Probabilities should sum to 1:
End of explanation
print(np.sum(softmax(X), axis=1))
Explanation: The sum of probabilities for each input vector of logits should some to 1:
End of explanation
def nll(Y_true, Y_pred):
Y_true = np.asarray(Y_true)
Y_pred = np.asarray(Y_pred)
# TODO
return 0.
# Make sure that it works for a simple sample at a time
print(nll([1, 0, 0], [.99, 0.01, 0]))
Explanation: Implement a function that given the true one-hot encoded class Y_true and and some predicted probabilities Y_pred returns the negative log likelihood.
End of explanation
print(nll([1, 0, 0], [0.01, 0.01, .98]))
Explanation: Check that the nll of a very confident yet bad prediction is a much higher positive number:
End of explanation
def nll(Y_true, Y_pred):
Y_true = np.atleast_2d(Y_true)
Y_pred = np.atleast_2d(Y_pred)
# TODO
return 0.
# Check that the average NLL of the following 3 almost perfect
# predictions is close to 0
Y_true = np.array([[0, 1, 0],
[1, 0, 0],
[0, 0, 1]])
Y_pred = np.array([[0, 1, 0],
[.99, 0.01, 0],
[0, 0, 1]])
print(nll(Y_true, Y_pred))
# %load solutions/numpy_nll.py
Explanation: Make sure that your implementation can compute the average negative log likelihood of a group of predictions: Y_pred and Y_true can therefore be past as 2D arrays:
End of explanation
class LogisticRegression():
def __init__(self, input_size, output_size):
self.W = np.random.uniform(size=(input_size, output_size),
high=0.1, low=-0.1)
self.b = np.random.uniform(size=output_size,
high=0.1, low=-0.1)
self.output_size = output_size
def forward(self, X):
Z = np.dot(X, self.W) + self.b
return softmax(Z)
def predict(self, X):
if len(X.shape) == 1:
return np.argmax(self.forward(X))
else:
return np.argmax(self.forward(X), axis=1)
def grad_loss(self, x, y_true):
y_pred = self.forward(x)
dnll_output = y_pred - one_hot(self.output_size, y_true)
grad_W = np.outer(x, dnll_output)
grad_b = dnll_output
grads = {"W": grad_W, "b": grad_b}
return grads
def train(self, x, y, learning_rate):
# Traditional SGD update without momentum
grads = self.grad_loss(x, y)
self.W = self.W - learning_rate * grads["W"]
self.b = self.b - learning_rate * grads["b"]
def loss(self, X, y):
return nll(one_hot(self.output_size, y), self.forward(X))
def accuracy(self, X, y):
y_preds = np.argmax(self.forward(X), axis=1)
return np.mean(y_preds == y)
# Build a model and test its forward inference
n_features = X_train.shape[1]
n_classes = len(np.unique(y_train))
lr = LogisticRegression(n_features, n_classes)
print("Evaluation of the untrained model:")
train_loss = lr.loss(X_train, y_train)
train_acc = lr.accuracy(X_train, y_train)
test_acc = lr.accuracy(X_test, y_test)
print("train loss: %0.4f, train acc: %0.3f, test acc: %0.3f"
% (train_loss, train_acc, test_acc))
Explanation: Let us now study the following linear model trainable by SGD, one sample at a time.
End of explanation
def plot_prediction(model, sample_idx=0, classes=range(10)):
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
ax0.imshow(scaler.inverse_transform(X_test[sample_idx:sample_idx+1]).reshape(8, 8),
cmap=plt.cm.gray_r, interpolation='nearest')
ax0.set_title("True image label: %d" % y_test[sample_idx]);
ax1.bar(classes, one_hot(len(classes), y_test[sample_idx]), label='true')
ax1.bar(classes, model.forward(X_test[sample_idx]), label='prediction', color="red")
ax1.set_xticks(classes)
prediction = model.predict(X_test[sample_idx])
ax1.set_title('Output probabilities (prediction: %d)'
% prediction)
ax1.set_xlabel('Digit class')
ax1.legend()
plot_prediction(lr, sample_idx=0)
# Training for one epoch
learning_rate = 0.01
for i, (x, y) in enumerate(zip(X_train, y_train)):
lr.train(x, y, learning_rate)
if i % 100 == 0:
train_loss = lr.loss(X_train, y_train)
train_acc = lr.accuracy(X_train, y_train)
test_acc = lr.accuracy(X_test, y_test)
print("Update #%d, train loss: %0.4f, train acc: %0.3f, test acc: %0.3f"
% (i, train_loss, train_acc, test_acc))
Explanation: Evaluate the randomly initialized model on the first example:
End of explanation
plot_prediction(lr, sample_idx=0)
Explanation: Evaluate the trained model on the first example:
End of explanation
def sigmoid(X):
# TODO
return X
def dsigmoid(X):
# TODO
return X
x = np.linspace(-5, 5, 100)
plt.plot(x, sigmoid(x), label='sigmoid')
plt.plot(x, dsigmoid(x), label='dsigmoid')
plt.legend(loc='best');
# %load solutions/sigmoid.py
Explanation: b) Feedforward Multilayer
The objective of this section is to implement the backpropagation algorithm (SGD with the chain rule) on a single layer neural network using the sigmoid activation function.
Implement the sigmoid and its element-wise derivative dsigmoid functions:
$$
sigmoid(x) = \frac{1}{1 + e^{-x}}
$$
$$
dsigmoid(x) = sigmoid(x) \cdot (1 - sigmoid(x))
$$
End of explanation
EPSILON = 1e-8
class NeuralNet():
MLP with 1 hidden layer with a sigmoid activation
def __init__(self, input_size, hidden_size, output_size):
# TODO
self.W_h = None
self.b_h = None
self.W_o = None
self.b_o = None
self.output_size = output_size
def forward_keep_activations(self, X):
# TODO
z_h = 0.
h = 0.
y = np.zeros(size=self.output_size)
return y, h, z_h
def forward(self, X):
y, h, z_h = self.forward_keep_activations(X)
return y
def loss(self, X, y):
# TODO
return 42.
def grad_loss(self, x, y_true):
# TODO
return {"W_h": 0., "b_h": 0., "W_o": 0., "b_o": 0.}
def train(self, x, y, learning_rate):
# TODO
pass
def predict(self, X):
if len(X.shape) == 1:
return np.argmax(self.forward(X))
else:
return np.argmax(self.forward(X), axis=1)
def accuracy(self, X, y):
y_preds = np.argmax(self.forward(X), axis=1)
return np.mean(y_preds == y)
# %load solutions/neural_net.py
n_hidden = 10
model = NeuralNet(n_features, n_hidden, n_classes)
model.loss(X_train, y_train)
model.accuracy(X_train, y_train)
plot_prediction(model, sample_idx=5)
losses, accuracies, accuracies_test = [], [], []
losses.append(model.loss(X_train, y_train))
accuracies.append(model.accuracy(X_train, y_train))
accuracies_test.append(model.accuracy(X_test, y_test))
print("Random init: train loss: %0.5f, train acc: %0.3f, test acc: %0.3f"
% (losses[-1], accuracies[-1], accuracies_test[-1]))
for epoch in range(15):
for i, (x, y) in enumerate(zip(X_train, y_train)):
model.train(x, y, 0.1)
losses.append(model.loss(X_train, y_train))
accuracies.append(model.accuracy(X_train, y_train))
accuracies_test.append(model.accuracy(X_test, y_test))
print("Epoch #%d, train loss: %0.5f, train acc: %0.3f, test acc: %0.3f"
% (epoch + 1, losses[-1], accuracies[-1], accuracies_test[-1]))
plt.plot(losses)
plt.title("Training loss");
plt.plot(accuracies, label='train')
plt.plot(accuracies_test, label='test')
plt.ylim(0, 1.1)
plt.ylabel("accuracy")
plt.legend(loc='best');
plot_prediction(model, sample_idx=4)
Explanation: Implement forward and forward_keep_all functions for a model with a hidden layer with a sigmoid activation function:
$\mathbf{h} = sigmoid(\mathbf{W}^h \mathbf{x} + \mathbf{b^h})$
$\mathbf{y} = softmax(\mathbf{W}^o \mathbf{h} + \mathbf{b^o})$
Notes:
try to keep the code as similar as possible as the previous one;
forward now has a keep activations parameter to also return hidden activations and pre activations;
Update the grad function to compute all gradients; check that the gradients are well defined;
Implement the train and loss functions.
Bonus: reimplementing all from scratch only using the lecture slides but without looking at the solution of the LogisticRegression is an excellent exercise.
End of explanation
# %load solutions/worst_predictions.py
Explanation: c) Exercises
Look at worst prediction errors
Use numpy to find test samples for which the model made the worst predictions,
Use the plot_prediction to look at the model predictions on those,
Would you have done any better?
End of explanation
# %load solutions/keras_model.py
# %load solutions/keras_model_test_loss.py
Explanation: Hyper parameters settings
Experiment with different hyper parameters:
learning rate,
size of hidden layer,
initialization scheme: test with 0 initialization vs uniform,
implement other activation functions,
implement the support for a second hidden layer.
Mini-batches
The current implementations of train and grad_loss function currently only accept a single sample at a time:
implement the support for training with a mini-batch of 32 samples at a time instead of one,
experiment with different sizes of batches,
monitor the norm of the average gradients on the full training set at the end of each epoch.
Momentum
Bonus: Implement momentum
Back to Keras
Implement the same network architecture with Keras;
Check that the Keras model can approximately reproduce the behavior of the Numpy model when using similar hyperparameter values (size of the model, type of activations, learning rate value and use of momentum);
Compute the negative log likelihood of a sample 42 in the test set (can use model.predict_proba);
Compute the average negative log-likelihood on the full test set.
Compute the average negative log-likelihood on the full training set and check that you can get the value of the loss reported by Keras.
Is the model overfitting or underfitting? (ensure that the model has fully converged by increasing the number of epochs to 50 or more if necessary).
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("o64FV-ez6Gw")
Explanation: Homework assignments
Watch the following video on how to code a minimal deep learning framework that feels like a simplified version
of Keras but using numpy instead of tensorflow:
End of explanation |
3,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kalman Filter
Kalman filters are linear models for state estimation of dynamic systems [1]. They have been the <i>de facto</i> standard in many robotics and tracking/prediction applications because they are well suited for systems with uncertainty about an observable dynamic process. They use a "observe, predict, correct" paradigm to extract information from an otherwise noisy signal. In Pyro, we can build differentiable Kalman filters with learnable parameters using the pyro.contrib.tracking library
Dynamic process
To start, consider this simple motion model
Step1: Next, let's specify the measurements. Notice that we only measure the positions of the particle.
Step2: We'll use a Delta autoguide to learn MAP estimates of the position and measurement covariances. The EKFDistribution computes the joint log density of all of the EKF states given a tensor of sequential measurements. | Python Code:
import os
import math
import torch
import pyro
import pyro.distributions as dist
from pyro.infer.autoguide import AutoDelta
from pyro.optim import Adam
from pyro.infer import SVI, Trace_ELBO, config_enumerate
from pyro.contrib.tracking.extended_kalman_filter import EKFState
from pyro.contrib.tracking.distributions import EKFDistribution
from pyro.contrib.tracking.dynamic_models import NcvContinuous
from pyro.contrib.tracking.measurements import PositionMeasurement
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.7.0')
dt = 1e-2
num_frames = 10
dim = 4
# Continuous model
ncv = NcvContinuous(dim, 2.0)
# Truth trajectory
xs_truth = torch.zeros(num_frames, dim)
# initial direction
theta0_truth = 0.0
# initial state
with torch.no_grad():
xs_truth[0, :] = torch.tensor([0.0, 0.0, math.cos(theta0_truth), math.sin(theta0_truth)])
for frame_num in range(1, num_frames):
# sample independent process noise
dx = pyro.sample('process_noise_{}'.format(frame_num), ncv.process_noise_dist(dt))
xs_truth[frame_num, :] = ncv(xs_truth[frame_num-1, :], dt=dt) + dx
Explanation: Kalman Filter
Kalman filters are linear models for state estimation of dynamic systems [1]. They have been the <i>de facto</i> standard in many robotics and tracking/prediction applications because they are well suited for systems with uncertainty about an observable dynamic process. They use a "observe, predict, correct" paradigm to extract information from an otherwise noisy signal. In Pyro, we can build differentiable Kalman filters with learnable parameters using the pyro.contrib.tracking library
Dynamic process
To start, consider this simple motion model:
$$ X_{k+1} = FX_k + \mathbf{W}_k $$
$$ \mathbf{Z}_k = HX_k + \mathbf{V}_k $$
where $k$ is the state, $X$ is the signal estimate, $Z_k$ is the observed value at timestep $k$, $\mathbf{W}_k$ and $\mathbf{V}_k$ are independent noise processes (ie $\mathbb{E}[w_k v_j^T] = 0$ for all $j, k$) which we'll approximate as Gaussians. Note that the state transitions are linear.
Kalman Update
At each time step, we perform a prediction for the mean and covariance:
$$ \hat{X}k = F\hat{X}{k-1}$$
$$\hat{P}k = FP{k-1}F^T + Q$$
and a correction for the measurement:
$$ K_k = \hat{P}_k H^T(H\hat{P}_k H^T + R)^{-1}$$
$$ X_k = \hat{X}_k + K_k(z_k - H\hat{X}_k)$$
$$ P_k = (I-K_k H)\hat{P}_k$$
where $X$ is the position estimate, $P$ is the covariance matrix, $K$ is the Kalman Gain, and $Q$ and $R$ are covariance matrices.
For an in-depth derivation, see [2]
Nonlinear Estimation: Extended Kalman Filter
What if our system is non-linear, eg in GPS navigation? Consider the following non-linear system:
$$ X_{k+1} = \mathbf{f}(X_k) + \mathbf{W}_k $$
$$ \mathbf{Z}_k = \mathbf{h}(X_k) + \mathbf{V}_k $$
Notice that $\mathbf{f}$ and $\mathbf{h}$ are now (smooth) non-linear functions.
The Extended Kalman Filter (EKF) attacks this problem by using a local linearization of the Kalman filter via a Taylors Series expansion.
$$ f(X_k, k) \approx f(x_k^R, k) + \mathbf{H}_k(X_k - x_k^R) + \cdots$$
where $\mathbf{H}_k$ is the Jacobian matrix at time $k$, $x_k^R$ is the previous optimal estimate, and we ignore the higher order terms. At each time step, we compute a Jacobian conditioned the previous predictions (this computation is handled by Pyro under the hood), and use the result to perform a prediction and update.
Omitting the derivations, the modification to the above predictions are now:
$$ \hat{X}k \approx \mathbf{f}(X{k-1}^R)$$
$$ \hat{P}k = \mathbf{H}\mathbf{f}(X_{k-1})P_{k-1}\mathbf{H}\mathbf{f}^T(X{k-1}) + Q$$
and the updates are now:
$$ X_k \approx \hat{X}k + K_k\big(z_k - \mathbf{h}(\hat{X}_k)\big)$$
$$ K_k = \hat{P}_k \mathbf{H}\mathbf{h}(\hat{X}k) \Big(\mathbf{H}\mathbf{h}(\hat{X}k)\hat{P}_k \mathbf{H}\mathbf{h}(\hat{X}k) + R_k\Big)^{-1} $$
$$ P_k = \big(I - K_k \mathbf{H}\mathbf{h}(\hat{X}_k)\big)\hat{P}_K$$
In Pyro, all we need to do is create an EKFState object and use its predict and update methods. Pyro will do exact inference to compute the innovations and we will use SVI to learn a MAP estimate of the position and measurement covariances.
As an example, let's look at an object moving at near-constant velocity in 2-D in a discrete time space over 100 time steps.
End of explanation
# Measurements
measurements = []
mean = torch.zeros(2)
# no correlations
cov = 1e-5 * torch.eye(2)
with torch.no_grad():
# sample independent measurement noise
dzs = pyro.sample('dzs', dist.MultivariateNormal(mean, cov).expand((num_frames,)))
# compute measurement means
zs = xs_truth[:, :2] + dzs
Explanation: Next, let's specify the measurements. Notice that we only measure the positions of the particle.
End of explanation
def model(data):
# a HalfNormal can be used here as well
R = pyro.sample('pv_cov', dist.HalfCauchy(2e-6)) * torch.eye(4)
Q = pyro.sample('measurement_cov', dist.HalfCauchy(1e-6)) * torch.eye(2)
# observe the measurements
pyro.sample('track_{}'.format(i), EKFDistribution(xs_truth[0], R, ncv,
Q, time_steps=num_frames),
obs=data)
guide = AutoDelta(model) # MAP estimation
optim = pyro.optim.Adam({'lr': 2e-2})
svi = SVI(model, guide, optim, loss=Trace_ELBO(retain_graph=True))
pyro.set_rng_seed(0)
pyro.clear_param_store()
for i in range(250 if not smoke_test else 2):
loss = svi.step(zs)
if not i % 10:
print('loss: ', loss)
# retrieve states for visualization
R = guide()['pv_cov'] * torch.eye(4)
Q = guide()['measurement_cov'] * torch.eye(2)
ekf_dist = EKFDistribution(xs_truth[0], R, ncv, Q, time_steps=num_frames)
states= ekf_dist.filter_states(zs)
Explanation: We'll use a Delta autoguide to learn MAP estimates of the position and measurement covariances. The EKFDistribution computes the joint log density of all of the EKF states given a tensor of sequential measurements.
End of explanation |
3,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Instanciando Componente de Publicação de Mensagens no MQTT
Step1: Componente para simulação de um sensor
bash
IoT_sensor(<name/id>, <grandeza física >, <unidade de medida>, <menor valor>, <maior valor possível>, <intervalo entre leituras (segundos)>)
Exemplo de sensor de pressão
Step2: Conectando os Componentes | Python Code:
publisher = IoT_mqtt_publisher("localhost", 1883)
Explanation: Instanciando Componente de Publicação de Mensagens no MQTT
End of explanation
sensor_1 = IoT_sensor("1", "temperature", "°C", 20, 26, 2)
sensor_2 = IoT_sensor("2", "umidade", "%", 50, 60, 3)
sensor_3 = IoT_sensor("3", "temperature", "°C", 28, 30, 4)
sensor_4 = IoT_sensor("4", "umidade", "%", 40, 55, 5)
Explanation: Componente para simulação de um sensor
bash
IoT_sensor(<name/id>, <grandeza física >, <unidade de medida>, <menor valor>, <maior valor possível>, <intervalo entre leituras (segundos)>)
Exemplo de sensor de pressão:
```python
sensor_pressao = IoT_sensor("32", "pressao", "bar", 20, 35, 5)
```
Componentes IoT_sensor podem se conectar a componentes do tipo IoT_mqtt_publisher para publicar, em um tópico, mensagens referentes às leituras feitas pelo sensor. Por exemplo, o sensor do exemplo acima produziu a seguinte mensagem no tópico sensor/32/pressao:
python
{
"source": "sensor",
"name": "32",
"type": "reading",
"body": {
"timestamp": "2019-08-17 17:02:15",
"dimension": "pressao",
"value": 25.533895448246717,
"unity": "bar"
}
}
Instanciando Sensores
End of explanation
sensor_1.connect(publisher)
sensor_2.connect(publisher)
sensor_3.connect(publisher)
sensor_4.connect(publisher)
Explanation: Conectando os Componentes
End of explanation |
3,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 13
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step4: Code from previous chapters
make_system, plot_results, and calc_total_infected are unchanged.
Step5: In the previous chapters I presented an SIR model of infectious disease, specifically the Kermack-McKendrick model. We extended the model to include vaccination and the effect of a hand-washing campaign, and used the extended model to allocate a limited budget optimally, that is, to minimize the number of infections.
But we assumed that the parameters of the model, contact rate and
recovery rate, were known. In this chapter, we explore the behavior of
the model as we vary these parameters, use analysis to understand these relationships better, and propose a method for using data to estimate parameters.
Sweeping beta
Recall that $\beta$ is the contact rate, which captures both the
frequency of interaction between people and the fraction of those
interactions that result in a new infection. If $N$ is the size of the
population and $s$ is the fraction that's susceptible, $s N$ is the
number of susceptibles, $\beta s N$ is the number of contacts per day
between susceptibles and other people, and $\beta s i N$ is the number
of those contacts where the other person is infectious.
As $\beta$ increases, we expect the total number of infections to
increase. To quantify that relationship, I'll create a range of values
for $\beta$
Step6: Then run the simulation for each value and print the results.
Step7: We can wrap that code in a function and store the results in a
SweepSeries object
Step8: Now we can run sweep_beta like this
Step9: And plot the results
Step10: The first line uses string operations to assemble a label for the
plotted line
Step11: Remember that this figure
is a parameter sweep, not a time series, so the x-axis is the parameter
beta, not time.
When beta is small, the contact rate is low and the outbreak never
really takes off; the total number of infected students is near zero. As
beta increases, it reaches a threshold near 0.3 where the fraction of
infected students increases quickly. When beta exceeds 0.5, more than
80% of the population gets sick.
Sweeping gamma
Let's see what that looks like for a few different values of gamma.
Again, we'll use linspace to make an array of values
Step12: And run sweep_beta for each value of gamma
Step13: The following figure shows the results. When gamma is low, the
recovery rate is low, which means people are infectious longer. In that
case, even a low contact rate (beta) results in an epidemic.
When gamma is high, beta has to be even higher to get things going.
SweepFrame
In the previous section, we swept a range of values for gamma, and for
each value, we swept a range of values for beta. This process is a
two-dimensional sweep.
If we want to store the results, rather than plot them, we can use a
SweepFrame, which is a kind of DataFrame where the rows sweep one
parameter, the columns sweep another parameter, and the values contain
metrics from a simulation.
This function shows how it works
Step14: sweep_parameters takes as parameters an array of values for beta and
an array of values for gamma.
It creates a SweepFrame to store the results, with one column for each
value of gamma and one row for each value of beta.
Each time through the loop, we run sweep_beta. The result is a
SweepSeries object with one element for each value of beta. The
assignment inside the loop stores the SweepSeries as a new column in
the SweepFrame, corresponding to the current value of gamma.
At the end, the SweepFrame stores the fraction of students infected
for each pair of parameters, beta and gamma.
We can run sweep_parameters like this
Step15: With the results in a SweepFrame, we can plot each column like this
Step16: Alternatively, we can plot each row like this
Step17: This example demonstrates one use of a SweepFrame
Step18: Infection rates are lowest in the lower right, where the contact rate is and the recovery rate is high. They increase as we move to the upper left, where the contact rate is high and the recovery rate is low.
This figure suggests that there might be a relationship between beta
and gamma that determines the outcome of the model. In fact, there is.
In the next chapter we'll explore it by running simulations, then derive it by analysis.
Summary
Exercises
Exercise | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 13
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from modsim import State, System
def make_system(beta, gamma):
Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
from numpy import arange
from modsim import TimeFrame
def run_simulation(system, update_func):
Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
frame = TimeFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in arange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], t, system)
return frame
def calc_total_infected(results, system):
s_0 = results.S[system.t0]
s_end = results.S[system.t_end]
return s_0 - s_end
Explanation: Code from previous chapters
make_system, plot_results, and calc_total_infected are unchanged.
End of explanation
from numpy import linspace
beta_array = linspace(0.1, 1.1, 11)
gamma = 0.25
Explanation: In the previous chapters I presented an SIR model of infectious disease, specifically the Kermack-McKendrick model. We extended the model to include vaccination and the effect of a hand-washing campaign, and used the extended model to allocate a limited budget optimally, that is, to minimize the number of infections.
But we assumed that the parameters of the model, contact rate and
recovery rate, were known. In this chapter, we explore the behavior of
the model as we vary these parameters, use analysis to understand these relationships better, and propose a method for using data to estimate parameters.
Sweeping beta
Recall that $\beta$ is the contact rate, which captures both the
frequency of interaction between people and the fraction of those
interactions that result in a new infection. If $N$ is the size of the
population and $s$ is the fraction that's susceptible, $s N$ is the
number of susceptibles, $\beta s N$ is the number of contacts per day
between susceptibles and other people, and $\beta s i N$ is the number
of those contacts where the other person is infectious.
As $\beta$ increases, we expect the total number of infections to
increase. To quantify that relationship, I'll create a range of values
for $\beta$:
End of explanation
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, calc_total_infected(results, system))
Explanation: Then run the simulation for each value and print the results.
End of explanation
def sweep_beta(beta_array, gamma):
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[beta] = calc_total_infected(results, system)
return sweep
Explanation: We can wrap that code in a function and store the results in a
SweepSeries object:
End of explanation
infected_sweep = sweep_beta(beta_array, gamma)
Explanation: Now we can run sweep_beta like this:
End of explanation
label = f'gamma = {gamma}'
label
Explanation: And plot the results:
End of explanation
infected_sweep.plot(label=label)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected')
Explanation: The first line uses string operations to assemble a label for the
plotted line:
When the + operator is applied to strings, it joins them
end-to-end, which is called concatenation.
The function str converts any type of object to a String
representation. In this case, gamma is a number, so we have to
convert it to a string before trying to concatenate it.
If the value of gamma is 0.25, the value of label is the string
'gamma = 0.25'.
End of explanation
gamma_array = linspace(0.1, 0.7, 4)
Explanation: Remember that this figure
is a parameter sweep, not a time series, so the x-axis is the parameter
beta, not time.
When beta is small, the contact rate is low and the outbreak never
really takes off; the total number of infected students is near zero. As
beta increases, it reaches a threshold near 0.3 where the fraction of
infected students increases quickly. When beta exceeds 0.5, more than
80% of the population gets sick.
Sweeping gamma
Let's see what that looks like for a few different values of gamma.
Again, we'll use linspace to make an array of values:
End of explanation
for gamma in gamma_array:
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate()
Explanation: And run sweep_beta for each value of gamma:
End of explanation
def sweep_parameters(beta_array, gamma_array):
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
Explanation: The following figure shows the results. When gamma is low, the
recovery rate is low, which means people are infectious longer. In that
case, even a low contact rate (beta) results in an epidemic.
When gamma is high, beta has to be even higher to get things going.
SweepFrame
In the previous section, we swept a range of values for gamma, and for
each value, we swept a range of values for beta. This process is a
two-dimensional sweep.
If we want to store the results, rather than plot them, we can use a
SweepFrame, which is a kind of DataFrame where the rows sweep one
parameter, the columns sweep another parameter, and the values contain
metrics from a simulation.
This function shows how it works:
End of explanation
frame = sweep_parameters(beta_array, gamma_array)
Explanation: sweep_parameters takes as parameters an array of values for beta and
an array of values for gamma.
It creates a SweepFrame to store the results, with one column for each
value of gamma and one row for each value of beta.
Each time through the loop, we run sweep_beta. The result is a
SweepSeries object with one element for each value of beta. The
assignment inside the loop stores the SweepSeries as a new column in
the SweepFrame, corresponding to the current value of gamma.
At the end, the SweepFrame stores the fraction of students infected
for each pair of parameters, beta and gamma.
We can run sweep_parameters like this:
End of explanation
for gamma in gamma_array:
label = f'gamma = {gamma}'
plot(frame[gamma], label=label)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected')
plt.legend(bbox_to_anchor=(1.02, 1.02))
plt.tight_layout()
Explanation: With the results in a SweepFrame, we can plot each column like this:
End of explanation
for beta in beta_array:
label = f'beta = {beta}'
plot(frame.loc[beta], label=label)
decorate(xlabel='Recovery rate (gamma)',
ylabel='Fraction infected')
plt.legend(bbox_to_anchor=(1.02, 1.02))
plt.tight_layout()
Explanation: Alternatively, we can plot each row like this:
End of explanation
from modsim import contour
contour(frame)
decorate(xlabel='Recovery rate (gamma)',
ylabel='Contact rate (beta)',
title='Fraction infected, contour plot')
Explanation: This example demonstrates one use of a SweepFrame: we can run the analysis once, save the results, and then generate different visualizations.
Another way to visualize the results of a two-dimensional sweep is a
contour plot, which shows the parameters on the axes and contour
lines, that is, lines of constant value. In this example, the value is
the fraction of students infected.
The ModSim library provides contour, which takes a SweepFrame as a
parameter:
End of explanation
# Solution
# Sweep beta with fixed gamma
gamma = 1/2
infected_sweep = sweep_beta(beta_array, gamma)
# Solution
# Interpolating by eye, we can see that the infection rate passes through 0.4
# when beta is between 0.6 and 0.7
# We can use the `crossings` function to interpolate more precisely
# (although we don't know about it yet :)
beta_estimate = crossings(infected_sweep, 0.4)
# Solution
# Time between contacts is 1/beta
time_between_contacts = 1/beta_estimate
Explanation: Infection rates are lowest in the lower right, where the contact rate is and the recovery rate is high. They increase as we move to the upper left, where the contact rate is high and the recovery rate is low.
This figure suggests that there might be a relationship between beta
and gamma that determines the outcome of the model. In fact, there is.
In the next chapter we'll explore it by running simulations, then derive it by analysis.
Summary
Exercises
Exercise: Suppose the infectious period for the Freshman Plague is known to be 2 days on average, and suppose during one particularly bad year, 40% of the class is infected at some point. Estimate the time between contacts.
End of explanation |
3,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
利用神经网络的 Kera 预测学生录取情况
在该 notebook 中,我们基于以下三条数据预测了加州大学洛杉矶分校的研究生录取情况:
GRE 分数(测试)即 GRE Scores (Test)
GPA 分数(成绩)即 GPA Scores (Grades)
评级(1-4)即 Class rank (1-4)
数据集来源:http
Step1: <div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align
Step2: 粗略地说,它看起来像是,成绩 (grades) 和测试(test) 分数 高的学生通过了,而得分低的学生却没有,但数据并没有如我们所希望的那样,很好地分离。 也许将评级 (rank) 考虑进来会有帮助? 接下来我们将绘制 4 个图,每个图代表一个级别。
Step3: 现在看起来更棒啦,看上去评级越低,录取率越高。 让我们使用评级 (rank) 作为我们的输入之一。 为了做到这一点,我们应该对它进行一次one-hot 编码。
将评级进行 One-hot 编码
我们将在 pandas 中使用 get_dummies 函数。
Step4: <div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align
Step5: <div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align
Step6: 将数据分成特征和目标(标签)
现在,在培训前的最后一步,我们将把数据分为特征 (features)(X)和目标 (targets)(y)。
另外,在 Keras 中,我们需要对输出进行 one-hot 编码。 我们将使用to_categorical function 来做到这一点。
Step7: 定义模型架构
我们将使用 Keras 来构建神经网络。
Step8: 训练模型
Step9: 模型评分 | Python Code:
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
Explanation: 利用神经网络的 Kera 预测学生录取情况
在该 notebook 中,我们基于以下三条数据预测了加州大学洛杉矶分校的研究生录取情况:
GRE 分数(测试)即 GRE Scores (Test)
GPA 分数(成绩)即 GPA Scores (Grades)
评级(1-4)即 Class rank (1-4)
数据集来源:http://www.ats.ucla.edu/
加载数据
为了加载数据并很好地进行格式化,我们将使用两个非常有用的包,即 Pandas 和 Numpy。 你可以在这里此文档:
https://pandas.pydata.org/pandas-docs/stable/
https://docs.scipy.org/
End of explanation
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
Explanation: <div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>rank</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>380</td>
<td>3.61</td>
<td>3</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>660</td>
<td>3.67</td>
<td>3</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>800</td>
<td>4.00</td>
<td>1</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>640</td>
<td>3.19</td>
<td>4</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>520</td>
<td>2.93</td>
<td>4</td>
</tr>
<tr>
<th>5</th>
<td>1</td>
<td>760</td>
<td>3.00</td>
<td>2</td>
</tr>
<tr>
<th>6</th>
<td>1</td>
<td>560</td>
<td>2.98</td>
<td>1</td>
</tr>
<tr>
<th>7</th>
<td>0</td>
<td>400</td>
<td>3.08</td>
<td>2</td>
</tr>
<tr>
<th>8</th>
<td>1</td>
<td>540</td>
<td>3.39</td>
<td>3</td>
</tr>
<tr>
<th>9</th>
<td>0</td>
<td>700</td>
<td>3.92</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
绘制数据
首先让我们对数据进行绘图,看看它是什么样的。为了绘制二维图,让我们先忽略评级 (rank)。
End of explanation
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
Explanation: 粗略地说,它看起来像是,成绩 (grades) 和测试(test) 分数 高的学生通过了,而得分低的学生却没有,但数据并没有如我们所希望的那样,很好地分离。 也许将评级 (rank) 考虑进来会有帮助? 接下来我们将绘制 4 个图,每个图代表一个级别。
End of explanation
# Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)
# Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
Explanation: 现在看起来更棒啦,看上去评级越低,录取率越高。 让我们使用评级 (rank) 作为我们的输入之一。 为了做到这一点,我们应该对它进行一次one-hot 编码。
将评级进行 One-hot 编码
我们将在 pandas 中使用 get_dummies 函数。
End of explanation
# Copying our data
processed_data = one_hot_data[:]
# Scaling the columns
processed_data['gre'] = processed_data['gre']/800
processed_data['gpa'] = processed_data['gpa']/4.0
processed_data[:10]
Explanation: <div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>rank_1</th>
<th>rank_2</th>
<th>rank_3</th>
<th>rank_4</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>380</td>
<td>3.61</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>660</td>
<td>3.67</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>800</td>
<td>4.00</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>640</td>
<td>3.19</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>520</td>
<td>2.93</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>5</th>
<td>1</td>
<td>760</td>
<td>3.00</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>6</th>
<td>1</td>
<td>560</td>
<td>2.98</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>7</th>
<td>0</td>
<td>400</td>
<td>3.08</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>8</th>
<td>1</td>
<td>540</td>
<td>3.39</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>9</th>
<td>0</td>
<td>700</td>
<td>3.92</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
缩放数据
下一步是缩放数据。 我们注意到成绩 (grades) 的范围是 1.0-4.0,而测试分数 (test scores) 的范围大概是 200-800,这个范围要大得多。 这意味着我们的数据存在偏差,使得神经网络很难处理。 让我们将两个特征放在 0-1 的范围内,将分数除以 4.0,将测试分数除以 800。
End of explanation
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
Number of training samples is 360
Number of testing samples is 40
admit gre gpa rank_1 rank_2 rank_3 rank_4
375 0 0.700 0.8725 0 0 0 1
270 1 0.800 0.9875 0 1 0 0
87 0 0.750 0.8700 0 1 0 0
296 0 0.700 0.7900 1 0 0 0
340 0 0.625 0.8075 0 0 0 1
35 0 0.500 0.7625 0 1 0 0
293 0 1.000 0.9925 1 0 0 0
372 1 0.850 0.6050 1 0 0 0
307 0 0.725 0.8775 0 1 0 0
114 0 0.900 0.9600 0 0 1 0
admit gre gpa rank_1 rank_2 rank_3 rank_4
0 0 0.475 0.9025 0 0 1 0
6 1 0.700 0.7450 1 0 0 0
13 0 0.875 0.7700 0 1 0 0
16 0 0.975 0.9675 0 0 0 1
17 0 0.450 0.6400 0 0 1 0
22 0 0.750 0.7050 0 0 0 1
24 1 0.950 0.8375 0 1 0 0
27 1 0.650 0.9350 0 0 0 1
33 1 1.000 1.0000 0 0 1 0
39 1 0.650 0.6700 0 0 1 0
Explanation: <div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>admit</th>
<th>gre</th>
<th>gpa</th>
<th>rank_1</th>
<th>rank_2</th>
<th>rank_3</th>
<th>rank_4</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0</td>
<td>0.475</td>
<td>0.9025</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>1</th>
<td>1</td>
<td>0.825</td>
<td>0.9175</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>2</th>
<td>1</td>
<td>1.000</td>
<td>1.0000</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>3</th>
<td>1</td>
<td>0.800</td>
<td>0.7975</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>4</th>
<td>0</td>
<td>0.650</td>
<td>0.7325</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<th>5</th>
<td>1</td>
<td>0.950</td>
<td>0.7500</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>6</th>
<td>1</td>
<td>0.700</td>
<td>0.7450</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>7</th>
<td>0</td>
<td>0.500</td>
<td>0.7700</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<th>8</th>
<td>1</td>
<td>0.675</td>
<td>0.8475</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>9</th>
<td>0</td>
<td>0.875</td>
<td>0.9800</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
将数据分成训练集和测试集
为了测试我们的算法,我们将数据分为训练集和测试集。 测试集的大小将占总数据的 10%。
End of explanation
import keras
# Separate data and one-hot encode the output
# Note: We're also turning the data into numpy arrays, in order to train the model in Keras
features = np.array(train_data.drop('admit', axis=1))
targets = np.array(keras.utils.to_categorical(train_data['admit'], 2))
features_test = np.array(test_data.drop('admit', axis=1))
targets_test = np.array(keras.utils.to_categorical(test_data['admit'], 2))
print(features[:10])
print(targets[:10])
[[ 0.7 0.8725 0. 0. 0. 1. ]
[ 0.8 0.9875 0. 1. 0. 0. ]
[ 0.75 0.87 0. 1. 0. 0. ]
[ 0.7 0.79 1. 0. 0. 0. ]
[ 0.625 0.8075 0. 0. 0. 1. ]
[ 0.5 0.7625 0. 1. 0. 0. ]
[ 1. 0.9925 1. 0. 0. 0. ]
[ 0.85 0.605 1. 0. 0. 0. ]
[ 0.725 0.8775 0. 1. 0. 0. ]
[ 0.9 0.96 0. 0. 1. 0. ]]
[[ 1. 0.]
[ 0. 1.]
[ 1. 0.]
[ 1. 0.]
[ 1. 0.]
[ 1. 0.]
[ 1. 0.]
[ 0. 1.]
[ 1. 0.]
[ 1. 0.]]
Explanation: 将数据分成特征和目标(标签)
现在,在培训前的最后一步,我们将把数据分为特征 (features)(X)和目标 (targets)(y)。
另外,在 Keras 中,我们需要对输出进行 one-hot 编码。 我们将使用to_categorical function 来做到这一点。
End of explanation
# Imports
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
# Building the model
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(6,)))
model.add(Dropout(.2))
model.add(Dense(64, activation='relu'))
model.add(Dropout(.1))
model.add(Dense(2, activation='softmax'))
# Compiling the model
model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 128) 896
_________________________________________________________________
dropout_11 (Dropout) (None, 128) 0
_________________________________________________________________
dense_17 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_12 (Dropout) (None, 64) 0
_________________________________________________________________
dense_18 (Dense) (None, 2) 130
=================================================================
Total params: 9,282
Trainable params: 9,282
Non-trainable params: 0
_________________________________________________________________
Explanation: 定义模型架构
我们将使用 Keras 来构建神经网络。
End of explanation
# Training the model
model.fit(features, targets, epochs=200, batch_size=100, verbose=0)
<keras.callbacks.History at 0x114a34eb8>
Explanation: 训练模型
End of explanation
# Evaluating the model on the training and testing set
score = model.evaluate(features, targets)
print("\n Training Accuracy:", score[1])
score = model.evaluate(features_test, targets_test)
print("\n Testing Accuracy:", score[1])
32/360 [=>............................] - ETA: 0s
Training Accuracy: 0.730555555556
32/40 [=======================>......] - ETA: 0s
Testing Accuracy: 0.7
Explanation: 模型评分
End of explanation |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.