6,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tigergraph Bindings
Step1: Dynamic user-defined GSQL endpoints
Step3: On-the-fly GSQL interpreted queries | Python Code:
import graphistry
# !pip install graphistry -q
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
g = graphistry.tigergraph(
protocol='http', server='www.acme.org',
user='tigergraph', pwd='tigergraph',
db='Storage', #optional
#web_port = 14240, api_port = 9000, verbose=True
)
Explanation: Tigergraph Bindings: Demo of IT Infra Analysis
Uses bindings built into PyGraphistry for Tigergraph:
Configure DB connection
Call dynamic endpoints for user-defined endpoints
Call interpreted-mode query
Visualize results
Import and connect
End of explanation
g2 = g.gsql_endpoint(
'StorageImpact', {'vertexType': 'Service', 'input': 61921, 'input.type': 'Pool'},
#{'edges': '@@edgeList', 'nodes': '@@nodeList'}
)
print('# edges:', len(g2._edges))
g2.plot()
Explanation: Dynamic user-defined GSQL endpoints: Call, analyze, & plot
End of explanation
g3 = g.gsql(
    # the interpreted GSQL query is passed as a (triple-quoted) string
    """
    INTERPRET QUERY () FOR GRAPH Storage {
        OrAccum<BOOL> @@stop;
        ListAccum<EDGE> @@edgeList;
        SetAccum<vertex> @@set;
        @@set += to_vertex("61921", "Pool");
        Start = @@set;
        while Start.size() > 0 and @@stop == false do
            Start = select t from Start:s-(:e)-:t
            where e.goUpper == TRUE
            accum @@edgeList += e
            having t.type != "Service";
        end;
        print @@edgeList;
    }
    """,
    #{'edges': '@@edgeList', 'nodes': '@@nodeList'} # can skip by default
)
print('# edges:', len(g3._edges))
g3.plot()
Explanation: On-the-fly GSQL interpreted queries: Call, analyze, & plot
End of explanation |
6,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The model
This style of modeling is often called the "piecewise exponential model", or PEM. It is the simplest case where we estimate the hazard of an event occurring in a time period as the outcome, rather than estimating the survival (ie, time to event) as the outcome.
Recall that, in the context of survival modeling, we have two models
Step1: Simulate survival data
In order to demonstrate the use of this model, we will first simulate some survival data using survivalstan.sim.sim_data_exp_correlated. As the name implies, this function simulates data assuming a constant hazard throughout the follow-up time period, which is consistent with the Exponential survival function.
This function includes two simulated covariates by default (age and sex). We also simulate a situation where hazard is a function of the simulated value for sex.
We also center the age variable since this will make it easier to interpret estimates of the baseline hazard.
Step2: *Aside
Step3: It's not that obvious from the field names, but in this example "subjects" are indexed by the field index.
We can plot these data using lifelines, or the rudimentary plotting functions provided by survivalstan.
Step4: Transform to long or per-timepoint form
Finally, since this is a PEM model, we transform our data to long or per-timepoint form.
Step5: We now have one record per timepoint (distinct values of end_time) per subject (index, in the original data frame).
Step6: Fit stan model
Now, we are ready to fit our model using survivalstan.fit_stan_survival_model.
We pass a few parameters to the fit function, many of which are required. See ?survivalstan.fit_stan_survival_model for details.
Similar to what we did above, we are asking survivalstan to cache this model fit object. See stancache for more details on how this works. Also, if you didn't want to use the cache, you could omit the parameter FIT_FUN and survivalstan would use the standard pystan functionality.
Step7: Superficial review of convergence
We will note here some top-level summaries of posterior draws -- this is a minimal example so it's unlikely that this model converged very well.
In practice, you would want to do a lot more investigation of convergence issues, etc. For now the goal is to demonstrate the functionalities available here.
We can summarize posterior estimates for a single parameter, (e.g. the built-in Stan parameter lp__)
Step8: Or, for sets of parameters with the same name
Step9: It's also not uncommon to graphically summarize the Rhat values, to get a sense of similarity among the chains for particular parameters.
Step10: Plot posterior estimates of parameters
We can use plot_coefs to summarize posterior estimates of parameters.
In this basic pem_survival_model, we estimate a parameter for baseline hazard for each observed timepoint which is then adjusted for the duration of the timepoint. For consistency, the baseline values are normalized to the unit time given in the input data. This allows us to compare hazard estimates across timepoints without having to know the duration of a timepoint. (In general, the duration-adjusted hazard parameters are suffixed with _raw whereas those which are unit-normalized do not have a suffix).
In this model, the baseline hazard is parameterized by two components -- there is an overall mean across all timepoints (log_baseline_mu) and some variance per timepoint (log_baseline_tp). The degree of variance is estimated from the data as log_baseline_sigma. All components have weak default priors. See the stan code above for details.
In this case, the model estimates a minimal degree of variance across timepoints, which is good given that the simulated data assumed a constant hazard over time.
Step11: We can also summarize the posterior estimates for our beta coefficients. This is actually the default behavior of plot_coefs. Here we hope to see the posterior estimates of beta coefficients include the value we used for our simulation (0.5).
Step12: Posterior predictive checking
Finally, survivalstan provides some utilities for posterior predictive checking.
The goal of posterior-predictive checking is to compare the uncertainty of model predictions to observed values.
We are not doing true out-of-sample predictions, but we are able to sanity-check our model's calibration. We expect approximately 5% of observed values to fall outside of their corresponding 95% posterior-predicted intervals.
By default, survivalstan's plot_pp_survival method will plot whiskers at the 2.5th and 97.5th percentile values, corresponding to 95% predicted intervals.
Step13: We can also summarize and plot survival by our covariates of interest, provided they are included in the original dataframe provided to fit_stan_survival_model.
Step14: This plot can also be customized by a variety of aesthetic elements
Step15: Building up the plot semi-manually, for more customization
We can also access the utility methods within survivalstan.utils to more or less produce the same plot. This sequence is intended to both illustrate how the above-described plot was constructed, and expose some of the
functionality in a more concrete fashion.
Probably the most useful element is being able to summarize & return posterior-predicted values to begin with
Step16: Here are what these data look like
Step17: (Note that this itself is a summary of the posterior draws returned by survivalstan.utils.prep_pp_data. In this case, the survival stats are summarized by values of ['iter', 'model_cohort', by].)
We can then call out to survivalstan.utils._plot_pp_survival_data to construct the plot. In this case, we overlay the posterior predicted intervals with observed values.
Step18: Use plotly to summarize posterior predicted values
First, we will precompute 50th and 95th posterior intervals for each observed timepoint, by group.
Step19: Next, we construct our graph "traces", consisting of 3 elements (solid line and two shaded areas) per observed group.
Step20: Finally, we build a minimal layout structure to house our graph
Step21: Here is our plot | Python Code:
print(survivalstan.models.pem_survival_model)
Explanation: The model
This style of modeling is often called the "piecewise exponential model", or PEM. It is the simplest case where we estimate the hazard of an event occurring in a time period as the outcome, rather than estimating the survival (ie, time to event) as the outcome.
Recall that, in the context of survival modeling, we have two models:
A model for Survival ($S$), ie the probability of surviving to time $t$:
$$ S(t)=Pr(Y > t) $$
A model for the instantaneous hazard $\lambda$, ie the probability of a failure event occurring in the interval [$t$, $t+\delta t$], given survival to time $t$:
$$ \lambda(t) = \lim_{\delta t \rightarrow 0 } \; \frac{Pr( t \le Y \le t + \delta t | Y > t)}{\delta t} $$
By definition, these two are related to one another by the following equation:
$$ \lambda(t) = \frac{-S'(t)}{S(t)} $$
Solving this yields the following:
$$ S(t) = \exp\left( -\int_0^t \lambda(z) dz \right) $$
This model is called the piecewise exponential model because of this relationship between the Survival and hazard functions. It's piecewise because we are not estimating the instantaneous hazard; we are instead breaking time periods up into pieces and estimating the hazard for each piece.
There are several variations on the PEM model implemented in survivalstan. In this notebook, we are exploring just one of them.
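As a quick, hypothetical illustration of the relationship above (the numbers below are made up, not taken from this tutorial), a piecewise-constant hazard turns into a survival curve by accumulating hazard over the pieces:
# illustrative only: S(t) = exp(-sum_j lambda_j * width_j) for piecewise-constant hazards
import numpy as np
widths = np.array([1.0, 1.0, 1.0])       # assumed durations of three time pieces
hazards = np.array([0.10, 0.15, 0.20])   # assumed constant hazard within each piece
cumulative_hazard = np.cumsum(hazards * widths)
survival = np.exp(-cumulative_hazard)    # survival at the end of each piece
print(survival)                          # approximately [0.905, 0.779, 0.638]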
A note about data formatting
When we model Survival, we typically operate on data in time-to-event form. In this form, we have one record per Subject (ie, per patient). Each record contains [event_status, time_to_event] as the outcome. This data format is sometimes called per-subject.
When we model the hazard by comparison, we typically operate on data that are transformed to include one record per Subject per time_period. This is called per-timepoint or long form.
All other things being equal, a model for Survival will typically estimate more efficiently (faster & smaller memory footprint) than one for hazard simply because the data are larger in the per-timepoint form than the per-subject form. The benefit of the hazard models is increased flexibility in terms of specifying the baseline hazard, time-varying effects, and introducing time-varying covariates.
In this example, we are demonstrating use of the standard PEM survival model, which uses data in long form. The stan code expects to receive data in this structure.
Stan code for the model
This model is provided in survivalstan.models.pem_survival_model. Let's take a look at the stan code.
End of explanation
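To make the two data layouts described above concrete, here is a minimal, hypothetical sketch of the per-subject to per-timepoint expansion (this is not the survivalstan implementation; survivalstan.prep_data_long_surv, used further below, does this for the full data set):
import pandas as pd
# one subject observed for 3 periods, with the event occurring in period 3 (illustrative values)
per_subject = pd.DataFrame({'index': [0], 'event': [1], 't': [3]})
rows = []
for _, r in per_subject.iterrows():
    for end_time in range(1, int(r['t']) + 1):
        rows.append({'index': r['index'],
                     'end_time': end_time,
                     # the failure indicator is 1 only in the period where the event occurs
                     'end_failure': int(r['event'] == 1 and end_time == r['t'])})
per_timepoint = pd.DataFrame(rows)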
d = stancache.cached(
survivalstan.sim.sim_data_exp_correlated,
N=100,
censor_time=20,
rate_form='1 + sex',
rate_coefs=[-3, 0.5],
)
d['age_centered'] = d['age'] - d['age'].mean()
Explanation: Simulate survival data
In order to demonstrate the use of this model, we will first simulate some survival data using survivalstan.sim.sim_data_exp_correlated. As the name implies, this function simulates data assuming a constant hazard throughout the follow-up time period, which is consistent with the Exponential survival function.
This function includes two simulated covariates by default (age and sex). We also simulate a situation where hazard is a function of the simulated value for sex.
We also center the age variable since this will make it easier to interpret estimates of the baseline hazard.
End of explanation
d.head()
Explanation: *Aside: In order to make this a more reproducible example, this code is using a file-caching function stancache.cached to wrap a function call to survivalstan.sim.sim_data_exp_correlated. *
Explore simulated data
Here is what these data look like - this is per-subject or time-to-event form:
End of explanation
survivalstan.utils.plot_observed_survival(df=d[d['sex']=='female'], event_col='event', time_col='t', label='female')
survivalstan.utils.plot_observed_survival(df=d[d['sex']=='male'], event_col='event', time_col='t', label='male')
plt.legend()
Explanation: It's not that obvious from the field names, but in this example "subjects" are indexed by the field index.
We can plot these data using lifelines, or the rudimentary plotting functions provided by survivalstan.
End of explanation
dlong = stancache.cached(
survivalstan.prep_data_long_surv,
df=d, event_col='event', time_col='t'
)
Explanation: Transform to long or per-timepoint form
Finally, since this is a PEM model, we transform our data to long or per-timepoint form.
End of explanation
dlong.query('index == 1').sort_values('end_time')
Explanation: We now have one record per timepoint (distinct values of end_time) per subject (index, in the original data frame).
End of explanation
testfit = survivalstan.fit_stan_survival_model(
model_cohort = 'test model',
model_code = survivalstan.models.pem_survival_model,
df = dlong,
sample_col = 'index',
timepoint_end_col = 'end_time',
event_col = 'end_failure',
formula = '~ age_centered + sex',
iter = 5000,
chains = 4,
seed = 9001,
FIT_FUN = stancache.cached_stan_fit,
)
Explanation: Fit stan model
Now, we are ready to fit our model using survivalstan.fit_stan_survival_model.
We pass a few parameters to the fit function, many of which are required. See ?survivalstan.fit_stan_survival_model for details.
Similar to what we did above, we are asking survivalstan to cache this model fit object. See stancache for more details on how this works. Also, if you didn't want to use the cache, you could omit the parameter FIT_FUN and survivalstan would use the standard pystan functionality.
End of explanation
survivalstan.utils.print_stan_summary([testfit], pars='lp__')
Explanation: Superficial review of convergence
We will note here some top-level summaries of posterior draws -- this is a minimal example so it's unlikely that this model converged very well.
In practice, you would want to do a lot more investigation of convergence issues, etc. For now the goal is to demonstrate the functionalities available here.
We can summarize posterior estimates for a single parameter, (e.g. the built-in Stan parameter lp__):
End of explanation
survivalstan.utils.print_stan_summary([testfit], pars='log_baseline_raw')
Explanation: Or, for sets of parameters with the same name:
End of explanation
survivalstan.utils.plot_stan_summary([testfit], pars='log_baseline_raw')
Explanation: It's also not uncommon to graphically summarize the Rhat values, to get a sense of similarity among the chains for particular parameters.
End of explanation
survivalstan.utils.plot_coefs([testfit], element='baseline')
Explanation: Plot posterior estimates of parameters
We can use plot_coefs to summarize posterior estimates of parameters.
In this basic pem_survival_model, we estimate a parameter for baseline hazard for each observed timepoint which is then adjusted for the duration of the timepoint. For consistency, the baseline values are normalized to the unit time given in the input data. This allows us to compare hazard estimates across timepoints without having to know the duration of a timepoint. (In general, the duration-adjusted hazard parameters are suffixed with _raw whereas those which are unit-normalized do not have a suffix).
In this model, the baseline hazard is parameterized by two components -- there is an overall mean across all timepoints (log_baseline_mu) and some variance per timepoint (log_baseline_tp). The degree of variance is estimated from the data as log_baseline_sigma. All components have weak default priors. See the stan code above for details.
In this case, the model estimates a minimal degree of variance across timepoints, which is good given that the simulated data assumed a constant hazard over time.
End of explanation
survivalstan.utils.plot_coefs([testfit])
Explanation: We can also summarize the posterior estimates for our beta coefficients. This is actually the default behavior of plot_coefs. Here we hope to see the posterior estimates of beta coefficients include the value we used for our simulation (0.5).
End of explanation
survivalstan.utils.plot_pp_survival([testfit], fill=False)
survivalstan.utils.plot_observed_survival(df=d, event_col='event', time_col='t', color='green', label='observed')
plt.legend()
Explanation: Posterior predictive checking
Finally, survivalstan provides some utilities for posterior predictive checking.
The goal of posterior-predictive checking is to compare the uncertainty of model predictions to observed values.
We are not doing true out-of-sample predictions, but we are able to sanity-check our model's calibration. We expect approximately 5% of observed values to fall outside of their corresponding 95% posterior-predicted intervals.
By default, survivalstan's plot_pp_survival method will plot whiskers at the 2.5th and 97.5th percentile values, corresponding to 95% predicted intervals.
End of explanation
survivalstan.utils.plot_pp_survival([testfit], by='sex')
Explanation: We can also summarize and plot survival by our covariates of interest, provided they are included in the original dataframe provided to fit_stan_survival_model.
End of explanation
survivalstan.utils.plot_pp_survival([testfit], by='sex', pal=['red', 'blue'])
Explanation: This plot can also be customized by a variety of aesthetic elements
End of explanation
ppsurv = survivalstan.utils.prep_pp_survival_data([testfit], by='sex')
Explanation: Building up the plot semi-manually, for more customization
We can also access the utility methods within survivalstan.utils to more or less produce the same plot. This sequence is intended to both illustrate how the above-described plot was constructed, and expose some of the
functionality in a more concrete fashion.
Probably the most useful element is being able to summarize & return posterior-predicted values to begin with:
End of explanation
ppsurv.head()
Explanation: Here are what these data look like:
End of explanation
subplot = plt.subplots(1, 1)
survivalstan.utils._plot_pp_survival_data(ppsurv.query('sex == "male"').copy(),
subplot=subplot, color='blue', alpha=0.5)
survivalstan.utils._plot_pp_survival_data(ppsurv.query('sex == "female"').copy(),
subplot=subplot, color='red', alpha=0.5)
survivalstan.utils.plot_observed_survival(df=d[d['sex']=='female'], event_col='event', time_col='t',
color='red', label='female')
survivalstan.utils.plot_observed_survival(df=d[d['sex']=='male'], event_col='event', time_col='t',
color='blue', label='male')
plt.legend()
Explanation: (Note that this itself is a summary of the posterior draws returned by survivalstan.utils.prep_pp_data. In this case, the survival stats are summarized by values of ['iter', 'model_cohort', by].)
We can then call out to survivalstan.utils._plot_pp_survival_data to construct the plot. In this case, we overlay the posterior predicted intervals with observed values.
End of explanation
ppsummary = ppsurv.groupby(['sex','event_time'])['survival'].agg({
'95_lower': lambda x: np.percentile(x, 2.5),
'95_upper': lambda x: np.percentile(x, 97.5),
'50_lower': lambda x: np.percentile(x, 25),
'50_upper': lambda x: np.percentile(x, 75),
'median': lambda x: np.percentile(x, 50),
}).reset_index()
shade_colors = dict(male='rgba(0, 128, 128, {})', female='rgba(214, 12, 140, {})')
line_colors = dict(male='rgb(0, 128, 128)', female='rgb(214, 12, 140)')
ppsummary.sort_values(['sex', 'event_time'], inplace=True)
Explanation: Use plotly to summarize posterior predicted values
First, we will precompute 50th and 95th posterior intervals for each observed timepoint, by group.
End of explanation
import plotly
import plotly.plotly as py
import plotly.graph_objs as go
plotly.offline.init_notebook_mode(connected=True)
data5 = list()
for grp, grp_df in ppsummary.groupby('sex'):
x = list(grp_df['event_time'].values)
x_rev = x[::-1]
y_upper = list(grp_df['50_upper'].values)
y_lower = list(grp_df['50_lower'].values)
y_lower = y_lower[::-1]
y2_upper = list(grp_df['95_upper'].values)
y2_lower = list(grp_df['95_lower'].values)
y2_lower = y2_lower[::-1]
y = list(grp_df['median'].values)
my_shading50 = go.Scatter(
x = x + x_rev,
y = y_upper + y_lower,
fill = 'tozerox',
fillcolor = shade_colors[grp].format(0.3),
line = go.Line(color = 'transparent'),
showlegend = True,
name = '{} - 50% CI'.format(grp),
)
my_shading95 = go.Scatter(
x = x + x_rev,
y = y2_upper + y2_lower,
fill = 'tozerox',
fillcolor = shade_colors[grp].format(0.1),
line = go.Line(color = 'transparent'),
showlegend = True,
name = '{} - 95% CI'.format(grp),
)
my_line = go.Scatter(
x = x,
y = y,
line = go.Line(color=line_colors[grp]),
mode = 'lines',
name = grp,
)
data5.append(my_line)
data5.append(my_shading50)
data5.append(my_shading95)
Explanation: Next, we construct our graph "traces", consisting of 3 elements (solid line and two shaded areas) per observed group.
End of explanation
layout5 = go.Layout(
yaxis=dict(
title='Survival (%)',
#zeroline=False,
tickformat='.0%',
),
xaxis=dict(title='Days since enrollment')
)
Explanation: Finally, we build a minimal layout structure to house our graph:
End of explanation
py.iplot(go.Figure(data=data5, layout=layout5), filename='survivalstan/pem_survival_model_ppsummary')
Explanation: Here is our plot:
End of explanation |
6,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, heterogeneous models are from this Jupyter notebook by Heiner Igel (@heinerigel), Florian Wölfl and Lion Krischer (@krischer) which is a supplementary material to the book Computational Seismology
Step1: Simple absorbing boundary for 2D acoustic FD modelling
Realistic FD modelling results for surface seismic acquisition geometries require a further modification of the 2D acoustic FD code. Except for the free surface boundary condition on top of the model, we want to suppress the artificial reflections from the other boundaries.
Such absorbing boundaries can be implemented by different approaches. A comprehensive overview is compiled in
Gao et al. 2015, Comparison of artificial absorbing boundaries for acoustic wave equation modelling
Before implementing the absorbing boundary frame, we modify some parts of the optimized 2D acoustic FD code
Step2: In order to modularize the code, we move the 2nd partial derivatives of the wave equation into a function update_d2px_d2pz, so the application of the JIT decorator can be restricted to this function
Step3: In the FD modelling code FD_2D_acoustic_JIT, a more flexible model definition is introduced by the function model. The block Initialize animation of pressure wavefield before the time loop displays the velocity model and initial pressure wavefield. During the time-loop, the pressure wavefield is updated with
image.set_data(p.T)
fig.canvas.draw()
at every isnap timestep
Step4: Homogeneous block model without absorbing boundary frame
As a reference, we first model the homogeneous block model, defined in the function model, without an absorbing boundary frame
Step5: After defining the modelling parameters, we can run the modified FD code ...
Step6: Notice the strong, artifical boundary reflections in the wavefield movie
Simple absorbing Sponge boundary
The simplest, and unfortunately least efficient, absorbing boundary was developed by Cerjan et al. (1985). It is based on the simple idea to damp the pressure wavefields $p^n_{i,j}$ and $p^{n+1}_{i,j}$ in an absorbing boundary frame by an exponential function
Step7: This implementation of the Sponge boundary sets a free-surface boundary condition on top of the model, while inciding waves at the other boundaries are absorbed
Step8: The FD code itself requires only some small modifications, we have to add the absorb function to define the amount of damping in the boundary frame and apply the damping function to the pressure wavefields pnew and p
Step9: Let's evaluate the influence of the Sponge boundaries on the artificial boundary reflections | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, heterogeneous models are from this Jupyter notebook by Heiner Igel (@heinerigel), Florian Wölfl and Lion Krischer (@krischer) which is a supplementary material to the book Computational Seismology: A Practical Introduction, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
from mpl_toolkits.axes_grid1 import make_axes_locatable
# Definition of initial modelling parameters
# ------------------------------------------
xmax = 2000.0 # maximum spatial extension of the 1D model in x-direction (m)
zmax = xmax # maximum spatial extension of the 1D model in z-direction (m)
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
tmax = 0.75 # maximum recording time of the seismogram (s)
dt = 0.0010 # time step
vp0 = 3000. # P-wave speed in medium (m/s)
# acquisition geometry
xsrc = 1000.0 # x-source position (m)
zsrc = xsrc # z-source position (m)
f0 = 100.0 # dominant frequency of the source (Hz)
t0 = 0.1 # source time shift (s)
isnap = 2 # snapshot interval (timesteps)
Explanation: Simple absorbing boundary for 2D acoustic FD modelling
Realistic FD modelling results for surface seismic acquisition geometries require a further modification of the 2D acoustic FD code. Except for the free surface boundary condition on top of the model, we want to suppress the artificial reflections from the other boundaries.
Such absorbing boundaries can be implemented by different approaches. A comprehensive overview is compiled in
Gao et al. 2015, Comparison of artificial absorbing boundaries for acoustic wave equation modelling
Before implementing the absorbing boundary frame, we modify some parts of the optimized 2D acoustic FD code:
End of explanation
@jit(nopython=True) # use JIT for C-performance
def update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz):
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2
return d2px, d2pz
Explanation: In order to modularize the code, we move the 2nd partial derivatives of the wave equation into a function update_d2px_d2pz, so the application of the JIT decorator can be restricted to this function:
End of explanation
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
def FD_2D_acoustic_JIT(dt,dx,dz,f0):
# define model discretization
# ---------------------------
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nz = (int)(zmax/dz) # number of grid points in x-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in x-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# define clip value: 0.1 * absolute maximum value of source wavelet
clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2
# Define model
# ------------
vp = np.zeros((nx,nz))
vp = model(nx,nz,vp,dx,dz)
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# Initialize animation of pressure wavefield
# -----------------------------------------
fig = plt.figure(figsize=(7,3.5)) # define figure size
plt.tight_layout()
extent = [0.0,xmax,zmax,0.0] # define model extension
# Plot pressure wavefield movie
ax1 = plt.subplot(121)
image = plt.imshow(p.T, animated=True, cmap="RdBu", extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Pressure wavefield')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
# Plot Vp-model
ax2 = plt.subplot(122)
image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest',
extent=extent)
plt.title('Vp-model')
plt.xlabel('x [m]')
plt.setp(ax2.get_yticklabels(), visible=False)
divider = make_axes_locatable(ax2)
cax2 = divider.append_axes("right", size="2%", pad=0.1)
fig.colorbar(image1, cax=cax2)
plt.ion()
plt.show(block=False)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz)
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Remap Time Levels
# -----------------
pold, p = p, pnew
# display pressure snapshots
if (it % isnap) == 0:
image.set_data(p.T)
fig.canvas.draw()
Explanation: In the FD modelling code FD_2D_acoustic_JIT, a more flexible model definition is introduced by the function model. The block Initialize animation of pressure wavefield before the time loop displays the velocity model and initial pressure wavefield. During the time-loop, the pressure wavefield is updated with
image.set_data(p.T)
fig.canvas.draw()
at every isnap timestep:
End of explanation
# Homogeneous model
def model(nx,nz,vp,dx,dz):
vp += vp0
return vp
Explanation: Homogeneous block model without absorbing boundary frame
As a reference, we first model the homogeneous block model, defined in the function model, without an absorbing boundary frame:
End of explanation
%matplotlib notebook
dx = 5.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
f0 = 100.0 # centre frequency of the source wavelet (Hz)
# calculate dt according to the CFL-criterion
dt = dx / (np.sqrt(2.0) * vp0)
FD_2D_acoustic_JIT(dt,dx,dz,f0)
Explanation: After defining the modelling parameters, we can run the modified FD code ...
End of explanation
# Define simple absorbing boundary frame based on wavefield damping
# according to Cerjan et al., 1985, Geophysics, 50, 705-708
def absorb(nx,nz):
FW = # thickness of absorbing frame (gridpoints)
a = # damping variation within the frame
coeff = np.zeros(FW)
# define coefficients
# initialize array of absorbing coefficients
absorb_coeff = np.ones((nx,nz))
# compute coefficients for left grid boundaries (x-direction)
# compute coefficients for right grid boundaries (x-direction)
# compute coefficients for bottom grid boundaries (z-direction)
return absorb_coeff
Explanation: Notice the strong, artificial boundary reflections in the wavefield movie
Simple absorbing Sponge boundary
The simplest, and unfortunately least efficient, absorbing boundary was developed by Cerjan et al. (1985). It is based on the simple idea to damp the pressure wavefields $p^n_{i,j}$ and $p^{n+1}_{i,j}$ in an absorbing boundary frame by an exponential function:
\begin{equation}
f_{abs} = exp(-a^2(FW-i)^2), \nonumber
\end{equation}
where $FW$ denotes the thickness of the boundary frame in gridpoints, while the factor $a$ defines the damping variation within the frame. It is important to avoid overlaps of the damping profile in the model corners when defining the absorbing function:
End of explanation
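The absorb function above is intentionally left incomplete as an exercise. As one possible, hedged completion -- the frame thickness FW = 60 and damping factor a = 0.0053 are assumed values, not taken from this notebook -- the damping frame could be built roughly like this:
# hedged sketch of a possible completion (assumed FW and a; not the author's solution)
def absorb_sketch(nx, nz, FW=60, a=0.0053):
    # damping profile following f_abs = exp(-a^2 (FW - i)^2)
    coeff = np.exp(-(a ** 2) * (FW - np.arange(FW)) ** 2)
    absorb_coeff = np.ones((nx, nz))
    for i in range(FW):
        # taking the elementwise minimum avoids double damping in the corners
        absorb_coeff[i, :] = np.minimum(absorb_coeff[i, :], coeff[i])                    # left edge
        absorb_coeff[nx - 1 - i, :] = np.minimum(absorb_coeff[nx - 1 - i, :], coeff[i])  # right edge
        absorb_coeff[:, nz - 1 - i] = np.minimum(absorb_coeff[:, nz - 1 - i], coeff[i])  # bottom edge
    # the top edge (j = 0) is left undamped to keep the free-surface boundary condition
    return absorb_coeff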
# Plot absorbing damping profile
# ------------------------------
fig = plt.figure(figsize=(6,4)) # define figure size
extent = [0.0,xmax,0.0,zmax] # define model extension
# calculate absorbing boundary weighting coefficients
nx = 400
nz = 400
absorb_coeff = absorb(nx,nz)
plt.imshow(absorb_coeff.T)
plt.colorbar()
plt.title('Sponge boundary condition')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
plt.show()
Explanation: This implementation of the Sponge boundary sets a free-surface boundary condition on top of the model, while incident waves at the other boundaries are absorbed:
End of explanation
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
def FD_2D_acoustic_JIT_absorb(dt,dx,dz,f0):
# define model discretization
# ---------------------------
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nz = (int)(zmax/dz) # number of grid points in x-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in x-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# define clip value: 0.1 * absolute maximum value of source wavelet
clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2
# Define absorbing boundary frame
# -------------------------------
# Define model
# ------------
vp = np.zeros((nx,nz))
vp = model(nx,nz,vp,dx,dz)
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# Initialize animation of pressure wavefield
# -----------------------------------------
fig = plt.figure(figsize=(7,3.5)) # define figure size
plt.tight_layout()
extent = [0.0,xmax,zmax,0.0] # define model extension
# Plot pressure wavefield movie
ax1 = plt.subplot(121)
image = plt.imshow(p.T, animated=True, cmap="RdBu", extent=extent,
interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Pressure wavefield')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
# Plot Vp-model
ax2 = plt.subplot(122)
image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest',
extent=extent)
plt.title('Vp-model')
plt.xlabel('x [m]')
plt.setp(ax2.get_yticklabels(), visible=False)
divider = make_axes_locatable(ax2)
cax2 = divider.append_axes("right", size="2%", pad=0.1)
fig.colorbar(image1, cax=cax2)
plt.ion()
plt.show(block=False)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz)
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Apply absorbing boundary frame to p and pnew
# Remap Time Levels
# -----------------
pold, p = p, pnew
# display pressure snapshots
if (it % isnap) == 0:
image.set_data(p.T)
fig.canvas.draw()
Explanation: The FD code itself requires only some small modifications: we have to add the absorb function to define the amount of damping in the boundary frame and apply the damping function to the pressure wavefields pnew and p
End of explanation
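For completeness, the two blanks left in FD_2D_acoustic_JIT_absorb above would, under the same assumptions, be filled in roughly as follows (a hedged sketch, not the author's exact solution):
# after nx and nz are defined, build the damping frame once:
#     absorb_coeff = absorb(nx, nz)
# and inside the time loop, just before remapping the time levels:
#     p = p * absorb_coeff
#     pnew = pnew * absorb_coeff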
%matplotlib notebook
dx = 5.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
f0 = 100.0 # centre frequency of the source wavelet (Hz)
# calculate dt according to the CFL-criterion
dt = dx / (np.sqrt(2.0) * vp0)
FD_2D_acoustic_JIT_absorb(dt,dx,dz,f0)
Explanation: Let's evaluate the influence of the Sponge boundaries on the artificial boundary reflections:
End of explanation |
6,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Histogrammar basic tutorial
Histogrammar is a Python package that allows you to make histograms from numpy arrays, and pandas and spark dataframes. (There is also a scala backend for Histogrammar.)
This basic tutorial shows how to
Step1: Data generation
Let's first load some data!
Step2: Let's fill a histogram!
Histogrammar treats histograms as objects. You will see this has various advantages.
Let's fill a simple histogram with a numpy array.
Step3: Histogrammar also supports open-ended histograms, which are sparsely represented. Open-ended histograms are used when you have a distribution of known scale (bin width) but unknown domain (lowest and highest bin index). Bins in a sparse histogram only get created and filled if the corresponding data points are encountered.
A sparse histogram has a binWidth, and optionally an origin parameter. The origin is the left edge of the bin whose index is 0 and is set to 0.0 by default. Sparse histograms are nice if you don't want to restrict the range, for example for tracking data distributions over time, which may have large, sudden outliers.
Step4: Filling from a dataframe
Let's make the same 1d (sparse) histogram directly from a (pandas) dataframe.
Step5: When importing histogrammar, pandas (and spark) dataframes get extra functions to create histograms that all start with "hg_". For example
Step6: Handy histogram methods
For any 1-dimensional histogram extract the bin entries, edges and centers as follows
Step7: Irregular bin histogram variants
There are two other open-ended histogram variants in addition to the SparselyBin we have seen before. Whereas SparselyBin is used when bins have equal width, the others offer similar alternatives to a single fixed bin width.
There are two ways
Step8: Note the slightly different plotting style for CentrallyBin histograms (e.g. x-axis labels are central values instead of edges).
Multi-dimensional histograms
Let's make a multi-dimensional histogram. In Histogrammar, a multi-dimensional histogram is composed of two nested (recursive) histograms.
We will use histograms with irregular binning in this example.
Step9: Accessing bin entries
For most 2+ dimensional histograms, one can get the bin entries and centers as follows
Step10: Accessing a sub-histogram
Depending on the histogram type of the first axis, hg.Bin or other, one can access the sub-histograms directly from
Step11: Histogram types recap
So far we have covered the histogram types
Step12: Categorize histograms also accept booleans
Step13: Other histogram functionality
There are several more histogram types
Step14: Stack histograms are useful to make efficiency curves.
With all these histograms you can make multi-dimensional histograms. For example, you can evaluate the mean and standard deviation of one feature as a function of bins of another feature. (A "profile" plot, similar to a box plot.)
Step15: Convenience functions
There are several convenience functions to make such composed histograms. These are
Step16: Overview of histograms
Here you can find the list of all available histograms and aggregators and how to use each one
Step17: Storage
Histograms can be easily stored and retrieved in/from the json format. | Python Code:
%%capture
# install histogrammar (if not installed yet)
import sys
!"{sys.executable}" -m pip install histogrammar
import histogrammar as hg
import pandas as pd
import numpy as np
import matplotlib
Explanation: Histogrammar basic tutorial
Histogrammar is a Python package that allows you to make histograms from numpy arrays, and pandas and spark dataframes. (There is also a scala backend for Histogrammar.)
This basic tutorial shows how to:
- make histograms with numpy arrays and pandas dataframes,
- plot them,
- make multi-dimensional histograms,
- the various histogram types,
- make many histograms at once,
- and store and retrieve them.
Enjoy!
End of explanation
# open a pandas dataframe for use below
from histogrammar import resources
df = pd.read_csv(resources.data("test.csv.gz"), parse_dates=["date"])
df.head()
Explanation: Data generation
Let's first load some data!
End of explanation
# this creates a histogram with 100 even-sized bins in the (closed) range [-5, 5]
hist1 = hg.Bin(num=100, low=-5, high=5)
# filling it with one data point:
hist1.fill(0.5)
hist1.entries
# filling the histogram with an array:
hist1.fill.numpy(np.random.normal(size=10000))
hist1.entries
# let's plot it
hist1.plot.matplotlib();
# Alternatively, you can call this to make the same histogram:
# hist1 = hg.Histogram(num=100, low=-5, high=5)
Explanation: Let's fill a histogram!
Histogrammar treats histograms as objects. You will see this has various advantages.
Let's fill a simple histogram with a numpy array.
End of explanation
hist2 = hg.SparselyBin(binWidth=10, origin=0)
hist2.fill.numpy(df['age'].values)
hist2.plot.matplotlib();
# Alternatively, you can call this to make the same histogram:
# hist2 = hg.SparselyHistogram(binWidth=10)
Explanation: Histogrammar also supports open-ended histograms, which are sparsely represented. Open-ended histograms are used when you have a distribution of known scale (bin width) but unknown domain (lowest and highest bin index). Bins in a sparse histogram only get created and filled if the corresponding data points are encountered.
A sparse histogram has a binWidth, and optionally an origin parameter. The origin is the left edge of the bin whose index is 0 and is set to 0.0 by default. Sparse histograms are nice if you don't want to restrict the range, for example for tracking data distributions over time, which may have large, sudden outliers.
End of explanation
hist3 = hg.SparselyBin(binWidth=10, origin=0, quantity='age')
hist3.fill.numpy(df)
hist3.plot.matplotlib();
Explanation: Filling from a dataframe
Let's make the same 1d (sparse) histogram directly from a (pandas) dataframe.
End of explanation
# Alternatively, do:
hist3 = df.hg_SparselyBin(binWidth=10, origin=0, quantity='age')
# ... where hist3 automatically picks up column age from the dataframe,
# ... and does not need to be filled by calling fill.numpy() explicitly.
Explanation: When importing histogrammar, pandas (and spark) dataframes get extra functions to create histograms that all start with "hg_". For example: hg_Bin or hg_SparselyBin.
Note that the column "age" is picked by setting quantity="age", and also that the filling step is done automatically.
End of explanation
# full range of bin entries, and those in a specified range:
(hist3.bin_entries(), hist3.bin_entries(low=30, high=80))
# full range of bin edges, and those in a specified range:
(hist3.bin_edges(), hist3.bin_edges(low=31, high=71))
# full range of bin centers, and those in a specified range:
(hist3.bin_centers(), hist3.bin_centers(low=31, high=80))
hsum = hist2 + hist3
hsum.entries
hsum *= 4
hsum.entries
Explanation: Handy histogram methods
For any 1-dimensional histogram extract the bin entries, edges and centers as follows:
End of explanation
hist4 = hg.CentrallyBin(centers=[15, 25, 35, 45, 55, 65, 75, 85, 95], quantity='age')
hist4.fill.numpy(df)
hist4.plot.matplotlib();
hist4.bin_edges()
Explanation: Irregular bin histogram variants
There are two other open-ended histogram variants in addition to the SparselyBin we have seen before. Whereas SparselyBin is used when bins have equal width, the others offer similar alternatives to a single fixed bin width.
There are two ways:
- CentrallyBin histograms, defined by specifying bin centers;
- IrregularlyBin histograms, with irregular bin edges.
They both partition a space into irregular subdomains with no gaps and no overlaps.
End of explanation
edges1 = [-100, -75, -50, -25, 0, 25, 50, 75, 100]
edges2 = [-200, -150, -100, -50, 0, 50, 100, 150, 200]
hist1 = hg.IrregularlyBin(edges=edges1, quantity='latitude')
hist2 = hg.IrregularlyBin(edges=edges2, quantity='longitude', value=hist1)
# for 3 dimensions or higher simply add the 2-dim histogram to the value argument
hist3 = hg.SparselyBin(binWidth=10, quantity='age', value=hist2)
hist1.bin_centers()
hist2.bin_centers()
hist2.fill.numpy(df)
hist2.plot.matplotlib();
# number of dimensions per histogram
(hist1.n_dim, hist2.n_dim, hist3.n_dim)
Explanation: Note the slightly different plotting style for CentrallyBin histograms (e.g. x-axis labels are central values instead of edges).
Multi-dimensional histograms
Let's make a multi-dimensional histogram. In Histogrammar, a multi-dimensional histogram is composed as two recursive histograms.
We will use histograms with irregular binning in this example.
End of explanation
from histogrammar.plot.hist_numpy import get_2dgrid
x_labels, y_labels, grid = get_2dgrid(hist2)
y_labels, grid
Explanation: Accessing bin entries
For most 2+ dimensional histograms, one can get the bin entries and centers as follows:
End of explanation
# Access sub-histograms of the IrregularlyBin from hist.bins
# The first item of the tuple is the lower bin-edge of the bin.
hist2.bins[1]
h = hist2.bins[1][1]
h.plot.matplotlib()
h.bin_entries()
Explanation: Accessing a sub-histogram
Depending on the histogram type of the first axis, hg.Bin or other, one can access the sub-histograms directly from:
hist.values or
hist.bins
End of explanation
histy = hg.Categorize('eyeColor')
histx = hg.Categorize('favoriteFruit', value=histy)
histx.fill.numpy(df)
histx.plot.matplotlib();
# show the datatype(s) of the histogram
histx.datatype
Explanation: Histogram types recap
So far we have covered the histogram types:
- Bin histograms: with a fixed range and even-sized bins,
- SparselyBin histograms: open-ended and with a fixed bin-width,
- CentrallyBin histograms: open-ended and using bin centers.
- IrregularlyBin histograms: open-ended and using (irregular) bin edges,
All of these process numeric variables only.
Categorical variables
For categorical variables use the Categorize histogram
- Categorize histograms: accepting categorical variables such as strings and booleans.
End of explanation
histy = hg.Categorize('isActive')
histy.fill.numpy(df)
histy.plot.matplotlib();
histy.bin_entries()
histy.bin_labels()
# histy.bin_centers() will work as well for Categorize histograms
Explanation: Categorize histograms also accept booleans:
End of explanation
hmin = df.hg_Minimize('latitude')
hmax = df.hg_Maximize('longitude')
(hmin.min, hmax.max)
havg = df.hg_Average('latitude')
hdev = df.hg_Deviate('longitude')
(havg.mean, hdev.mean, hdev.variance)
hsum = df.hg_Sum('age')
hsum.sum
# let's illustrate the Stack histogram with longitude distribution
# first we plot the regular distribution
hl = df.hg_SparselyBin(25, 'longitude')
hl.plot.matplotlib();
# Stack counts how often data points are greater or equal to the provided thresholds
thresholds = [-200, -150, -100, -50, 0, 50, 100, 150, 200]
hs = df.hg_Stack(thresholds=thresholds, quantity='longitude')
hs.thresholds
hs.bin_entries()
Explanation: Other histogram functionality
There are several more histogram types:
- Minimize, Maximize: keep track of the min or max value of a numeric distribution,
- Average, Deviate: keep track of the mean or mean and standard deviation of a numeric distribution,
- Sum: keep track of the sum of a numeric distribution,
- Stack: keep track how many data points pass certain thresholds.
- Bag: works like a dict, it keeps tracks of all unique values encountered in a column, and can also do this for vector s of numbers. For strings, Bag works just like the Categorize histogram.
End of explanation
hav = hg.Deviate('age')
hlo = hg.SparselyBin(25, 'longitude', value=hav)
hlo.fill.numpy(df)
hlo.bins
hlo.plot.matplotlib();
Explanation: Stack histograms are useful to make efficiency curves.
With all these histograms you can make multi-dimensional histograms. For example, you can evaluate the mean and standard deviation of one feature as a function of bins of another feature. (A "profile" plot, similar to a box plot.)
End of explanation
# For example, call this convenience function to make the same histogram as above:
hlo = df.hg_SparselyProfileErr(25, 'longitude', 'age')
hlo.plot.matplotlib();
Explanation: Convenience functions
There are several convenience functions to make such composed histograms. These are:
- Profile: Convenience function for creating binwise averages.
- SparselyProfile: Convenience function for creating sparsely binned binwise averages.
- ProfileErr: Convenience function for creating binwise averages and variances.
- SparselyProfileErr: Convenience function for creating sparsely binned binwise averages and variances.
- TwoDimensionallyHistogram: Convenience function for creating a conventional, two-dimensional histogram.
- TwoDimensionallySparselyHistogram: Convenience function for creating a sparsely binned, two-dimensional histogram.
End of explanation
hists = df.hg_make_histograms()
hists.keys()
h = hists['transaction']
h.plot.matplotlib();
h = hists['date']
h.plot.matplotlib();
# you can also select which and make multi-dimensional histograms
hists = df.hg_make_histograms(features = ['longitude:age'])
hist = hists['longitude:age']
hist.plot.matplotlib();
Explanation: Overview of histograms
Here you can find the list of all available histograms and aggregators and how to use each one:
https://histogrammar.github.io/histogrammar-docs/specification/1.0/
The most useful aggregators are the following. Tinker with them to get familiar; building up an analysis is easier when you know "there's an app for that."
Simple counters:
Count: just counts. Every aggregator has an entries field, but Count only has this field.
Average and Deviate: add mean and variance, cumulatively.
Minimize and Maximize: lowest and highest value seen.
Histogram-like objects:
Bin and SparselyBin: split a numerical domain into uniform bins and redirect aggregation into those bins.
Categorize: split a string-valued domain by unique values; good for making bar charts (which are histograms with a string-valued axis).
CentrallyBin and IrregularlyBin: split a numerical domain into arbitrary subintervals, usually for separate plots like particle pseudorapidity or collision centrality.
Collections:
Label, UntypedLabel, and Index: bundle objects with string-based keys (Label and UntypedLabel) or simply an ordered array (effectively, integer-based keys) consisting of a single type (Label and Index) or any types (UntypedLabel).
Branch: for the fourth case, an ordered array of any types. A Branch is useful as a "cable splitter". For instance, to make a histogram that tracks minimum and maximum value, do this:
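A minimal sketch of that idea (the column names below come from the dataframe used earlier in this notebook and are chosen for illustration; this is not the official documentation example):
# track the minimum and maximum longitude within bins of age (illustrative)
h_minmax = df.hg_Bin(num=10, low=0, high=100, quantity='age',
                     value=hg.Branch(hg.Minimize('longitude'), hg.Maximize('longitude')))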
Making many histograms at once
There is a nice method to make many histograms in one go. See here.
By default automagical binning is applied to make the histograms.
More details on how to use this function are found in the advanced tutorial.
End of explanation
# storage
hist.toJsonFile('long_age.json')
# retrieval
factory = hg.Factory()
hist2 = factory.fromJsonFile('long_age.json')
hist2.plot.matplotlib();
# we can store the histograms if we want to
import json
from histogrammar.util import dumper
# store
with open('histograms.json', 'w') as outfile:
json.dump(hists, outfile, default=dumper)
# and load again
with open('histograms.json') as handle:
hists2 = json.load(handle)
hists.keys()
Explanation: Storage
Histograms can be easily stored and retrieved in/from the json format.
End of explanation |
6,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
02. Connect datalab
datalab can be thought of as a Jupyter notebook service hosted on Google Cloud
Since the recent 20170818 release, both python2 and python3 kernels are available (previously only python2)
datalab has very fast IO against Google Cloud Storage / BigQuery and other Google Cloud services
datalab install
datalab connect [datalab instance name]
<img src="../images/007_connect_datalab.png" width="700" height="700">
- A view of Google Datalab
Step1: Convert the result to a pandas dataframe
Step2: Check the project samples using the %bq magic command
Step3: Run a query directly with %%bq query
Step4: Create a pandas data frame using the magic command
Step5: Draw a chart directly using the Google Chart API | Python Code:
import google.datalab.bigquery as bq
# Create the query
query_string = '''
#standardSQL
SELECT corpus AS title, COUNT(*) AS unique_words
FROM `publicdata.samples.shakespeare`
GROUP BY title
ORDER BY unique_words DESC
LIMIT 10
'''
query = bq.Query(query_string)
output_options = bq.QueryOutput.table(use_cache=True)
result = query.execute(output_options=output_options).result() # execute the query
result
Explanation: 02. Connect datalab
datalab can be thought of as a Jupyter notebook service hosted on Google Cloud
Since the recent 20170818 release, both python2 and python3 kernels are available (previously only python2)
datalab has very fast IO against Google Cloud Storage / BigQuery and other Google Cloud services
datalab install
datalab connect [datalab instance name]
<img src="../images/007_connect_datalab.png" width="700" height="700">
- A view of Google Datalab
End of explanation
pandas_df = result.to_dataframe()
pandas_df
sample_dataset = bq.Dataset('bigquery-public-data.samples')
# check whether the dataset exists
sample_dataset.exists()
Explanation: Convert the result to a pandas dataframe
End of explanation
%bq datasets list --project cloud-datalab-samples
Explanation: Check the project samples using the %bq magic command
End of explanation
%%bq query
#standardSQL
SELECT corpus AS title, COUNT(*) AS unique_words
FROM `publicdata.samples.shakespeare`
GROUP BY title
ORDER BY unique_words DESC
LIMIT 10
Explanation: Run a query directly with %%bq query
End of explanation
%%bq query -n requests
SELECT timestamp, latency, endpoint
FROM `cloud-datalab-samples.httplogs.logs_20140615`
WHERE endpoint = 'Popular' OR endpoint = 'Recent'
df = requests.execute(output_options=bq.QueryOutput.dataframe()).result()
len(df)
df.head()
Explanation: Create a pandas data frame using the magic command
End of explanation
%%bq query --name data
WITH quantiles AS (
SELECT APPROX_QUANTILES(LOG10(latency), 50) AS timearray
FROM `cloud-datalab-samples.httplogs.logs_20140615`
WHERE latency <> 0
)
select row_number() over(order by time) as percentile, time from quantiles cross join unnest(quantiles.timearray) as time
order by percentile
%chart columns --data data --fields percentile,time
Explanation: Draw a chart directly using the Google Chart API
End of explanation |
6,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
linear regression
Solving directly with the closed-form (analytical) solution
Step1: $y = Xw$
$w = (X^T X)^{-1} X^T y$
Step2: Results
w1 = 2.97396653
w2 = -0.54139002
w3 = 0.97132913
b = 2.03076198
Step3: Solving with gradient descent
Choose the objective function carefully; scaling it by an appropriate constant factor up front helps.
Be sure to verify that the gradient computation is correct...
Step4: Solving with stochastic gradient descent
Stochastic gradient descent
A fixed step size is used.
Starting with a step size of 0.1, the required precision could never be reached,
so a condition was added to shrink the step size as training proceeds. | Python Code:
import numpy as np
from numpy.linalg import inv, qr
# df is assumed to be loaded in an earlier (omitted) cell, with feature columns and a target column y
df['x4'] = 1  # constant column for the intercept term
X = df.iloc[:,(0,1,2,4)].values
y = df.y.values
Explanation: linear regression
Solving directly with the closed-form (analytical) solution
End of explanation
inv_XX_T = inv(X.T.dot(X))
w = inv_XX_T.dot(X.T).dot(df.y.values)
w
Explanation: $y = Xw$
$w = (X^T X)^{-1} X^T y$
End of explanation
qr(inv_XX_T)
X.shape
#solve(X,y)  ## solve() only works for square systems
Explanation: Results
w1 = 2.97396653
w2 = -0.54139002
w3 = 0.97132913
b = 2.03076198
End of explanation
def f(w,X,y):
return ((X.dot(w)-y)**2/(2*1000)).sum()
def grad_f(w,X,y):
return (X.dot(w) - y).dot(X)/1000
w0 = np.array([100.0,100.0,100.0,100.0])
epsilon = 1e-10
alpha = 0.1
check_condition = 1
while check_condition > epsilon:
w0 += -alpha*grad_f(w0,X,y)
check_condition = abs(grad_f(w0,X,y)).sum()
print(w0)
Explanation: Solving with gradient descent
Choose the objective function carefully; scaling it by an appropriate constant factor up front helps.
Be sure to verify that the gradient computation is correct...
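For reference, the objective and gradient implemented in f and grad_f above are (with $n = 1000$ samples):
$$ f(w) = \frac{1}{2n}\sum_{i=1}^{n}\left(x_i^T w - y_i\right)^2, \qquad \nabla f(w) = \frac{1}{n} X^T (Xw - y) $$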
End of explanation
def cost_function(w,X,y):
return (X.dot(w)-y)**2/2
def grad_cost_f(w,X,y):
return (np.dot(X, w) - y)*X
w0 = np.array([1.0, 1.0, 1.0, 1.0])
epsilon = 1e-3
alpha = 0.01
# generate a random permutation of indices, used to visit the data points in random order
random_index = np.arange(1000)
np.random.shuffle(random_index)
cost_value = np.inf  # initialize the objective function value
while abs(grad_f(w0,X,y)).sum() > epsilon:
for i in range(1000):
w0 += -alpha*grad_cost_f(w0,X[random_index[i]],y[random_index[i]])
# check how the objective is changing; if the change falls below the threshold, switch to a smaller step size and continue
difference = cost_value - f(w0, X, y)
if difference < 1e-10:
alpha *= 0.9
cost_value = f(w0, X, y)
print(w0)
Explanation: Solving with stochastic gradient descent
Stochastic gradient descent
A fixed step size is used.
Starting with a step size of 0.1, the required precision could never be reached,
so a condition was added to shrink the step size as training proceeds.
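The per-sample update implemented in grad_cost_f above corresponds to:
$$ w \leftarrow w - \alpha \left(x_i^T w - y_i\right) x_i $$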
End of explanation |
6,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides.describe()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
data.head()
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
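For reference, "going backwards" later simply inverts the standardization with the stored factors; a one-line sketch using the dictionary above:
# Invert the scaling for the target column (the same trick is used when plotting predictions).
mean, std = scaled_features['cnt']
original_cnt = data['cnt'] * std + mean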
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1.0 / (1.0 + np.exp(-1.0 * x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(error, self.weights_hidden_to_output.T)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error * hidden_outputs * (1.0 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += X[:, None] * hidden_error_term[None, :]
# Weight step (hidden to output)
delta_weights_h_o += hidden_outputs[:, None] * output_error_term
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
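Spelling out the hint above: since the output activation is $f(x) = x$, its derivative is $f'(x) = 1$, so the backpropagated output error term is just the error itself, $\delta_{output} = (y - \hat{y}) \cdot f'(x) = y - \hat{y}$; this is why output_error_term = error in the implementation above.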
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
output_nodes = 1
# Grid search
iterations_list = [100, 1000, 10000]
learning_rate_list = [0.01, 0.03, 0.1, 0.3]
hidden_nodes_list = [2, 5, 10, 20]
N_i = train_features.shape[1]
for iterations in iterations_list:
for learning_rate in learning_rate_list:
for hidden_nodes in hidden_nodes_list:
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
sys.stdout.write("iterations: {0}, learning_rate: {1}, hidden_nodes: {2}\n".format(
iterations, learning_rate, hidden_nodes))
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
sys.stdout.write("\n")
import sys
### Set the hyperparameters here ###
iterations = 10000 # Best
learning_rate = 0.3 # Best
hidden_nodes = 20 # Best
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
import sys
### Set the hyperparameters here ###
iterations = 10000 # Best
learning_rate = 0.3 # Best
hidden_nodes = 10 # Best is 20, but validation loss has more variance, so I lower hidden_nodes to 10.
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
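As a small addition, the recorded losses dictionary makes it easy to read off where the validation loss bottoms out, which is roughly the point where overfitting begins:
import numpy as np
best_iter = int(np.argmin(losses['validation']))
print(best_iter, losses['validation'][best_iter])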
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
6,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 0
Step1: Section 1
Step2: Section 2
Step3: Section 3
Step4: Section 4
Recommendation and predictions for Articles
Recommendation method
Step5: Testing data for cluster assigning. | Python Code:
dataset= sf.SFrame('Dataset/KO_data.csv')
dataset.remove_column('X1')
dataset= dataset.add_row_number()
dataset.rename({'id':'X1'})
tfidfvec= TfidfVectorizer(stop_words='english')
tf_idf_matrix= tfidfvec.fit_transform(dataset['text'])
tf_idf_matrix = normalize(tf_idf_matrix)
Explanation: Section 0:
Dataset definition and feature extraction (tf-idf)
End of explanation
#Smart Initialization for means with using KMeans++ model
def initialize_means(num_clusters,features_matrix):
from sklearn.cluster import KMeans
np.random.seed(5)
kmeans_model = KMeans(n_clusters=num_clusters, init='k-means++', n_init=5, max_iter=400, random_state=1, n_jobs=1)
kmeans_model.fit(features_matrix)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
means = [centroid for centroid in centroids]
return [means , cluster_assignment]
#Smart initialization for weights
def initialize_weights(num_clusters,features_matrix,cluster_assignment):
num_docs = features_matrix.shape[0]
weights = []
for i in xrange(num_clusters):
num_assigned = len(cluster_assignment[cluster_assignment==i]) # YOUR CODE HERE
w = float(num_assigned) / num_docs
weights.append(w)
return weights
#Smart initialization for covariances
def initialize_covs(num_clusters,features_matrix,cluster_assignment):
covs = []
for i in xrange(num_clusters):
member_rows = features_matrix[cluster_assignment==i]
cov = (member_rows.multiply(member_rows) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \
+ means[i]**2
cov[cov < 1e-8] = 1e-8
covs.append(cov)
return covs
Explanation: Section 1:
Model Parameters smart initialization
A KMeans++ model is used to initialize the parameters for the EM algorithm.
- KMeans++ is used to initialize the means (the cluster centroids)
End of explanation
# Model 1 with 10 clusters
(means , cluster_assignment_10model)= initialize_means(10,tf_idf_matrix)
covs= initialize_covs(10,tf_idf_matrix, cluster_assignment_10model)
weights= initialize_weights(10,tf_idf_matrix, cluster_assignment_10model)
model_em_10k= EM_for_high_dimension(tf_idf_matrix, means, covs, weights, cov_smoothing=1e-10)
# Model 2 with 20 clusters.
(means , cluster_assignment_20model)= initialize_means(20,tf_idf_matrix)
covs= initialize_covs(20,tf_idf_matrix, cluster_assignment_20model)
weights= initialize_weights(20,tf_idf_matrix, cluster_assignment_20model)
model_em_20k= EM_for_high_dimension(tf_idf_matrix, means, covs, weights, cov_smoothing=1e-10)
Explanation: Section 2:
Training Models with different number of clusters
Initialize the parameters for each model, then start training with the Expectation-Maximization algorithm.
End of explanation
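EM_for_high_dimension itself comes from the course utilities and is not shown in this excerpt. Purely as an illustration of the idea, here is a dense, simplified diagonal-covariance EM loop (a sketch, not the actual implementation used above):
import numpy as np
def log_gaussian_diag(x, mean, var):
    # Log-density of N(x | mean, diag(var)) for a dense 1-D vector x.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
def em_diagonal_gmm(data, means, covs, weights, n_iter=50):
    # data: dense (n, d) array; means/covs: length-k lists of (d,) arrays; weights: length k.
    n, d = data.shape
    k = len(means)
    for _ in range(n_iter):
        # E-step: responsibilities, computed stably in log space.
        log_r = np.array([[np.log(weights[j]) + log_gaussian_diag(data[i], means[j], covs[j])
                           for j in range(k)] for i in range(n)])
        log_r -= log_r.max(axis=1, keepdims=True)
        resp = np.exp(log_r)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and diagonal covariances.
        nk = resp.sum(axis=0)
        weights = nk / n
        means = [resp[:, j] @ data / nk[j] for j in range(k)]
        covs = [resp[:, j] @ (data - means[j]) ** 2 / nk[j] + 1e-8 for j in range(k)]
    return {'weights': weights, 'means': means, 'covs': covs, 'resp': resp}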
def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):
print('')
print('==========================================================')
num_clusters = len(means)
for c in xrange(num_clusters):
print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))
print('\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance'))
# The k'th element of sorted_word_ids should be the index of the word
# that has the k'th-largest value in the cluster mean. Hint: Use np.argsort().
sorted_word_ids = np.argsort(means[c])[::-1]
for i in sorted_word_ids[:10]:
print '{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word[i],
means[c][i],
covs[c][i])
print '\n=========================================================='
def clusters_report(clusters_idx):
cluster_id=0
for cluster_indicies in clusters_idx:
countP=0
countB=0
countE=0
for i in cluster_indicies:
if dataset['category'][i]=='product':
countP+=1
elif dataset['category'][i]=='engineering':
countE+=1
elif dataset['category'][i]=='business':
countB+=1
print "Cluster ",cluster_id ,"\n==========================\n"
cluster_id+=1
print "product count : ",countP ,"\nengineering count : ",countE,"\nbusiness count : ",countB , "\n"
visualize_EM_clusters(tf_idf_matrix, model_em_10k['means'], model_em_10k['covs'], tfidfvec.get_feature_names())
visualize_EM_clusters(tf_idf_matrix, model_em_20k['means'], model_em_20k['covs'], tfidfvec.get_feature_names())
# No. of articles in each cluster for first model with 10 clusters
resps_10k= sf.SFrame(model_em_10k['resp'])
resps_10k= resps_10k.unpack('X1', '')
cluster_id=0
cluster_hash_10model = {}
for col in resps_10k.column_names():
cluster_10k= np.array(resps_10k[col])
print "cluster ",cluster_id , "assignments: ", cluster_10k.sum()
cluster_hash_10model[cluster_id] =cluster_10k.nonzero()
cluster_id+=1
# No. of articles in each cluster for second model with 20 clusters
resps_20k= sf.SFrame(model_em_20k['resp'])
resps_20k= resps_20k.unpack('X1', '')
cluster_id=0
cluster_hash_20model = {}
for col in resps_20k.column_names():
cluster_20k= np.array(resps_20k[col])
print "cluster ",cluster_id , "assignments: ", cluster_20k.sum()
cluster_hash_20model[cluster_id] =cluster_20k.nonzero()
cluster_id+=1
# Articles' categories in model 1 with 10 clusters
clusters_10k_idx=[]
for col in resps_10k.column_names():
cluster_10k= np.array(resps_10k[col])
cluster_10k= cluster_10k.nonzero()[0]
clusters_10k_idx.append(cluster_10k)
clusters_report(clusters_10k_idx)
# Articles' categories in model 2 with 20 clusters
clusters_20k_idx=[]
for col in resps_20k.column_names():
cluster_20k= np.array(resps_20k[col])
cluster_20k= cluster_20k.nonzero()[0]
clusters_20k_idx.append(cluster_20k)
clusters_report(clusters_20k_idx)
Explanation: Section 3:
Evaluation report for each cluster (Interpreting clusters)
The evaluation report is divided into two parts: the first is the word representation of each cluster, which is what really interprets the cluster; the second is the variety of article types in each cluster, counting each category per cluster.
End of explanation
def articles_inds(article_id , cluster_hash_model):
for cluster_id in cluster_hash_model:
np_array = np.array(cluster_hash_model[cluster_id])
if article_id in np_array:
return cluster_id, np_array
def recommender(article_id ,cluster_hash_model, no_articles, data_articles):
start_time = time.time()
cid , inds = articles_inds(article_id ,cluster_hash_model)
cluster_articles= data_articles.filter_by(inds[0] , 'X1')
cluster_articles = cluster_articles.add_row_number()
recom_vec= TfidfVectorizer(stop_words='english')
tfidf_recommend= recom_vec.fit_transform(cluster_articles['text'])
tfidf_recommend = normalize(tfidf_recommend)
row_id = cluster_articles[cluster_articles['X1']==article_id]['id'][0]
NN_model = NearestNeighbors(n_neighbors=no_articles).fit(tfidf_recommend)
distances, indices = NN_model.kneighbors(tfidf_recommend[row_id])
recommended_ids=[]
for i in indices[0]:
recommended_ids.append(cluster_articles[cluster_articles['id']==i]['X1'][0])
del cluster_articles
del tfidf_recommend
del recom_vec
#print("--- %s seconds ---" % (time.time() - start_time))
#print len(inds[0])
return recommended_ids
def predict_cluster(articles,em_model):
article_tfidf= tfidfvec.transform(articles['text'])
mu= deepcopy(em_model['means'])
sigma= deepcopy(em_model['covs'])
assignments=[]
for j in range(article_tfidf.shape[0]):
resps=[]
for i in range(len(em_model['weights'])):
predict= np.log(em_model['weights'][i]) + logpdf_diagonal_gaussian(article_tfidf[j], mu[i],sigma[i])
resps.append(predict)
assignments.append(resps.index(np.max(resps)))
return assignments
# Recommend articles for all dataset then append it into the SFrame database then export it.
recommended_inds = []
start_time = time.time()
for i in range(len(dataset)):
recommended_inds.append(recommender(i,cluster_hash_20model,11,dataset))
print("--- %s seconds (Final time complexity): ---" % (time.time() - start_time))
rec_inds= sf.SArray(recommended_inds)
dataset.add_column(rec_inds,name='recommendations')
dataset.save('Articles_with_recommendations.csv',format='csv')
#Saving each cluster data in a seperate CSV file
for cluster_id in cluster_hash_20model:
ind= np.array(cluster_hash_20model[cluster_id])
#print ind
cluster_articles= dataset.filter_by(ind[0] , 'X1')
cluster_articles.save('Clusters_model20/cluster_'+str(cluster_id)+'.csv',format='csv')
del cluster_articles
Explanation: Section 4
Recommendation and predictions for Articles
Recommendation method:
A method for recommending articles: retrieve the cluster the article belongs to, fetch all the articles in that cluster, and pass them to a nearest-neighbour model to find the best 10 articles to recommend for this article.
Predicting method:
Send a set of articles and predict the cluster each one belongs to, based on the trained model.
The test dataset is used to predict the cluster for each article with the two different models.
End of explanation
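logpdf_diagonal_gaussian used in predict_cluster above is another course utility that is not defined in this excerpt. A plausible stand-in, assuming a single (possibly sparse) tf-idf row and a diagonal covariance; the real helper may differ:
import numpy as np
def logpdf_diagonal_gaussian(x, mean, cov):
    # Log-density of a diagonal Gaussian evaluated at one row vector x.
    if hasattr(x, 'toarray'):  # handle a scipy.sparse row
        x = x.toarray()
    x = np.ravel(np.asarray(x))
    mean, cov = np.ravel(mean), np.ravel(cov)
    return -0.5 * np.sum(np.log(2 * np.pi * cov) + (x - mean) ** 2 / cov)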
testset = sf.SFrame('Dataset/KO_articles_test.csv')
test_tfidf= tfidfvec.transform(testset['text'])
# Predict Using model with 10 clusters.
test_predictions= predict_cluster(testset,model_em_10k)
test_predictions= np.array(test_predictions)
test_predictions
# Predict Using model with 20 clusters.
test_predictions= predict_cluster(testset,model_em_20k)
test_predictions= np.array(test_predictions)
test_predictions
Explanation: Testing data for cluster assigning.
End of explanation |
6,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step7: Code to fit GMM
Step8: Load validation experiment dataframe
Step9: Validate that N->F columns computed as expected
Step10: Filter sequences with insufficient plasmids
Find zero-plasmid sequences
Step11: Find low-plasmid count sequences
These selection values are unreliable and noisier
Step12: Drop sequences that don't meet the plasmid count bars
Step13: Add pseudocounts
Step14: Compute viral selection
Step15: Compute GMM threshold
Step16: De-dupe model-designed sequences
Partition the sequences that should not be de-duped
Split off the partitions for which we want to retain replicates, such as controls/etc.
Step17: Concatenate de-deduped ML-generated seqs with rest
Step18: Compute edit distance for chip
Step19: Concat with training data chip | Python Code:
import os
import numpy
import pandas
from six.moves import zip
from sklearn import mixture
import gzip
!pip install python-Levenshtein
import Levenshtein
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
R1_TILE21_WT_SEQ = 'DEEEIRTTNPVATEQYGSVSTNLQRGNR'
# Covariance type to use in Gaussian Mixture Model.
_COVAR_TYPE = 'full'
# Number of components to use in Gaussian Mixture Model.
_NUM_COMPONENTS = 2
class BinningLabeler(object):
  """Emits class labels from provided cutoff values.
  Input cutoffs are encoded as 1-D arrays. Given a cutoffs array of
  size n, creates n+1 labels for cutoffs, where the first bin is
  [-inf, cutoffs[0]], and last bin is (cutoffs[-1], inf].
  """
def __init__(self, cutoffs):
    """Constructor.
    Args:
      cutoffs: (numpy.ndarray or list or numeric) values to bin data at. First bin
        is [-inf, cutoffs[0]], and last bin is (cutoffs[-1], inf].
    Raises:
      ValueError: If no cutoff(s) (i.e. an empty list) is provided.
    """
cutoffs = numpy.atleast_1d(cutoffs)
if cutoffs.size:
self._cutoffs = numpy.sort(cutoffs)
else:
raise ValueError('Invalid cutoffs. At least one cutoff value required.')
def predict(self, values):
    """Provides model labels for input value(s) using the cutoff bins.
    Args:
      values: (numpy.ndarray or numeric) Value(s) to infer a label on.
    Returns:
      A numpy array with length len(values) and labels corresponding to
      categories defined by the cutoffs array intervals. The labels are
      [0, 1, . . ., n], where n = len(cutoffs). Note, labels correspond to bins
      in sorted order from smallest to largest cutoff value.
    """
return numpy.digitize(values, self._cutoffs)
class TwoGaussianMixtureModelLabeler(object):
  """Emits class labels from Gaussian Mixture given input data.
  Input data is encoded as 1-D arrays. Allows for an optional ambiguous label
  between the two modelled Gaussian distributions. Without the optional
  ambiguous category, the two labels are:
  0 - For values more likely derived from the Gaussian with smaller mean
  2 - For values more likely derived from the Gaussian with larger mean
  When allowing for an ambiguous category the three labels are:
  0 - For values more likely derived from the Gaussian with smaller mean
  1 - For values which fall within an ambiguous probability cutoff.
  2 - For values more likely derived from the Gaussian with larger mean
  """
def __init__(self, data):
    """Constructor.
    Args:
      data: (numpy.ndarray or list) Input data to model with Gaussian Mixture.
        Input data is presumed to be in the form [x1, x2, ...., xn].
    """
self._data = numpy.array([data]).T
self._gmm = mixture.GaussianMixture(
n_components=_NUM_COMPONENTS,
covariance_type=_COVAR_TYPE).fit(self._data)
# Re-map the gaussian with smaller mean to the "0" label.
self._label_by_index = dict(
list(zip([0, 1],
numpy.argsort(self._gmm.means_[:, 0]).tolist())))
self._label_by_index_fn = numpy.vectorize(lambda x: self._label_by_index[x])
def predict(self, values, probability_cutoff=0.):
    """Provides model labels for input value(s) using the GMM.
    Args:
      values: (array or single float value) Value(s) to infer a label on.
        When values=None, predictions are run on self._data.
      probability_cutoff: (float) Probability between 0 and 1 to identify which
        values correspond to ambiguous labels. At probability_cutoff=0 (default)
        it only returns the original two state predictions.
    Returns:
      A numpy array with length len(values) and labels corresponding to 0, 2 if
      probability_cutoff = 0 and 0, 1, 2 otherwise. In the latter, 0
      corresponds to the gaussian with smaller mean, 1 corresponds to the
      ambiguous label, and 2 corresponds to the gaussian with larger mean.
    """
values = numpy.atleast_1d(values)
values = numpy.array([values]).T
predictions = self._label_by_index_fn(self._gmm.predict(values))
# Re-map the initial 0,1 predictions to 0,2.
predictions *= 2
if probability_cutoff > 0:
probas = self._gmm.predict_proba(values)
max_probas = numpy.max(probas, axis=1)
ambiguous_values = max_probas < probability_cutoff
# Set ambiguous label as 1.
predictions[ambiguous_values] = 1
return predictions
Explanation: Code to fit GMM
End of explanation
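A quick toy illustration of how the two labelers behave (example data only, not part of the original analysis):
# Toy data drawn from two well-separated Gaussians.
rng = numpy.random.RandomState(0)
toy = numpy.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)])
# Fixed threshold at 0: label 0 below the cutoff, 1 above it.
bin_labels = BinningLabeler(cutoffs=[0.0]).predict(toy)
# GMM labels: 0 / 2 for the two components, 1 for ambiguous points.
gmm_labels = TwoGaussianMixtureModelLabeler(toy).predict(toy, probability_cutoff=0.95)
print(numpy.bincount(bin_labels), numpy.bincount(gmm_labels))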
with gzip.open('GAS1_target_20190516.csv.gz', 'rb') as f:
gas1 = pandas.read_csv(f, index_col=None)
gas1 = gas1.rename({
'aa': 'sequence',
'mask': 'mutation_sequence',
'mut': 'num_mutations',
'category': 'partition',
}, axis=1)
gas1_orig = gas1.copy() ## for comparison below if needed
gas1.head()
Explanation: Load validation experiment dataframe
End of explanation
numpy.testing.assert_allclose(
gas1.GAS1_plasmid_F,
gas1.GAS1_plasmid_N / gas1.GAS1_plasmid_N.sum())
numpy.testing.assert_allclose(
gas1.GAS1_virus_F,
gas1.GAS1_virus_N / gas1.GAS1_virus_N.sum())
Explanation: Validate that N->F columns computed as expected
End of explanation
zero_plasmids_mask = gas1.GAS1_plasmid_N == 0
zero_plasmids_mask.sum()
Explanation: Filter sequences with insufficient plasmids
Find zero-plasmid sequences
End of explanation
low_plasmids_mask = (gas1.GAS1_plasmid_N < 10) & ~zero_plasmids_mask
low_plasmids_mask.sum()
Explanation: Find low-plasmid count sequences
These selection values are unreliable and noisier
End of explanation
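As a rough back-of-the-envelope illustration (added here, not from the original analysis): under Poisson counting noise the relative error of a count N scales like 1/sqrt(N), so selection values built from fewer than about 10 plasmid reads carry large uncertainty.
import numpy as np
for n in [1, 5, 10, 100, 1000]:
    print(n, 'relative error ~ %.2f' % (1 / np.sqrt(n)))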
seqs_to_remove = (low_plasmids_mask | zero_plasmids_mask)
seqs_to_remove.sum()
num_seqs_before_plasmid_filter = len(gas1)
num_seqs_before_plasmid_filter
gas1 = gas1[~seqs_to_remove].copy()
num_seqs_before_plasmid_filter - len(gas1)
len(gas1)
Explanation: Drop sequences that don't meet the plasmid count bars
End of explanation
PSEUDOCOUNT = 1
def counts_to_frequency(counts):
return counts / counts.sum()
gas1['virus_N'] = gas1.GAS1_virus_N + PSEUDOCOUNT
gas1['plasmid_N'] = gas1.GAS1_plasmid_N + PSEUDOCOUNT
gas1['virus_F'] = counts_to_frequency(gas1.virus_N)
gas1['plasmid_F'] = counts_to_frequency(gas1.plasmid_N)
Explanation: Add pseudocounts
End of explanation
gas1['viral_selection'] = numpy.log2(gas1.virus_F / gas1.plasmid_F)
assert 0 == gas1.viral_selection.isna().sum()
assert not numpy.any(numpy.isinf(gas1.viral_selection))
gas1.viral_selection.describe()
Explanation: Compute viral selection
End of explanation
# Classify the selection coeff series after fitting to a GMM
gmm_model = TwoGaussianMixtureModelLabeler(
gas1[gas1.partition.isin(['stop', 'wild_type'])].viral_selection)
gas1['viral_selection_gmm'] = gmm_model.predict(gas1.viral_selection)
# Compute the threshold for the viable class from the GMM labels
selection_coeff_threshold = gas1.loc[gas1.viral_selection_gmm == 2, 'viral_selection'].min()
print('selection coeff cutoff = %.3f' % selection_coeff_threshold)
# Add a label column
def is_viable_mutant(mutant_data):
return mutant_data['viral_selection'] > selection_coeff_threshold
gas1['is_viable'] = gas1.apply(is_viable_mutant, axis=1)
print(gas1.is_viable.mean())
Explanation: Compute GMM threshold
End of explanation
ml_generated_seqs = [
'cnn_designed_plus_rand_train_seed',
'cnn_designed_plus_rand_train_walked',
'cnn_rand_doubles_plus_single_seed',
'cnn_rand_doubles_plus_single_walked',
'cnn_standard_seed',
'cnn_standard_walked',
'lr_designed_plus_rand_train_seed',
'lr_designed_plus_rand_train_walked',
'lr_rand_doubles_plus_single_seed',
'lr_rand_doubles_plus_single_walked',
'lr_standard_seed',
'lr_standard_walked',
'rnn_designed_plus_rand_train_seed',
'rnn_designed_plus_rand_train_walked',
'rnn_rand_doubles_plus_singles_seed',
'rnn_rand_doubles_plus_singles_walked',
'rnn_standard_seed',
'rnn_standard_walked',
]
is_ml_generated_mask = gas1.partition.isin(ml_generated_seqs)
ml_gen_df = gas1[is_ml_generated_mask].copy()
non_ml_gen_df = gas1[~is_ml_generated_mask].copy()
ml_gen_df.partition.value_counts()
ml_gen_deduped = ml_gen_df.groupby('sequence').apply(
lambda dupes: dupes.loc[dupes.plasmid_N.idxmax()]).copy()
display(ml_gen_deduped.shape)
ml_gen_deduped.head()
Explanation: De-dupe model-designed sequences
Partition the sequences that should not be de-deduped
Split off the partitions for which we want to retain replicates, such as controls/etc.
End of explanation
gas1_deduped = pandas.concat([ml_gen_deduped, non_ml_gen_df], axis=0)
print(gas1_deduped.shape)
gas1_deduped.partition.value_counts()
Explanation: Concatenate de-deduped ML-generated seqs with rest
End of explanation
gas1 = gas1_deduped
gas1['num_edits'] = gas1.sequence.apply(
lambda s: Levenshtein.distance(R1_TILE21_WT_SEQ, s))
gas1.num_edits.describe()
COLUMN_SCHEMA = [
'sequence',
'partition',
'mutation_sequence',
'num_mutations',
'num_edits',
'viral_selection',
'is_viable',
]
gas1a = gas1[COLUMN_SCHEMA].copy()
Explanation: Compute edit distance for chip
End of explanation
harvard = pandas.read_csv('r0r1_with_partitions_and_labels.csv', index_col=None)
harvard = harvard.rename({
'S': 'viral_selection',
'aa_seq': 'sequence',
'mask': 'mutation_sequence',
'mut': 'num_mutations',
}, axis=1)
designed_mask = harvard.partition.isin(['min_fit', 'thresh', 'temp'])
harvard.loc[designed_mask, ['partition']] = 'designed'
harvard['num_edits'] = harvard.sequence.apply(
lambda s: Levenshtein.distance(R1_TILE21_WT_SEQ, s))
harvard.num_edits.describe()
harvard1 = harvard[COLUMN_SCHEMA].copy()
harvard1.head(3)
harvard1['chip'] = 'harvard'
gas1a['chip'] = 'gas1'
combined = pandas.concat([
harvard1,
gas1a,
], axis=0, sort=False)
print(combined.shape)
combined.partition.value_counts()
combined.head()
Explanation: Concat with training data chip
End of explanation |
6,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the library
This code tutorial shows how to estimate a 1-RDM and perform variational optimization
Step1: Generate the input files, set up quantum resources, and set up the OpdmFunctional to make measurements.
Step2: The displayed text is the output of the gradient based restricted Hartree-Fock. We define the gradient in rhf_objective and use the conjugate-gradient optimizer to optimize the basis rotation parameters. This is equivalent to doing Hartree-Fock theory from the canonical transformation perspective.
Next, we will do the following
Step3: This should print out the various energies estimated from the 1-RDM along with error bars. Generated from resampling the 1-RDM based on the estimated covariance.
Optimization
We use the sampling functionality to variationally relax the parameters of
my ansatz such that the energy is decreased.
For this we will need the augmented Hessian optimizer
The optimizer code we have takes
Step4: Each iteration prints out a variety of information that the user might find useful. Watching energies go down is known to be one of the best forms of entertainment during a shelter-in-place order.
After the optimization we can print the energy as a function of iteration number to see how close the energy gets to the true minimum. | Python Code:
# Import library functions and define a helper function
import numpy as np
import cirq
from openfermioncirq.experiments.hfvqe.gradient_hf import rhf_func_generator
from openfermioncirq.experiments.hfvqe.opdm_functionals import OpdmFunctional
from openfermioncirq.experiments.hfvqe.analysis import (compute_opdm,
mcweeny_purification,
resample_opdm,
fidelity_witness,
fidelity)
from openfermioncirq.experiments.hfvqe.third_party.higham import fixed_trace_positive_projection
from openfermioncirq.experiments.hfvqe.molecular_example import make_h6_1_3
Explanation: Using the library
This code tutorial shows how to estimate a 1-RDM and perform variational optimization
End of explanation
rhf_objective, molecule, parameters, obi, tbi = make_h6_1_3()
ansatz, energy, gradient = rhf_func_generator(rhf_objective)
# settings for quantum resources
qubits = [cirq.GridQubit(0, x) for x in range(molecule.n_orbitals)]
sampler = cirq.Simulator(dtype=np.complex128) # this can be a QuantumEngine
# OpdmFunctional contains an interface for running experiments
opdm_func = OpdmFunctional(qubits=qubits,
sampler=sampler,
constant=molecule.nuclear_repulsion,
one_body_integrals=obi,
two_body_integrals=tbi,
num_electrons=molecule.n_electrons // 2, # only simulate spin-up electrons
clean_xxyy=True,
purification=True
)
Explanation: Generate the input files, set up quantum resources, and set up the OpdmFunctional to make measurements.
End of explanation
# 1.
# default to 250_000 shots for each circuit.
# 7 circuits total, printed for your viewing pleasure
# return value is a dictionary with circuit results for each permutation
measurement_data = opdm_func.calculate_data(parameters)
# 2.
opdm, var_dict = compute_opdm(measurement_data,
return_variance=True)
opdm_pure = mcweeny_purification(opdm)
# 3.
raw_energies = []
raw_fidelity_witness = []
purified_eneriges = []
purified_fidelity_witness = []
purified_fidelity = []
true_unitary = ansatz(parameters)
nocc = molecule.n_electrons // 2
nvirt = molecule.n_orbitals - nocc
initial_fock_state = [1] * nocc + [0] * nvirt
for _ in range(1000): # 1000 repetitions of the measurement
new_opdm = resample_opdm(opdm, var_dict)
raw_energies.append(opdm_func.energy_from_opdm(new_opdm))
raw_fidelity_witness.append(
fidelity_witness(target_unitary=true_unitary,
omega=initial_fock_state,
measured_opdm=new_opdm)
)
# fix positivity and trace of sampled 1-RDM if strictly outside
# feasible set
w, v = np.linalg.eigh(new_opdm)
if len(np.where(w < 0)[0]) > 0:
new_opdm = fixed_trace_positive_projection(new_opdm, nocc)
new_opdm_pure = mcweeny_purification(new_opdm)
purified_eneriges.append(opdm_func.energy_from_opdm(new_opdm_pure))
purified_fidelity_witness.append(
fidelity_witness(target_unitary=true_unitary,
omega=initial_fock_state,
measured_opdm=new_opdm_pure)
)
purified_fidelity.append(
fidelity(target_unitary=true_unitary,
measured_opdm=new_opdm_pure)
)
print('\n\n\n\n')
print("Canonical Hartree-Fock energy ", molecule.hf_energy)
print("True energy ", energy(parameters))
print("Raw energy ", opdm_func.energy_from_opdm(opdm),
"+- ", np.std(raw_energies))
print("Raw fidelity witness ", np.mean(raw_fidelity_witness).real,
"+- ", np.std(raw_fidelity_witness))
print("purified energy ", opdm_func.energy_from_opdm(opdm_pure),
"+- ", np.std(purified_eneriges))
print("Purified fidelity witness ", np.mean(purified_fidelity_witness).real,
"+- ", np.std(purified_fidelity_witness))
print("Purified fidelity ", np.mean(purified_fidelity).real,
"+- ", np.std(purified_fidelity))
Explanation: The displayed text is the output of the gradient based restricted Hartree-Fock. We define the gradient in rhf_objective and use the conjugate-gradient optimizer to optimize the basis rotation parameters. This is equivalent to doing Hartree-Fock theory from the canonical transformation perspective.
Next, we will do the following:
Do measurements for a given set of parameters
Compute 1-RDM, variances, and purification
Compute energy, fidelities, and errorbars
End of explanation
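For orientation, McWeeny purification drives a nearly idempotent 1-RDM toward idempotency by iterating the map rho -> 3*rho^2 - 2*rho^3. A minimal sketch (the library's mcweeny_purification may differ in detail):
import numpy as np
def mcweeny_iterate(rho, tol=1e-8, max_steps=50):
    # Each step pushes the eigenvalues of rho toward 0 or 1.
    for _ in range(max_steps):
        rho_next = 3 * (rho @ rho) - 2 * (rho @ rho @ rho)
        if np.linalg.norm(rho_next - rho) < tol:
            return rho_next
        rho = rho_next
    return rho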
from openfermioncirq.experiments.hfvqe.mfopt import moving_frame_augmented_hessian_optimizer
from openfermioncirq.experiments.hfvqe.opdm_functionals import RDMGenerator
import matplotlib.pyplot as plt
rdm_generator = RDMGenerator(opdm_func, purification=True)
opdm_generator = rdm_generator.opdm_generator
result = moving_frame_augmented_hessian_optimizer(
rhf_objective=rhf_objective,
initial_parameters=parameters + 1.0E-1,
opdm_aa_measurement_func=opdm_generator,
verbose=True, delta=0.03,
max_iter=20,
hessian_update='diagonal',
rtol=0.50E-2)
Explanation: This should print out the various energies estimated from the 1-RDM along with error bars. Generated from resampling the 1-RDM based on the estimated covariance.
Optimization
We use the sampling functionality to variationally relax the parameters of
my ansatz such that the energy is decreased.
For this we will need the augmented Hessian optimizer
The optimizer code we have takes:
rhf_objective object, initial parameters,
a function that takes an n x n unitary and returns an opdm
maximum iterations,
hessian_update which indicates how much of the hessian to use
rtol which is the gradient stopping condition.
A natural thing that we will want to save is the variance dictionary of
the non-purified 1-RDM. This is accomplished by wrapping the 1-RDM
estimation code in another object that keeps track of the variance
dictionaries.
End of explanation
plt.semilogy(range(len(result.func_vals)),
np.abs(np.array(result.func_vals) - energy(parameters)),
'C0o-')
plt.xlabel("Optimization Iterations", fontsize=18)
plt.ylabel(r"$|E - E^{*}|$", fontsize=18)
plt.tight_layout()
plt.show()
Explanation: Each iteration prints out a variety of information that the user might find useful. Watching energies go down is known to be one of the best forms of entertainment during a shelter-in-place order.
After the optimization we can print the energy as a function of iteration number to see how close the energy gets to the true minimum.
End of explanation |
6,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a pandas Cheat Sheet, Part 1
Import pandas with the right name
Step1: Set all graphics from matplotlib to display inline
Step2: Display the names of the columns in the csv
Step3: Display the first 3 animals.
Step4: Sort the animals to see the 3 longest animals.
Step5: What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
Step6: Only select the dogs.
Step7: Display all of the animals that are greater than 40 cm.
Step8: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
1 inch = 2.54 cm
Step9: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
Step10: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
Step11: What's the mean length of a cat?
What's the mean length of a dog
Cats are mean but dogs are not
Step12: Use groupby to accomplish both of the above tasks at once.
Step13: Make a histogram of the length of dogs. I apologize that it is so boring.
Step14: Change your graphing style to be something else (anything else!)
Step15: Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
Step16: Make a sorted horizontal bar graph of the cats, with the larger cats on top. | Python Code:
import pandas as pd
df = pd.read_csv("07-hw-animals.csv")
Explanation: Building a pandas Cheat Sheet, Part 1
Import pandas with the right name
End of explanation
#!pip install matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
#This lets your graph show you in your notebook
df
Explanation: Set all graphics from matplotlib to display inline
End of explanation
df.columns
Explanation: Display the names of the columns in the csv
End of explanation
df.head(3)
Explanation: Display the first 3 animals.
End of explanation
df.sort_values('length', ascending=False).head(3)
Explanation: Sort the animals to see the 3 longest animals.
End of explanation
df['animal'].value_counts()
Explanation: What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
End of explanation
dog_df = df['animal'] == 'dog'
df[dog_df]
Explanation: Only select the dogs.
End of explanation
long_animals = df['length'] > 40
df[long_animals]
Explanation: Display all of the animals that are greater than 40 cm.
End of explanation
df['length_inches'] = df['length'] / 2.54
df
Explanation: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
1 inch = 2.54 cm
End of explanation
cats = df['animal'] == 'cat'
dogs = df['animal'] == 'dog'
Explanation: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
End of explanation
long_animals = df['length_inches'] > 12
df[cats & long_animals]
df[(df['length_inches'] > 12) & (df['animal'] == 'cat')]
#Amazing!
Explanation: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
End of explanation
df[cats].mean()
df[dogs].mean()
Explanation: What's the mean length of a cat?
What's the mean length of a dog
Cats are mean but dogs are not
End of explanation
df.groupby('animal').mean()
#groupby
Explanation: Use groupby to accomplish both of the above tasks at once.
End of explanation
df[dogs].plot.hist(y='length_inches')
Explanation: Make a histogram of the length of dogs. I apologize that it is so boring.
End of explanation
df[dogs].plot.bar(x='name', y='length_inches')
Explanation: Change your graphing style to be something else (anything else!)
End of explanation
df[dogs].plot.barh(x='name', y='length_inches')
#Fontaine is such an annoying name for a dog
Explanation: Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
End of explanation
df[cats].sort(['length_inches'], ascending=False).plot(kind='barh', x='name', y='length_inches')
#df[df['animal']] == 'cat'].sort_values(by='length).plot(kind='barh', x='name', y='length', legend=False)
Explanation: Make a sorted horizontal bar graph of the cats, with the larger cats on top.
End of explanation |
6,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The Cirq Developers
Step1: Notebook template
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Make a copy of this template
You will need to have access to Quantum Computing Service before running this colab.
This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments.
How to download iPython notebooks from GitHub
You can retrieve iPython notebooks in the Cirq repository by
going to the docs/ directory. For instance, this Colab template is found here. Select the file that you would like to download and then click the Raw button in the upper-right part of the window
Step3: Create an Engine variable
The following creates an engine variable which can be used to run programs under the project ID you entered above.
Step4: Example | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The Cirq Developers
End of explanation
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq --pre
print("installed cirq.")
Explanation: Notebook template
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/google/colab"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/colab.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/colab.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Setup
Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre.
End of explanation
import cirq_google as cg
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
processor_id = "" #@param {type:"string"}
from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook
device_sampler = get_qcs_objects_for_notebook(project_id, processor_id)
Explanation: Make a copy of this template
You will need to have access to Quantum Computing Service before running this colab.
This notebook can serve as a starter kit for you to run programs on Google's quantum hardware. You can download it using the directions below, open it in colab (or Jupyter), and modify it to begin your experiments.
How to download iPython notebooks from GitHub
You can retrieve iPython notebooks in the Cirq repository by
going to the docs/ directory. For instance, this Colab template is found here. Select the file that you would like to download and then click the Raw button in the upper-right part of the window:
<img src="https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/images/colab_github.png" alt="GitHub UI button to view raw file">
This will show the entire file contents. Right-click and select Save as to save this file to your computer. Make sure to save to a file with a .ipynb extension (you may need to select All files from the format dropdown instead of text). You can also get to this Colab's raw content directly
You can also retrieve the entire Cirq repository by running the following command in a terminal that has git installed:
git clone https://github.com/quantumlib/Cirq.git
How to open Google Colab
You can open a new Colab notebook from your Google Drive window or by visiting the Colab site. From the Colaboratory site, you can use the menu to upload an iPython notebook:
<img src="https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/images/colab_upload.png" alt="Google Colab's upload notebook entry in File menu.">
This will upload the ipynb file that you downloaded before. You can now run all the commands, modify it to suit your goals, and share it with others.
More Documentation Links
Quantum Engine concepts
Quantum Engine documentation
Cirq documentation
Colab documentation
Authenticate and install Cirq
For details of authentication and installation, please see Get started with Quantum Computing Service.
Note: The below code will install the latest stable release of cirq. If you need the latest and greatest features and don't mind if a few things aren't quite working correctly, you can install the pre-release version of cirq using pip install --pre cirq instead of pip install cirq to get the most up-to-date features of cirq.
Enter the Cloud project ID you'd like to use in the project_id field.
Then run the cell below (and go through the auth flow for access to the project id you entered).
<img src="https://raw.githubusercontent.com/quantumlib/Cirq/master/docs/images/run-code-block.png" alt="Quantum Engine console">
End of explanation
import cirq
import cirq_google
from google.auth.exceptions import DefaultCredentialsError
from google.api_core.exceptions import PermissionDenied
# Create an Engine object to use, providing the project id and the args
try:
engine = cirq_google.get_engine()
engine.list_processors()
print(f"Successful authentication using project {project_id}!")
except DefaultCredentialsError as err:
print("Could not authenticate to Google Quantum Computing Service.")
print(" Tips: If you are using Colab: make sure the previous cell was executed successfully.")
print(" If this notebook is not in Colab (e.g. Jupyter notebook), make sure gcloud is installed and `gcloud auth application-default login` was executed.")
print()
print("Error message:")
print(err)
except PermissionDenied as err:
print(f"While you are authenticated to Google Cloud it seems the project '{project_id}' does not exist or does not have the Quantum Engine API enabled.")
print("Error message:")
print(err)
Explanation: Create an Engine variable
The following creates an engine variable which can be used to run programs under the project ID you entered above.
End of explanation
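As an optional follow-up (a sketch, not part of the original template), you can print the processors visible to this project; the processor_id attribute on the returned objects is assumed here.
# Optional sketch: show which processors this project can access
try:
    for processor in engine.list_processors():
        print(processor.processor_id)
except Exception as err:
    print("Could not list processors:", err)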
# A simple example.
q = cirq.GridQubit(5, 2)
circuit = cirq.Circuit(cirq.X(q)**0.5, cirq.measure(q, key='m'))
job = engine.run_sweep(
program=circuit,
repetitions=10000,
processor_ids=['rainbow'],
gate_set=cirq_google.SYC_GATESET)
results = [str(int(b)) for b in job.results()[0].measurements['m'][:, 0]]
print('Success! Results:')
print(''.join(results))
Explanation: Example
End of explanation |
6,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create test datasets
In this case we are making n=5 "gaussian blobs" with st_dev=1.
We will look at 3 cases,
* easy case
Step1: Create ko
Step2: Create a Collection and add ko
Step3: Load a Collection and read ko members
Step4: Get ko By ID or by Query
Step5: Classifier Testing
Step6: Clustering
Find the optimal number of clusters
Automatic grouping of similar objects into sets.
Applications
Step7: Silhouette Averages
Step8: K-Means Clustering
Step9: Deep Neural Network | Python Code:
from sklearn.datasets import make_blobs, make_moons  # assumed import for the generators used below
# Blobs with 5 centers -- slight overlaps
num_centers = 5
st_dev = 1
n_samples = 1000
noise = 0.12
#Easy Case
#x,y = make_blobs(n_samples=n_samples, centers=num_centers, cluster_std=st_dev, random_state=10)
#Medium Case
x,y = make_blobs(n_samples=n_samples, centers=num_centers, cluster_std=st_dev, random_state=0)
dataset = {"x":x.tolist(), "y":y.tolist()}
#Harder Case
# x,y = make_moons(n_samples=n_samples, noise=noise)
# dataset = {"x":x.tolist(), "y":y.tolist()}
Explanation: Create test datasets
In this case we are making n=5 "gaussian blobs" with st_dev=1.
We will look at 3 cases,
* easy case: The blobs are well separated
* medium case: Blobs overlap slightly
* harder case: Blobs are moon shaped and intersect
End of explanation
sample_ko = {"owner":"blaiszik","key":"test","object":"test","uri":["http://google.com"],"data":dataset}
r = create_ko(sample_ko)
result = r.json()
new_id = result['_id']
print "Created ko: %s"%(result['_id'])
Explanation: Create ko
End of explanation
sample_collection = {"owner":"blaiszik","name":"aps-tutorial", "uri":[], "tag":[{"key":"tutorial-new6", "value":None}]}
sample_collection['member'] = [{"data_type":"ko", "_id": result['_id']}
]
sample_collection['tag']
r = create_collection([sample_collection])
Explanation: Create a Collection and add ko
End of explanation
# Get a collection based on a tag search
r = get_collection(query=tag_search({"key":"tutorial-new6"}))
result = r.json()[0]
#Read collection ko members
ids = [member['_id'] if member['data_type']=='ko' else None
for member in result['member']]
r = get_ko(id=ids)
r.json()
Explanation: Load a Collection and read ko members
End of explanation
print "Getting ko %s"%(new_id)
r = get_ko(new_id)
# print "Getting ko by query:"
# query = {"owner":"wilde"}
# r = get_ko(query=query)
result = r.json()
#r = get_ko(id=["5679aad366304c16141be297","5679aad366304c16141be297"])
r.json()
df1 = pd.DataFrame(result['data']['x'], columns=["x1","x2"])
df2 = pd.DataFrame(result['data']['y'], columns=["y"])
df = pd.concat([df1,df2], axis=1)
plt.scatter(df['x1'], df['x2'], c=df['y'], s=50, cmap=plt.cm.RdBu_r);
sns.despine()
Explanation: Get ko By ID or by Query
End of explanation
clfs = [
(ExtraTreesClassifier(n_estimators=10), "Extra Trees"),
(RandomForestClassifier(n_estimators=10), "Random Forest"),
(GaussianNB(), "Gaussian Naive-Bayes")
]
for i, (clf,title) in enumerate(clfs):
clf.fit(df[['x1','x2']], df['y'])
fig = sns.lmplot(x="x1", y="x2", data=df, order=1, hue="y", fit_reg=False, scatter_kws={"s": 50});
fig.ax.set_xlabel('x1')
fig.ax.set_ylabel('x2')
fig.ax.set_title(title)
## Plot decision contour or probability function
h = .01 # step size in the mesh
x_min, x_max = df['x1'].min() - 1, df['x1'].max() + 1
y_min, y_max = df['x2'].min() - 1, df['x2'].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
fig.ax.contourf(xx, yy, Z, alpha=0.2, cmap=plt.cm.RdBu_r)
Explanation: Classifier Testing
End of explanation
distortions = []
silhouette_range = range(1,10)
for i in silhouette_range:
km = KMeans(n_clusters=i,
init='k-means++',
n_init=10,
max_iter=300,
random_state=0)
km.fit(df[['x1','x2']])
distortions.append(km.inertia_)
plt.plot(silhouette_range, distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.tight_layout()
#plt.savefig('./figures/elbow.png', dpi=300)
sns.despine()
plt.show()
Explanation: Clustering
Find the optimal number of clusters
Automatic grouping of similar objects into sets.
Applications: Customer segmentation, Grouping experiment outcomes
End of explanation
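One way to read the elbow programmatically (a sketch; the original simply eyeballs the plot) is to find where the distortion curve bends most sharply, for example with second differences.
# Sketch: pick the k where the distortion curve bends the most
import numpy as np
second_diffs = np.diff(distortions, 2)
suggested_k = silhouette_range[int(np.argmax(second_diffs)) + 1]
print "Suggested number of clusters: %d" % suggested_k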
import numpy as np
from matplotlib import cm
from sklearn.metrics import silhouette_samples
silhouette_avg = []
silhouette_range = range(2,10)
for i in silhouette_range:
km = KMeans(n_clusters=i,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(df[['x1','x2']])
silhouette_vals = silhouette_samples(df[['x1','x2']], y_km, metric='euclidean')
silhouette_avg.append(np.mean(silhouette_vals))
plt.plot(silhouette_range, silhouette_avg , marker='o')
sns.despine()
Explanation: Silhouette Averages
End of explanation
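For comparison (a sketch), scikit-learn's silhouette_score computes the same average in a single call for a chosen k:
# Sketch: one-call average silhouette for the k used to generate the data
from sklearn.metrics import silhouette_score
km = KMeans(n_clusters=num_centers, init='k-means++', n_init=10, random_state=0)
labels = km.fit_predict(df[['x1','x2']])
print "Average silhouette for k=%d: %.3f" % (num_centers, silhouette_score(df[['x1','x2']], labels))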
show_decision = True
show_centroids = True
#K-Means
clr = KMeans(n_clusters=num_centers)
y_pred = clr.fit_predict(df[['x1','x2']])
##Decision Boundary
if show_decision:
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02 # point spacing in the mesh [x_min, x_max]x[y_min, y_max]
# Plot the decision boundary. For that, we will assign a color to each point in the mesh.
x_min, x_max = df['x1'].min() - 1, df['x1'].max() + 1
y_min, y_max = df['x2'].min() - 1, df['x2'].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clr.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.RdBu_r,
aspect='auto', origin='lower', alpha=0.25, zorder=0)
sns.despine()
plt.scatter(df['x1'],df['x2'], c=df['y'], cmap=plt.cm.RdBu_r, zorder=1)
ax = plt.gca()
ax.set_xlabel('x1')
ax.set_ylabel('x2')
if show_centroids:
# Plot the centroids as dark blue circles
centroids = clr.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='o', s=100, linewidths=8,
color='darkblue', zorder=2)
Explanation: K-Means Clustering
End of explanation
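Because the simulated labels are known here, a quick sanity check (a sketch, not in the original) compares the K-Means assignment against them with the adjusted Rand index, which is insensitive to label permutations.
# Sketch: agreement between K-Means clusters and the simulated labels
from sklearn.metrics import adjusted_rand_score
print "Adjusted Rand index: %.3f" % adjusted_rand_score(df['y'], y_pred)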
df[['x1','x2']].info()
import skflow
from sklearn import datasets, metrics
iris = datasets.load_iris()
clf = skflow.TensorFlowDNNClassifier(hidden_units=[100, 200, 100], n_classes=3)  # iris has 3 classes
clf.fit(iris.data, iris.target)
score = metrics.accuracy_score(clf.predict(iris.data), iris.target)
print("Accuracy: %f" % score)
Explanation: Deep Neural Network
End of explanation |
6,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="AW&H2015.tiff" style="float
Step1: Setup a New Directory and Change Paths
For this tutorial, we will work in the 21_FlopyIntro directory, which is located up one folder and over to the Data folder. We can use some fancy Python tools to help us manage the directory creation. Note that if you encounter path problems with this workbook, you can stop and then restart the kernel and the paths will be reset.
Step2: Define the Model Extent, Grid Resolution, and Characteristics
It is normally good practice to group things that you might want to change into a single code block. This makes it easier to make changes and rerun the code.
Step3: Create the MODFLOW Model Object
Create a flopy MODFLOW object
Step4: Discretization Package
Create a flopy discretization package object
Step5: Basic Package
Create a flopy basic package object
Step6: Layer Property Flow Package
Create a flopy layer property flow package object
Step7: Output Control
Create a flopy output control object
Step8: Preconditioned Conjugate Gradient Solver
Create a flopy pcg package object
Step9: Writing the MODFLOW Input Files
Before we create the model input datasets, we can do some directory cleanup to make sure that we don't accidentally use old files.
Step10: Yup. It's that simple, the model datasets are written using a single command (mf.write_input).
Check in the model working directory and verify that the input files have been created. Or you might just add another cell, right after this one, that prints a list of all the files in our model directory. The path we are working in is returned from this next block.
Step11: Running the Model
The flopy model object has a run_model method that can be used to run the model. Here we use run_model, which will write the MODFLOW output to the notebook.
Step12: Post Processing the Results
To read heads from the MODFLOW binary output file, we can use the flopy.utils.binaryfile module. Specifically, we can use the HeadFile object from that module to extract head data arrays. | Python Code:
%matplotlib inline
import sys
import os
import shutil
import numpy as np
from subprocess import check_output
# Import flopy
import flopy
Explanation: <img src="AW&H2015.tiff" style="float: left">
<img src="flopylogo.png" style="float: center">
Problem P4.1 Flopy Background and Toth (1962) Flow System
Pages 171-172 of Anderson, Woessner and Hunt (2015) describe a classic groundwater flow problem, that of Toth (1962). In this iPython notebook we will build, run, and visualize this problem. In order to do this with less programming, we will also take advantage of a powerful set of Python tools called "Flopy" developed for the USGS groundwater flow model MODFLOW. Flopy is a set of python scripts for writing MODFLOW data sets and reading MODFLOW binary output files. Flopy Version 3 is served from Github and can be accessed here.
Some things to keep in mind:
* Flopy is still in development. It works for many MODFLOW packages. It also works for MT3DMS and SEAWAT. Some package options, and some packages, however, are not supported yet.
* Flopy is primarily a writer and reader of MODFLOW data sets. It does not intersect grids with spatial features, or perform time series interpolation. It is up to the user to do this. As part of this class, we will show how this can be done within the Python environment.
* Preliminary documentation for Flopy can be accessed here.
This tutorial is based on Toth (1962). Anderson et al. simplified the cross-sectional view to look like this:
<img src="P4.1_figure.tiff" style="float: center">
Below is an iPython Notebook that builds a MODFLOW model of the Toth (1962) flow system and plots results. See the Github wiki associated with this Chapter for information on one suggested installation and setup configuration for Python and iPython Notebook.
[Acknowledgements: This tutorial was created by Randy Hunt and all failings are mine. The exercise here has benefited greatly from the online Flopy tutorial and example notebooks developed by Chris Langevin and Joe Hughes for the USGS Spring 2015 Python Training course GW1774]
Creating the Model
In this example, we will create a simple groundwater flow model by following the tutorial included on the Flopy website. We will make a few small changes so that the tutorial works with our file structure.
Visit the tutorial website here.
Setup the Notebook Environment and Import Flopy
Load a few standard libraries, and then load flopy.
End of explanation
# Set the name of the path to the model working directory
dirname = "P4-1_Toth"
datapath = os.getcwd()
modelpath = os.path.join(datapath, dirname)
print 'Name of model path: ', modelpath
# Now let's check if this directory exists. If not, then we will create it.
if os.path.exists(modelpath):
print 'Model working directory already exists.'
else:
print 'Creating model working directory.'
os.mkdir(modelpath)
Explanation: Setup a New Directory and Change Paths
For this tutorial, we will work in the 21_FlopyIntro directory, which is located up one folder and over to the Data folder. We can use some fancy Python tools to help us manage the directory creation. Note that if you encounter path problems with this workbook, you can stop and then restart the kernel and the paths will be reset.
End of explanation
# model domain and grid definition
# for clarity, user entered variables are all caps; python syntax are lower case or mixed case
# we will use a layer orientation profile for easy plotting (see Box 4.2 on page 126)
LX = 200.
LY = 100.
ZTOP = 1. # the "thickness" of the profile will be 1 m (= ZTOP - ZBOT)
ZBOT = 0.
NLAY = 1
NROW = 5
NCOL = 10
DELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)
DELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)
DELV = (ZTOP - ZBOT) / NLAY
BOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)
HK = 10.
VKA = 1.
print "DELR =", DELR, " DELC =", DELC, ' DELV =', DELV
print "BOTM =", BOTM
Explanation: Define the Model Extent, Grid Resolution, and Characteristics
It is normally good practice to group things that you might want to change into a single code block. This makes it easier to make changes and rerun the code.
End of explanation
# Assign name and create modflow model object
modelname = 'P4-1'
exe_name = os.path.join(datapath, 'mf2005')
print 'Model executable: ', exe_name
MF = flopy.modflow.Modflow(modelname, exe_name=exe_name, model_ws=modelpath)
Explanation: Create the MODFLOW Model Object
Create a flopy MODFLOW object: flopy.modflow.Modflow.
End of explanation
# Create the discretization object
TOP = np.ones((NROW, NCOL),dtype=np.float)
DIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,
top=TOP, botm=BOTM[1:], laycbd=0)
# print DIS_PACKAGE uncomment this to see information about the flopy object
Explanation: Discretization Package
Create a flopy discretization package object: flopy.modflow.ModflowDis.
End of explanation
# Variables for the BAS package
IBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)
# make the top of the profile specified head by setting the IBOUND = -1
IBOUND[:, 0, :] = -1 #don't forget arrays are zero-based!
print IBOUND
STRT = 100 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 100 through out model domain
STRT[:, 0, 0] = 100. # the function from Toth is h = 0.05x + 100, so
STRT[:, 0, 1] = 0.05*20+100
STRT[:, 0, 2] = 0.05*40+100
STRT[:, 0, 3] = 0.05*60+100
STRT[:, 0, 4] = 0.05*80+100
STRT[:, 0, 5] = 0.05*100+100
STRT[:, 0, 6] = 0.05*120+100
STRT[:, 0, 7] = 0.05*140+100
STRT[:, 0, 8] = 0.05*160+100
STRT[:, 0, 9] = 0.05*180+100
print STRT
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object
Explanation: Basic Package
Create a flopy basic package object: flopy.modflow.ModflowBas.
End of explanation
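The starting heads above are typed in column by column; an equivalent vectorized assignment (a sketch that reproduces the same h = 0.05x + 100 values at x = 0, 20, ..., 180 m) would be:
# Sketch: same Toth boundary heads without typing each column
x_nodes = DELR * np.arange(NCOL)  # 0, 20, ..., 180 m, matching the values above
STRT[:, 0, :] = 100. + 0.05 * x_nodes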
LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, hk=HK, vka=VKA) # we defined the K and anisotropy at top of file
# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object
Explanation: Layer Property Flow Package
Create a flopy layer property flow package object: flopy.modflow.ModflowLpf.
End of explanation
OC_PACKAGE = flopy.modflow.ModflowOc(MF) # we'll use the defaults for the model output
# print OC_PACKAGE # uncomment this at far left to see the information about the flopy OC object
Explanation: Output Control
Create a flopy output control object: flopy.modflow.ModflowOc.
End of explanation
PCG_PACKAGE = flopy.modflow.ModflowPcg(MF) # we'll use the defaults for the PCG solver
# print PCG_PACKAGE # uncomment this at far left to see the information about the flopy PCG object
Explanation: Preconditioned Conjugate Gradient Solver
Create a flopy pcg package object: flopy.modflow.ModflowPcg.
End of explanation
#Before writing input, destroy all files in folder
#This will prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files
MF.write_input()
Explanation: Writing the MODFLOW Input Files
Before we create the model input datasets, we can do some directory cleanup to make sure that we don't accidentally use old files.
End of explanation
# return current working directory
modelpath
Explanation: Yup. It's that simple, the model datasets are written using a single command (mf.write_input).
Check in the model working directory and verify that the input files have been created. Or you might just add another cell, right after this one, that prints a list of all the files in our model directory. The path we are working in is returned from this next block.
End of explanation
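As suggested above, a small cell like this (a sketch) lists the input files that write_input() just created:
# Sketch: list the MODFLOW input files in the model working directory
for filename in sorted(os.listdir(modelpath)):
    if modelname in filename:
        print filename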
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in Ipython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
Explanation: Running the Model
The flopy model object has a run_model method that can be used to run the model. Here we use run_model, which will write the MODFLOW output to the notebook.
End of explanation
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
#Create a contour plot of heads
FIG = plt.figure(figsize=(15,15))
#setup contour levels and plot extent
LEVELS = np.arange(100, 109, 0.5)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
print 'Contour Levels: ', LEVELS
print 'Extent of domain: ', EXTENT
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.25)
Explanation: Post Processing the Results
To read heads from the MODFLOW binary output file, we can use the flopy.utils.binaryfile module. Specifically, we can use the HeadFile object from that module to extract head data arrays.
End of explanation |
6,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<H2>Complex sine waves</H2>
To make a wave imaginary, we simply multiply it by the imaginary operator, just as we do with real numbers to get the equivalent imaginary numbers. A complex wave has both a real part and an imaginary part. In the complex plane, the real axis carries the cosine and the imaginary axis carries the sine, so we can express the complex wave as the combination of real (cosine) and imaginary (sine) parts
Step1: A complex wave is a time series of complex numbers. As such, it has a real part and an imaginary part at every time point. To visualize a complex wave, we can use a 3D representation. If we plot the imaginary part vs time, we will see a sine wave, and if we plot the real part vs time, we will see a cosine wave. Looking through the real axis you will see the cosine, and through the imaginary axis the sine.
Step2: If you try to compute the Fourier transform with only real-valued sines (or cosines), you will get a result
that depends on the phase offset between the sine wave and the signal. Even a small variation in phase gives a different value.
Using complex waves is the solution to this.
<H2>Dot product of sine waves</H2>
Step3: Same amplitudes and phases but different frequencies yield a dot product of zero. This only holds when the frequency steps are 1 or 0.5, which is
because of the sampling rate.
The same frequency but phases differing by pi over two also yields zero. | Python Code:
t = np.arange(0,np.pi, 1/30000)
freq = 2 # in Hz
phi = 0
amp = 1
k = 2*np.pi*freq*t + phi
cwv = amp * np.exp(1j * k) # complex sine wave: cos(k) + j*sin(k), matching e^{jk} below
fig, ax = plt.subplots(2,1, figsize=(8,4), sharex=True)
ax[0].plot(t, np.real(cwv), lw=1.5)
ax[0].plot(t, np.imag(cwv), lw=0.5, color='orange')
ax[0].set_title('real (cosine)', color='C0')
ax[1].plot(t, np.imag(cwv), color='orange', lw=1.5)
ax[1].plot(t, np.real(cwv), lw=0.5, color='C0')
ax[1].set_title('imaginary (sine)', color='orange')
for myax in ax:
myax.set_yticks(range(-2,2,1))
myax.set_xlabel('Time (sec)')
myax.set_ylabel('Amplitude (AU)')
Explanation: <H2>Complex sine waves</H2>
To make a wave imaginary, we simply multiply it by the imaginary operator, just as we do with real numbers to get the equivalent imaginary numbers. A complex wave has both a real part and an imaginary part. In the complex plane, the real axis carries the cosine and the imaginary axis carries the sine, so we can express the complex wave as the combination of real (cosine) and imaginary (sine) parts:
$$
\cos(k) + j\sin(k)
$$
We can use Euler's formula to write this expression in exponential form:
$$
e^{j k} = \cos(k) + j\sin(k)
$$
and substitute $(2 \pi \upsilon t + \phi)$ for $k$.
$$
e^{j (2 \pi \upsilon t + \phi)} = \cos(2 \pi \upsilon t + \phi) + j\sin(2 \pi \upsilon t + \phi)
$$
With it, we have another convenient way to express a complex number, because we have the angle ($k$) and the distance to the origin ($m$):
$$
me^{j k} = m\cos(k) + mj\sin(k)
$$
End of explanation
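A quick numerical check of Euler's formula with the k defined above (a sketch; it simply confirms the identity used here):
# Sketch: verify e^(jk) = cos(k) + j*sin(k) numerically
euler_lhs = np.exp(1j * k)
euler_rhs = np.cos(k) + 1j * np.sin(k)
print(np.allclose(euler_lhs, euler_rhs))  # expect True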
from mpl_toolkits.mplot3d import Axes3D # <--- This is important for 3d plotting
fig = plt.figure()
ax = fig.gca(projection ='3d')
ax.plot(t, cwv.real, cwv.imag)
ax.set_xlabel('Time (s)'), ax.set_ylabel('Real part'), ax.set_zlabel('Imaginary part')
Explanation: A complex wave is a time series of complex numbers. As such, it has a real part and an imaginary part at every time point. To visualize a complex wave, we can use a 3D representation. If we plot the imaginary part vs time, we will see a sine wave, and if we plot the real part vs time, we will see a cosine wave. Looking through the real axis you will see the cosine, and through the imaginary axis the sine.
End of explanation
dt = 1/30000 # sampling interval in sec
t = np.arange(0,4, dt)
myparams1 = dict(amp = 2, freq = 5, phi = np.pi/2)
myparams2 = dict(amp = 2, freq = 5, phi = np.pi/2)
sinew1 = mysine(t, **myparams1)
sinew2 = mysine(t, **myparams2)
fig, ax = plt.subplots(1,1, figsize=(16,4))
ax.plot(t, sinew1, lw = 2)
ax.plot(t, sinew2, color='orange', lw=2)
ax.set_ylim(-10,10)
ax.text(3, 7.5, '{:2.4f}'.format(np.dot(sinew1, sinew2)), fontsize=15)
Explanation: If you try to compute the Fourier transform with only real-valued sines (or cosines), you will get a result
that depends on the phase offset between the sine wave and the signal. Even a small variation in phase gives a different value.
Using complex waves is the solution to this.
<H2>Dot product of sine waves</H2>
End of explanation
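The helper mysine used below is not shown in this excerpt; a minimal definition consistent with how it is called (an assumption), followed by a short sketch of the phase point made above:
# Assumed helper, matching the call signature mysine(t, amp=..., freq=..., phi=...)
def mysine(t, amp, freq, phi):
    return amp * np.sin(2 * np.pi * freq * t + phi)
# Sketch: |dot product with a complex wave| ignores the signal's phase; the real dot product does not
tt = np.arange(0, 2, 1/1000.)
cref = np.exp(1j * 2 * np.pi * 5 * tt)
for shift in [0, np.pi/4, np.pi/2]:
    sig = np.sin(2 * np.pi * 5 * tt + shift)
    print(shift, np.abs(np.vdot(cref, sig)), np.dot(np.sin(2 * np.pi * 5 * tt), sig))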
#
myparams1 = dict(amp = 2, freq = 5, phi = np.pi/2)
myparams2 = dict(amp = 2, freq = 5, phi = 2*np.pi/2) # orthogonal (phase differs by pi/2 from sinew1)
sinew1 = mysine(t, **myparams1)
sinew2 = mysine(t, **myparams2)
fig, ax = plt.subplots(1,1, figsize=(16,4))
ax.plot(t, sinew1, lw = 2)
ax.plot(t, sinew2, color='orange', lw=2)
ax.set_ylim(-10,10)
ax.text(3, 7.5, '{:2.4f}'.format(np.dot(sinew1, sinew2)), fontsize=15);
t = np.arange(-1., 1., 1/1000.)
theta = 2*np.pi/4
morlet = lambda f : np.sin(2*np.pi*f*t + theta) * np.exp( (-t**2)/ 0.1) # Gaussian is exp(-t^2/stdev)
signal = morlet(5)
fval = np.arange(2,10,0.5)
fig, ax = plt.subplots(2,1, figsize=(16,8))
dotlist = list()
for i in fval:
dotlist.append(np.dot(signal,morlet(i)))
ax[0].plot(t, morlet(i), color='gray', alpha=.3)
ax[0].plot(t, signal, lw = 2)
ax[0].set_xlabel('Time (sec)')
ax[1].stem(fval, dotlist)
ax[1].set_ylabel('Dot product')
Explanation: Same amplitudes and phases but different frequencies yield a dot product of zero. This only holds when the frequency steps are 1 or 0.5, which is
because of the sampling rate.
The same frequency but phases differing by pi over two also yields zero.
End of explanation |
6,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec trained on recipe instructions
Objectives
Create word embeddings for recipes.
Use word vectors for (traditional) segmentation, classification, and retrieval of recipes.
Data Preparation
Step1: Input Normalization
does not need specific filtering of special characters, stop words, etc.
Step2: Word2Vec Model
see http
Step3: Training CBOW model
takes about 3 minutes for example data.
Step4: Model Details
Step5: Word Similarity
Step6: Training skip-gram model
takes about 14 minutes for example data | Python Code:
import re # Regular Expressions
import pandas as pd # DataFrames & Manipulation
from gensim.models.word2vec import Word2Vec
train_input = "../data/recipes.tsv.bz2"
# preserve empty strings (http://pandas-docs.github.io/pandas-docs-travis/io.html#na-values)
train = pd.read_csv(train_input, delimiter="\t", quoting=3, encoding="utf-8", keep_default_na=False)
print "loaded %d documents." % len(train)
train[['title', 'instructions']].head()
Explanation: Word2Vec trained on recipe instructions
Objectives
Create word embeddings for recipes.
Use word vectors for (traditional) segmentation, classification, and retrieval of recipes.
Data Preparation
End of explanation
def normalize(text):
norm_text = text.lower()
for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
norm_text = norm_text.replace(char, ' ' + char + ' ')
return norm_text
sentences = [normalize(text).split() for text in train['instructions']]
print "%d documents in corpus" % len(sentences)
Explanation: Input Normalization
does not need specific filtering of special characters, stop words, etc.
End of explanation
num_features = 100 # Word vector dimensionality
min_word_count = 10 # Minimum word count
num_workers = 4 # Number of threads to run in parallel
context = 10 # Context window size
downsampling = 1e-3 # Downsample setting for frequent words
# Import the built-in logging module and configure it so that Word2Vec creates nice output messages
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Word2Vec Model
see http://radimrehurek.com/gensim/models/word2vec.html
class gensim.models.word2vec.Word2Vec(
-> sentences=None, # iterable of sentences (list of words)
-> size=100, # feature vector dimension
alpha=0.025, # initial learning rate (drops to min_alpha during training)
-> window=5, # maximum distance between current and predicted word
-> min_count=5, # ignore words with lower total frequency
max_vocab_size=None, # limit RAM to most frequent words (1M words ~ 1GB)
sample=0.001, # threshold for random downsampling of high frequency words
seed=1, # for random number generator
-> workers=3, # number of worker threads
min_alpha=0.0001, # used for linear learning-rate decay
-> sg=0, # training algorithm - (sg=0) CBOW, (sg=1) skip-gram
hs=0, # use hierarchical softmax (if 1), or negative sampling (default)
negative=5, # number of noise words used for negative sampling
cbow_mean=1, # use sum (0) of context word vector or mean (1, default)
hashfxn=<built-in function hash>,
iter=5, # number of iterations (epochs) over the corpus
null_word=0,
trim_rule=None, # custom vocabulary filtering
sorted_vocab=1, # sort vocab by descending word frequency
batch_words=10000 # size of batches (in words) passed to worker threads
)
Define model training parameters
End of explanation
print "Training CBOW model..."
model = Word2Vec(
sentences,
workers=num_workers,
size=num_features,
min_count = min_word_count,
window = context,
sample = downsampling)
# make the model much more memory-efficient.
model.init_sims(replace=True)
model_name = "model-w2v_cbow_%dfeatures_%dminwords_%dcontext" % (num_features, min_word_count, context)
model.save(model_name)
Explanation: Training CBOW model
takes about 3 minutes for example data.
End of explanation
print "%d words in vocabulary." % len(model.wv.vocab)
vocab = [(k, v.count) for k, v in model.wv.vocab.items()]
pd.DataFrame.from_records(vocab, columns=['word', 'count']).sort_values('count', ascending=False).reset_index(drop=True)
Explanation: Model Details
End of explanation
model.most_similar("pasta", topn=20)
model.most_similar("ofen")
Explanation: Word Similarity
End of explanation
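A couple of related lookups (a sketch; the query words are placeholders and must exist in this recipe vocabulary):
# Sketch: pairwise similarity and odd-one-out with the trained CBOW model
print model.similarity("pasta", "ofen")
print model.doesnt_match(["pasta", "nudeln", "ofen"])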
print "Training skip-gram model..."
model2 = Word2Vec(
sentences,
sg = 1,
hs = 1,
workers=num_workers,
size=num_features,
min_count = min_word_count,
window = context,
sample = downsampling)
# make the model much more memory-efficient.
model2.init_sims(replace=True)
model_name = "recipes_skip-gram_%dfeatures_%dminwords_%dcontext" % (num_features, min_word_count, context)
model2.save(model_name)
model2.most_similar("pasta")
model2.most_similar("ofen")
Explanation: Training skip-gram model
takes about 14 minutes for example data
End of explanation |
6,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cheat Sheet
Step1: To print multiple strings, import print_function to prevent Py2 from interpreting it as a tuple
Step2: Raising exceptions
Step3: Raising exceptions with a traceback
Step4: Exception chaining (PEP 3134)
Step5: Catching exceptions
Step6: Division
Integer division (rounding down)
Step7: "True division" (float division)
Step8: "Old division" (i.e. compatible with Py2 behaviour)
Step9: Long integers
Short integers are gone in Python 3 and long has become int (without the trailing L in the repr).
Step10: To test whether a value is an integer (of any kind)
Step11: Octal constants
Step12: Backtick repr
Step13: Metaclasses
Step14: Strings and bytes
Unicode (text) string literals
If you are upgrading an existing Python 2 codebase, it may be preferable to mark up all string literals as unicode explicitly with u prefixes
Step15: The futurize and python-modernize tools do not currently offer an option to do this automatically.
If you are writing code for a new project or new codebase, you can use this idiom to make all string literals in a module unicode strings
Step16: See http
Step17: To loop over a byte-string with possible high-bit characters, obtaining each character as a byte-string of length 1
Step18: As an alternative, chr() and .encode('latin-1') can be used to convert an int into a 1-char byte string
Step19: basestring
Step20: unicode
Step21: StringIO
Step22: Imports relative to a package
Suppose the package is
Step23: Dictionaries
Step24: Iterating through dict keys/values/items
Iterable dict keys
Step25: Iterable dict values
Step26: Iterable dict items
Step27: dict keys/values/items as a list
dict keys as a list
Step28: dict values as a list
Step29: dict items as a list
Step30: Custom class behaviour
Custom iterators
Step31: Custom __str__ methods
Step32: Custom __nonzero__ vs __bool__ method
Step33: Lists versus iterators
xrange
Step34: range
Step35: map
Step36: imap
Step37: zip, izip
As above with zip and itertools.izip.
filter, ifilter
As above with filter and itertools.ifilter too.
Other builtins
File IO with open()
Step38: reduce()
Step39: raw_input()
Step40: input()
Step41: Warning
Step42: exec
Step43: But note that Py3's exec() is less powerful (and less dangerous) than Py2's exec statement.
execfile()
Step44: unichr()
Step45: intern()
Step46: apply()
Step47: chr()
Step48: cmp()
Step49: reload()
Step50: Standard library
dbm modules
Step51: commands / subprocess modules
Step52: subprocess.check_output()
Step53: collections
Step54: StringIO module
Step55: http module
Step56: xmlrpc module
Step57: html escaping and entities
Step58: html parsing
Step59: urllib module
urllib is the hardest module to use from Python 2/3 compatible code. You may like to use Requests (http
Step60: Tkinter
Step61: socketserver
Step62: copy_reg, copyreg
Step63: configparser
Step64: queue
Step65: repr, reprlib
Step66: UserDict, UserList, UserString
Step67: itertools | Python Code:
# Python 2 only:
print 'Hello'
# Python 2 and 3:
print('Hello')
Explanation: Cheat Sheet: Writing Python 2-3 compatible code
Copyright (c): 2013-2019 Python Charmers Pty Ltd, Australia.
Author: Ed Schofield.
Licence: Creative Commons Attribution.
A PDF version is here: http://python-future.org/compatible_idioms.pdf
This notebook shows you idioms for writing future-proof code that is compatible with both versions of Python: 2 and 3. It accompanies Ed Schofield's talk at PyCon AU 2014, "Writing 2/3 compatible code". (The video is here: http://www.youtube.com/watch?v=KOqk8j11aAI&t=10m14s.)
Minimum versions:
Python 2: 2.6+
Python 3: 3.3+
Setup
The imports below refer to these pip-installable packages on PyPI:
import future # pip install future
import builtins # pip install future
import past # pip install future
import six # pip install six
The following scripts are also pip-installable:
futurize # pip install future
pasteurize # pip install future
See http://python-future.org and https://pythonhosted.org/six/ for more information.
Essential syntax differences
print
End of explanation
# Python 2 only:
print 'Hello', 'Guido'
# Python 2 and 3:
from __future__ import print_function # (at top of module)
print('Hello', 'Guido')
# Python 2 only:
print >> sys.stderr, 'Hello'
# Python 2 and 3:
from __future__ import print_function
print('Hello', file=sys.stderr)
# Python 2 only:
print 'Hello',
# Python 2 and 3:
from __future__ import print_function
print('Hello', end='')
Explanation: To print multiple strings, import print_function to prevent Py2 from interpreting it as a tuple:
End of explanation
# Python 2 only:
raise ValueError, "dodgy value"
# Python 2 and 3:
raise ValueError("dodgy value")
Explanation: Raising exceptions
End of explanation
# Python 2 only:
traceback = sys.exc_info()[2]
raise ValueError, "dodgy value", traceback
# Python 3 only:
raise ValueError("dodgy value").with_traceback()
# Python 2 and 3: option 1
from six import reraise as raise_
# or
from future.utils import raise_
traceback = sys.exc_info()[2]
raise_(ValueError, "dodgy value", traceback)
# Python 2 and 3: option 2
from future.utils import raise_with_traceback
raise_with_traceback(ValueError("dodgy value"))
Explanation: Raising exceptions with a traceback:
End of explanation
# Setup:
class DatabaseError(Exception):
pass
# Python 3 only
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise DatabaseError('failed to open') from exc
# Python 2 and 3:
from future.utils import raise_from
class FileDatabase:
def __init__(self, filename):
try:
self.file = open(filename)
except IOError as exc:
raise_from(DatabaseError('failed to open'), exc)
# Testing the above:
try:
fd = FileDatabase('non_existent_file.txt')
except Exception as e:
assert isinstance(e.__cause__, IOError) # FileNotFoundError on Py3.3+ inherits from IOError
Explanation: Exception chaining (PEP 3134):
End of explanation
# Python 2 only:
try:
...
except ValueError, e:
...
# Python 2 and 3:
try:
...
except ValueError as e:
...
Explanation: Catching exceptions
End of explanation
# Python 2 only:
assert 2 / 3 == 0
# Python 2 and 3:
assert 2 // 3 == 0
Explanation: Division
Integer division (rounding down):
End of explanation
# Python 3 only:
assert 3 / 2 == 1.5
# Python 2 and 3:
from __future__ import division # (at top of module)
assert 3 / 2 == 1.5
Explanation: "True division" (float division):
End of explanation
# Python 2 only:
a = b / c # with any types
# Python 2 and 3:
from past.utils import old_div
a = old_div(b, c) # always same as / on Py2
Explanation: "Old division" (i.e. compatible with Py2 behaviour):
End of explanation
# Python 2 only
k = 9223372036854775808L
# Python 2 and 3:
k = 9223372036854775808
# Python 2 only
bigint = 1L
# Python 2 and 3
from builtins import int
bigint = int(1)
Explanation: Long integers
Short integers are gone in Python 3 and long has become int (without the trailing L in the repr).
End of explanation
# Python 2 only:
if isinstance(x, (int, long)):
...
# Python 3 only:
if isinstance(x, int):
...
# Python 2 and 3: option 1
from builtins import int # subclass of long on Py2
if isinstance(x, int): # matches both int and long on Py2
...
# Python 2 and 3: option 2
from past.builtins import long
if isinstance(x, (int, long)):
...
Explanation: To test whether a value is an integer (of any kind):
End of explanation
0644 # Python 2 only
0o644 # Python 2 and 3
Explanation: Octal constants
End of explanation
`x` # Python 2 only
repr(x) # Python 2 and 3
Explanation: Backtick repr
End of explanation
class BaseForm(object):
pass
class FormType(type):
pass
# Python 2 only:
class Form(BaseForm):
__metaclass__ = FormType
pass
# Python 3 only:
class Form(BaseForm, metaclass=FormType):
pass
# Python 2 and 3:
from six import with_metaclass
# or
from future.utils import with_metaclass
class Form(with_metaclass(FormType, BaseForm)):
pass
Explanation: Metaclasses
End of explanation
# Python 2 only
s1 = 'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'
# Python 2 and 3
s1 = u'The Zen of Python'
s2 = u'きたないのよりきれいな方がいい\n'
Explanation: Strings and bytes
Unicode (text) string literals
If you are upgrading an existing Python 2 codebase, it may be preferable to mark up all string literals as unicode explicitly with u prefixes:
End of explanation
# Python 2 and 3
from __future__ import unicode_literals # at top of module
s1 = 'The Zen of Python'
s2 = 'きたないのよりきれいな方がいい\n'
Explanation: The futurize and python-modernize tools do not currently offer an option to do this automatically.
If you are writing code for a new project or new codebase, you can use this idiom to make all string literals in a module unicode strings:
End of explanation
# Python 2 only
s = 'This must be a byte-string'
# Python 2 and 3
s = b'This must be a byte-string'
Explanation: See http://python-future.org/unicode_literals.html for more discussion on which style to use.
Byte-string literals
End of explanation
# Python 2 only:
for bytechar in 'byte-string with high-bit chars like \xf9':
...
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
bytechar = bytes([myint])
# Python 2 and 3:
from builtins import bytes
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
bytechar = bytes([myint])
Explanation: To loop over a byte-string with possible high-bit characters, obtaining each character as a byte-string of length 1:
End of explanation
# Python 3 only:
for myint in b'byte-string with high-bit chars like \xf9':
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1')
# Python 2 and 3:
from builtins import bytes, chr
for myint in bytes(b'byte-string with high-bit chars like \xf9'):
char = chr(myint) # returns a unicode string
bytechar = char.encode('latin-1') # forces returning a byte str
Explanation: As an alternative, chr() and .encode('latin-1') can be used to convert an int into a 1-char byte string:
End of explanation
# Python 2 only:
a = u'abc'
b = 'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 1
from past.builtins import basestring # pip install future
a = u'abc'
b = b'def'
assert (isinstance(a, basestring) and isinstance(b, basestring))
# Python 2 and 3: alternative 2: refactor the code to avoid considering
# byte-strings as strings.
from builtins import str
a = u'abc'
b = b'def'
c = b.decode()
assert isinstance(a, str) and isinstance(c, str)
# ...
Explanation: basestring
End of explanation
# Python 2 only:
templates = [u"blog/blog_post_detail_%s.html" % unicode(slug)]
# Python 2 and 3: alternative 1
from builtins import str
templates = [u"blog/blog_post_detail_%s.html" % str(slug)]
# Python 2 and 3: alternative 2
from builtins import str as text
templates = [u"blog/blog_post_detail_%s.html" % text(slug)]
Explanation: unicode
End of explanation
# Python 2 only:
from StringIO import StringIO
# or:
from cStringIO import StringIO
# Python 2 and 3:
from io import BytesIO # for handling byte strings
from io import StringIO # for handling unicode strings
Explanation: StringIO
End of explanation
# Python 2 only:
import submodule2
# Python 2 and 3:
from . import submodule2
# Python 2 and 3:
# To make Py2 code safer (more like Py3) by preventing
# implicit relative imports, you can also add this to the top:
from __future__ import absolute_import
Explanation: Imports relative to a package
Suppose the package is:
mypackage/
__init__.py
submodule1.py
submodule2.py
and the code below is in submodule1.py:
End of explanation
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
Explanation: Dictionaries
End of explanation
# Python 2 only:
for key in heights.iterkeys():
...
# Python 2 and 3:
for key in heights:
...
Explanation: Iterating through dict keys/values/items
Iterable dict keys:
End of explanation
# Python 2 only:
for value in heights.itervalues():
...
# Idiomatic Python 3
for value in heights.values(): # extra memory overhead on Py2
...
# Python 2 and 3: option 1
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
for value in heights.values(): # efficient on Py2 and Py3
...
# Python 2 and 3: option 2
from future.utils import itervalues
# or
from six import itervalues
for value in itervalues(heights):
...
Explanation: Iterable dict values:
End of explanation
# Python 2 only:
for (key, value) in heights.iteritems():
...
# Python 2 and 3: option 1
for (key, value) in heights.items(): # inefficient on Py2
...
# Python 2 and 3: option 2
from future.utils import viewitems
for (key, value) in viewitems(heights): # also behaves like a set
...
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
for (key, value) in iteritems(heights):
...
Explanation: Iterable dict items:
End of explanation
# Python 2 only:
keylist = heights.keys()
assert isinstance(keylist, list)
# Python 2 and 3:
keylist = list(heights)
assert isinstance(keylist, list)
Explanation: dict keys/values/items as a list
dict keys as a list:
End of explanation
# Python 2 only:
heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}
valuelist = heights.values()
assert isinstance(valuelist, list)
# Python 2 and 3: option 1
valuelist = list(heights.values()) # inefficient on Py2
# Python 2 and 3: option 2
from builtins import dict
heights = dict(Fred=175, Anne=166, Joe=192)
valuelist = list(heights.values())
# Python 2 and 3: option 3
from future.utils import listvalues
valuelist = listvalues(heights)
# Python 2 and 3: option 4
from future.utils import itervalues
# or
from six import itervalues
valuelist = list(itervalues(heights))
Explanation: dict values as a list:
End of explanation
# Python 2 and 3: option 1
itemlist = list(heights.items()) # inefficient on Py2
# Python 2 and 3: option 2
from future.utils import listitems
itemlist = listitems(heights)
# Python 2 and 3: option 3
from future.utils import iteritems
# or
from six import iteritems
itemlist = list(iteritems(heights))
Explanation: dict items as a list:
End of explanation
# Python 2 only
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def next(self): # Py2-style
return self._iter.next().upper()
def __iter__(self):
return self
itr = Upper('hello')
assert itr.next() == 'H' # Py2-style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 1
from builtins import object
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H' # compatible style
assert list(itr) == list('ELLO')
# Python 2 and 3: option 2
from future.utils import implements_iterator
@implements_iterator
class Upper(object):
def __init__(self, iterable):
self._iter = iter(iterable)
def __next__(self): # Py3-style iterator interface
return next(self._iter).upper() # builtin next() function calls
def __iter__(self):
return self
itr = Upper('hello')
assert next(itr) == 'H'
assert list(itr) == list('ELLO')
Explanation: Custom class behaviour
Custom iterators
End of explanation
# Python 2 only:
class MyClass(object):
def __unicode__(self):
return 'Unicode string: \u5b54\u5b50'
def __str__(self):
return unicode(self).encode('utf-8')
a = MyClass()
print(a) # prints encoded string
# Python 2 and 3:
from future.utils import python_2_unicode_compatible
@python_2_unicode_compatible
class MyClass(object):
def __str__(self):
return u'Unicode string: \u5b54\u5b50'
a = MyClass()
print(a) # prints string encoded as utf-8 on Py2
Explanation: Custom __str__ methods
End of explanation
# Python 2 only:
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __nonzero__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
# Python 2 and 3:
from builtins import object
class AllOrNothing(object):
def __init__(self, l):
self.l = l
def __bool__(self):
return all(self.l)
container = AllOrNothing([0, 100, 200])
assert not bool(container)
Explanation: Custom __nonzero__ vs __bool__ method:
End of explanation
# Python 2 only:
for i in xrange(10**8):
...
# Python 2 and 3: forward-compatible
from builtins import range
for i in range(10**8):
...
# Python 2 and 3: backward-compatible
from past.builtins import xrange
for i in xrange(10**8):
...
Explanation: Lists versus iterators
xrange
End of explanation
# Python 2 only
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 1
mylist = list(range(5)) # copies memory on Py2
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: forward-compatible: option 2
from builtins import range
mylist = list(range(5))
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: option 3
from future.utils import lrange
mylist = lrange(5)
assert mylist == [0, 1, 2, 3, 4]
# Python 2 and 3: backward compatible
from past.builtins import range
mylist = range(5)
assert mylist == [0, 1, 2, 3, 4]
Explanation: range
End of explanation
# Python 2 only:
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 1
# Idiomatic Py3, but inefficient on Py2
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 2
from builtins import map
mynewlist = list(map(f, myoldlist))
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 3
try:
from itertools import imap as map
except ImportError:
pass
mynewlist = list(map(f, myoldlist)) # inefficient on Py2
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 4
from future.utils import lmap
mynewlist = lmap(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
# Python 2 and 3: option 5
from past.builtins import map
mynewlist = map(f, myoldlist)
assert mynewlist == [f(x) for x in myoldlist]
Explanation: map
End of explanation
# Python 2 only:
from itertools import imap
myiter = imap(func, myoldlist)
assert iter(myiter) is myiter  # it is an iterator, not a list
# Python 3 only:
myiter = map(func, myoldlist)
assert iter(myiter) is myiter  # it is an iterator, not a list
# Python 2 and 3: option 1
from builtins import map
myiter = map(func, myoldlist)
assert iter(myiter) is myiter  # it is an iterator, not a list
# Python 2 and 3: option 2
try:
from itertools import imap as map
except ImportError:
pass
myiter = map(func, myoldlist)
assert iter(myiter) is myiter  # it is an iterator, not a list
Explanation: imap
End of explanation
# Python 2 only
f = open('myfile.txt')
data = f.read() # as a byte string
text = data.decode('utf-8')
# Python 2 and 3: alternative 1
from io import open
f = open('myfile.txt', 'rb')
data = f.read() # as bytes
text = data.decode('utf-8') # unicode, not bytes
# Python 2 and 3: alternative 2
from io import open
f = open('myfile.txt', encoding='utf-8')
text = f.read() # unicode, not bytes
Explanation: zip, izip
As above with zip and itertools.izip.
filter, ifilter
As above with filter and itertools.ifilter too.
Other builtins
File IO with open()
End of explanation
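For completeness, the zip/izip case mentioned above follows the same pattern as map/imap (a sketch):
# Python 2 only:
from itertools import izip
myiter = izip(mylist1, mylist2)
# Python 2 and 3: option 1
from builtins import zip
myiter = zip(mylist1, mylist2)
# Python 2 and 3: option 2
try:
    from itertools import izip as zip
except ImportError:
    pass
myiter = zip(mylist1, mylist2)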
# Python 2 only:
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
# Python 2 and 3:
from functools import reduce
assert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5
Explanation: reduce()
End of explanation
# Python 2 only:
name = raw_input('What is your name? ')
assert isinstance(name, str) # native str
# Python 2 and 3:
from builtins import input
name = input('What is your name? ')
assert isinstance(name, str) # native str on Py2 and Py3
Explanation: raw_input()
End of explanation
# Python 2 only:
input("Type something safe please: ")
# Python 2 and 3
from builtins import input
eval(input("Type something safe please: "))
Explanation: input()
End of explanation
# Python 2 only:
f = file(pathname)
# Python 2 and 3:
f = open(pathname)
# But preferably, use this:
from io import open
f = open(pathname, 'rb') # if f.read() should return bytes
# or
f = open(pathname, 'rt') # if f.read() should return unicode text
Explanation: Warning: using either of these is unsafe with untrusted input.
file()
End of explanation
# Python 2 only:
exec 'x = 10'
# Python 2 and 3:
exec('x = 10')
# Python 2 only:
g = globals()
exec 'x = 10' in g
# Python 2 and 3:
g = globals()
exec('x = 10', g)
# Python 2 only:
l = locals()
exec 'x = 10' in g, l
# Python 2 and 3:
exec('x = 10', g, l)
Explanation: exec
End of explanation
# Python 2 only:
execfile('myfile.py')
# Python 2 and 3: alternative 1
from past.builtins import execfile
execfile('myfile.py')
# Python 2 and 3: alternative 2
exec(compile(open('myfile.py').read(), 'myfile.py', 'exec'))
# This can sometimes cause this:
# SyntaxError: function ... uses import * and bare exec ...
# See https://github.com/PythonCharmers/python-future/issues/37
Explanation: But note that Py3's exec() is less powerful (and less dangerous) than Py2's exec statement.
execfile()
End of explanation
# Python 2 only:
assert unichr(8364) == '€'
# Python 3 only:
assert chr(8364) == '€'
# Python 2 and 3:
from builtins import chr
assert chr(8364) == '€'
Explanation: unichr()
End of explanation
# Python 2 only:
intern('mystring')
# Python 3 only:
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 1
from past.builtins import intern
intern('mystring')
# Python 2 and 3: alternative 2
from six.moves import intern
intern('mystring')
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from sys import intern
intern('mystring')
# Python 2 and 3: alternative 4
try:
from sys import intern
except ImportError:
pass
intern('mystring')
Explanation: intern()
End of explanation
args = ('a', 'b')
kwargs = {'kwarg1': True}
# Python 2 only:
apply(f, args, kwargs)
# Python 2 and 3: alternative 1
f(*args, **kwargs)
# Python 2 and 3: alternative 2
from past.builtins import apply
apply(f, args, kwargs)
Explanation: apply()
End of explanation
# Python 2 only:
assert chr(64) == b'@'
assert chr(200) == b'\xc8'
# Python 3 only: option 1
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 2 and 3: option 1
from builtins import chr
assert chr(64).encode('latin-1') == b'@'
assert chr(0xc8).encode('latin-1') == b'\xc8'
# Python 3 only: option 2
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
# Python 2 and 3: option 2
from builtins import bytes
assert bytes([64]) == b'@'
assert bytes([0xc8]) == b'\xc8'
Explanation: chr()
End of explanation
# Python 2 only:
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 1
from past.builtins import cmp
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
# Python 2 and 3: alternative 2
cmp = lambda x, y: (x > y) - (x < y)
assert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0
Explanation: cmp()
End of explanation
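A related idiom, added here as an aside: when the old cmp function is only needed for sorting, functools.cmp_to_key (available in Python 2.7 and 3) converts it into a key function:
# Python 2.7 and 3:
from functools import cmp_to_key
mylist = ['b', 'c', 'a']
mylist.sort(key=cmp_to_key(lambda x, y: (x > y) - (x < y)))
assert mylist == ['a', 'b', 'c']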
# Python 2 only:
reload(mymodule)
# Python 2 and 3
from imp import reload
reload(mymodule)
Explanation: reload()
End of explanation
# Python 2 only
import anydbm
import whichdb
import dbm
import dumbdbm
import gdbm
# Python 2 and 3: alternative 1
from future import standard_library
standard_library.install_aliases()
import dbm
import dbm.ndbm
import dbm.dumb
import dbm.gnu
# Python 2 and 3: alternative 2
from future.moves import dbm
from future.moves.dbm import dumb
from future.moves.dbm import ndbm
from future.moves.dbm import gnu
# Python 2 and 3: alternative 3
from six.moves import dbm_gnu
# (others not supported)
Explanation: Standard library
dbm modules
End of explanation
# Python 2 only
from commands import getoutput, getstatusoutput
# Python 2 and 3
from future import standard_library
standard_library.install_aliases()
from subprocess import getoutput, getstatusoutput
Explanation: commands / subprocess modules
End of explanation
# Python 2.7 and above
from subprocess import check_output
# Python 2.6 and above: alternative 1
from future.moves.subprocess import check_output
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from subprocess import check_output
Explanation: subprocess.check_output()
End of explanation
# Python 3.3 and above (Counter and OrderedDict alone are already in Python 2.7's collections)
from collections import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 1
from future.backports import Counter, OrderedDict, ChainMap
# Python 2.6 and above: alternative 2
from future import standard_library
standard_library.install_aliases()
from collections import Counter, OrderedDict, ChainMap
Explanation: collections: Counter, OrderedDict, ChainMap
End of explanation
# Python 2 only
from StringIO import StringIO
from cStringIO import StringIO
# Python 2 and 3
from io import BytesIO
# and refactor StringIO() calls to BytesIO() if passing byte-strings
Explanation: StringIO module
End of explanation
# Python 2 only:
import httplib
import Cookie
import cookielib
import BaseHTTPServer
import SimpleHTTPServer
import CGIHttpServer
# Python 2 and 3 (after ``pip install future``):
import http.client
import http.cookies
import http.cookiejar
import http.server
Explanation: http module
End of explanation
# Python 2 only:
import DocXMLRPCServer
import SimpleXMLRPCServer
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.server
# Python 2 only:
import xmlrpclib
# Python 2 and 3 (after ``pip install future``):
import xmlrpc.client
Explanation: xmlrpc module
End of explanation
# Python 2 and 3:
from cgi import escape
# Safer (Python 2 and 3, after ``pip install future``):
from html import escape
# Python 2 only:
from htmlentitydefs import codepoint2name, entitydefs, name2codepoint
# Python 2 and 3 (after ``pip install future``):
from html.entities import codepoint2name, entitydefs, name2codepoint
Explanation: html escaping and entities
End of explanation
# Python 2 only:
from HTMLParser import HTMLParser
# Python 2 and 3 (after ``pip install future``)
from html.parser import HTMLParser
# Python 2 and 3 (alternative 2):
from future.moves.html.parser import HTMLParser
Explanation: html parsing
End of explanation
# Python 2 only:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
# Python 3 only:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: easiest option
from future.standard_library import install_aliases
install_aliases()
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 2
from future.standard_library import hooks
with hooks():
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
# Python 2 and 3: alternative 3
from future.moves.urllib.parse import urlparse, urlencode
from future.moves.urllib.request import urlopen, Request
from future.moves.urllib.error import HTTPError
# or
from six.moves.urllib.parse import urlparse, urlencode
from six.moves.urllib.request import urlopen
from six.moves.urllib.error import HTTPError
# Python 2 and 3: alternative 4
try:
from urllib.parse import urlparse, urlencode
from urllib.request import urlopen, Request
from urllib.error import HTTPError
except ImportError:
from urlparse import urlparse
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
Explanation: urllib module
urllib is the hardest module to use from Python 2/3 compatible code. You may like to use Requests (http://python-requests.org) instead.
End of explanation
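Since the note above recommends Requests, here is a minimal sketch with that third-party package (an addition to the original sheet; it needs pip install requests and is identical on Python 2 and 3):
import requests
r = requests.get('https://www.example.com', params={'q': 'future'})
print(r.status_code)
print(r.text[:80])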
# Python 2 only:
import Tkinter
import Dialog
import FileDialog
import ScrolledText
import SimpleDialog
import Tix
import Tkconstants
import Tkdnd
import tkColorChooser
import tkCommonDialog
import tkFileDialog
import tkFont
import tkMessageBox
import tkSimpleDialog
import ttk
# Python 2 and 3 (after ``pip install future``):
import tkinter
import tkinter.dialog
import tkinter.filedialog
import tkinter.scrolledtext
import tkinter.simpledialog
import tkinter.tix
import tkinter.constants
import tkinter.dnd
import tkinter.colorchooser
import tkinter.commondialog
import tkinter.filedialog
import tkinter.font
import tkinter.messagebox
import tkinter.simpledialog
import tkinter.ttk
Explanation: Tkinter
End of explanation
# Python 2 only:
import SocketServer
# Python 2 and 3 (after ``pip install future``):
import socketserver
Explanation: socketserver
End of explanation
# Python 2 only:
import copy_reg
# Python 2 and 3 (after ``pip install future``):
import copyreg
Explanation: copy_reg, copyreg
End of explanation
# Python 2 only:
from ConfigParser import ConfigParser
# Python 2 and 3 (after ``pip install future``):
from configparser import ConfigParser
Explanation: configparser
End of explanation
# Python 2 only:
from Queue import Queue, LifoQueue, PriorityQueue
# Python 2 and 3 (after ``pip install future``):
from queue import Queue, LifoQueue, PriorityQueue
Explanation: queue
End of explanation
# Python 2 only:
from repr import aRepr, repr
# Python 2 and 3 (after ``pip install future``):
from reprlib import aRepr, repr
Explanation: repr, reprlib
End of explanation
# Python 2 only:
from UserDict import UserDict
from UserList import UserList
from UserString import UserString
# Python 3 only:
from collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 1
from future.moves.collections import UserDict, UserList, UserString
# Python 2 and 3: alternative 2
from six.moves import UserDict, UserList, UserString
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from collections import UserDict, UserList, UserString
Explanation: UserDict, UserList, UserString
End of explanation
# Python 2 only:
from itertools import ifilterfalse, izip_longest
# Python 3 only:
from itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 1
from future.moves.itertools import filterfalse, zip_longest
# Python 2 and 3: alternative 2
from six.moves import filterfalse, zip_longest
# Python 2 and 3: alternative 3
from future.standard_library import install_aliases
install_aliases()
from itertools import filterfalse, zip_longest
Explanation: itertools: filterfalse, zip_longest
End of explanation |
6,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numba Demo 2
Fibonacci Series
Let's try with the classic Fibonacci series
Step1: But the recursive implementation is not efficient: each call recurses twice
Step2: The iterative approach has better CPU complexity
Step3: The iterative approach, being linear, is already hundreds of thousands of times faster for $x=30$
Let's jit them up!
Let's try to apply JIT optimizations to both implementations
Step4: This means that an algorithmic optimization gives the same relative improvement whether or not the code is compiled natively.
Pure Python vs jitted | Python Code:
def fibonacci_r(x):
assert x >= 0, 'x must be a positive integer'
if x <= 1: # First 2 cases.
return x
return fibonacci_r(x - 1) + fibonacci_r(x - 2)
X = [x for x in range(10)]
print('X = ' + repr(X))
Y = [fibonacci_r(x) for x in X]
print('Y = ' + repr(Y))
Explanation: Numba Demo 2
Fibonacci Series
Let's try with the classic Fibonacci series:
$$f(x) = \begin{cases}
x & 0 \leq x \leq 1 \\
f(x-1) + f(x-2) & x > 1
\end{cases}$$
It's quite easy to convert it to pure Python code.
End of explanation
def fibonacci_i(x):
assert x >= 0, 'x must be a positive integer'
if x <= 1: # First 2 cases.
return x
y_2 = 0
y_1 = 1
y_0 = 0
for n in range(x - 1):
y_0 = y_1 + y_2
y_1, y_2 = y_0, y_1
return y_0
X = [x for x in range(10)]
print('X = ' + repr(X))
Y = [fibonacci_i(x) for x in X]
print('Y = ' + repr(Y))
Explanation: But the recursive implementation is not efficient: each call recurses twice, so the cost grows as
$$O(n) = 2^{n}$$
Let's convert it to an iterative function by memoizing the previous results at each iteration:
End of explanation
from IPython.display import display
from itertools import combinations
import time
import matplotlib
import os
import pandas as pd
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
%matplotlib inline
# Different platforms require different functions to properly measure current timestamp:
if os.name == 'nt':
now = time.clock
else:
now = time.time
def run_benchmarks(minX, maxX, step=1, num_runs=10, functions=None):
assert functions, 'Please pass a sequence of functions to be tested'
assert num_runs > 0, 'Number of runs must be strictly positive'
def _name(function):
return '${%s(x)}$' % function.__name__
def _measure(x, function):
T_0 = now()
for i in range(num_runs):
function(x)
return (now() - T_0) / num_runs
df = pd.DataFrame()
for function in functions:
function(minX) # This is necessary to let JIT produce the native function.
X = [x
for x in range(minX, maxX + 1, step)]
Y = [_measure(x, function) for x in X]
df[_name(function)] = pd.Series(data=Y, index=X)
plt.figure()
df.plot(figsize=(10,5),
title='$y=Log_{10}(T[f(x)])$',
style='o-',
logy=True)
if len(functions) >= 2:
comb = combinations(((_name(function), df[_name(function)].values)
for function in functions), 2)
for (nameA, timesA), (nameB, timesB) in comb:
title = '$y=\\frac{T[%s]}{T[%s]}$' % (nameA[1:-1], nameB[1:-1])
plt.figure()
(df[nameA] / df[nameB]).plot(figsize=(10,3.5),
title=title,
style='o-')
run_benchmarks(0, 30, functions=[fibonacci_r, fibonacci_i])
Explanation: The iterative approach has better CPU complexity:
$$O(n) = n$$
Benchmark
Let's define a benchmark and use it to compare the two functions:
End of explanation
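As an aside (not in the original notebook), a quick single-point comparison can also be done with the standard-library timeit module before running the full benchmark:
import timeit
t_recursive = timeit.timeit(lambda: fibonacci_r(25), number=10) / 10
t_iterative = timeit.timeit(lambda: fibonacci_i(25), number=10) / 10
print('recursive/iterative time ratio at x=25:', t_recursive / t_iterative)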
from numba import jit
@jit
def fibonacciJit_r(x):
assert x >= 0, 'x must be a positive integer'
if x <= 1: # First 2 cases.
return x
return fibonacciJit_r(x - 1) + fibonacciJit_r(x - 2)
@jit
def fibonacciJit_i(x):
assert x >= 0, 'x must be a positive integer'
if x <= 1: # First 2 cases.
return x
y_2 = 0
y_1 = 1
y_0 = 0
for n in range(x - 1):
y_0 = y_1 + y_2
y_1, y_2 = y_0, y_1
return y_0
run_benchmarks(0, 30, functions=[fibonacciJit_r, fibonacciJit_i])
Explanation: The iterative approach, being linear, is already hundreds of thousands of times faster for $x=30$.
Let's jit them up!
Let's try to apply JIT optimizations to both implementations:
End of explanation
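A brief aside that is not part of the original demo: in older Numba versions a plain @jit could silently fall back to the slower object mode when type inference failed, while njit (equivalent to jit(nopython=True)) raises an error instead, which is usually what you want for hot numeric loops. A minimal sketch of the iterative version under that assumption:
from numba import njit
@njit
def fibonacciNjit_i(x):
    if x <= 1:  # First 2 cases.
        return x
    y_2 = 0
    y_1 = 1
    y_0 = 0
    for n in range(x - 1):
        y_0 = y_1 + y_2
        y_1, y_2 = y_0, y_1
    return y_0
fibonacciNjit_i(30)  # first call triggers compilation; later calls run the native code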
run_benchmarks(100000, 1000000, step=200000, num_runs=5, functions=[fibonacci_i, fibonacciJit_i])
Explanation: This means that an algorithmic optimization gives the same relative improvement whether or not the code is compiled natively.
Pure Python vs jitted:
Let's instead compare the optimized $fibonacci_i(x)$ against $fibonacciJit_i(x)$ just to see how much a raw optimization (native vs. dynamic code) can improve things:
End of explanation |
6,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis with median derived value. Checking for jump between (blue to green) and (green to red) chips
Method of persistence determination
Step1: First let's examine the range of persistence values
Step2: Now let's see which fibers are being affected
Step3: Let's see if the persistence flags in apStar match the stars I've found
STARFLAG with bit 9 corresponds to >20% of spectrum falling in high persistence region
... persistence(bluegreen) > 3
Step4: ... persistence(bluegreen) > 8
Step5: ... persistence(greenred) > 3
Step6: ... persistence(greenred) > 8 | Python Code:
f = '/home/spiffical/data/stars/apStar_visits_quantifypersist_med.txt'
persist_vals_med1=[]
persist_vals_med2=[]
fibers_med = []
snr_combined_med = []
starflags_indiv_med = []
loc_ids_med = []
ap_ids_med = []
fi = open(f)
for j, line in enumerate(fi):
# Get values
line = line.split()
persist1 = float(line[-2])
persist2 = float(line[-1])
fiber = int(line[-3])
snr_comb = float(line[-6])
starflag_indiv = float(line[0])
loc_id = line[-4]
ap_id = line[-5]
# Append to lists
persist_vals_med1.append(persist1)
persist_vals_med2.append(persist2)
fibers_med.append(fiber)
snr_combined_med.append(snr_comb)
starflags_indiv_med.append(starflag_indiv)
loc_ids_med.append(loc_id)
ap_ids_med.append(ap_id)
fi.close()
Explanation: Analysis with median derived value. Checking for jump between (blue to green) and (green to red) chips
Method of persistence determination:
found median of last 100 points in chip 1
found median and std. dev. of first 100 points in chip 2
persist_val = ( median(chip1) - median(chip2) ) / std(chip2)
Each line in file has format:
(starflag_indiv, starflag_comb, aspcapflag, targflag_1, targflag_2, SNR_visit, SNR_combined, ap_id, loc_id, fiber, (bluegreen)persist, (greenred)persist)
End of explanation
# Get rid of nans and infs in (bluegreen) jump
nan_list2 = np.isnan(persist_vals_med1)
inf_list2 = np.isinf(persist_vals_med1)
comb_list = np.invert([a or b for a,b in zip(nan_list2, inf_list2)]) # invert so we keep non-nans
persist_vals_med1 = np.asarray(persist_vals_med1)[comb_list]
fibers_med1 = np.asarray(fibers_med)[comb_list]
snr_combined_med1 = np.asarray(snr_combined_med)[comb_list]
starflags_indiv_med1 = np.asarray(starflags_indiv_med)[comb_list]
loc_ids_med1 = np.asarray(loc_ids_med)[comb_list]
ap_ids_med1 = np.asarray(ap_ids_med)[comb_list]
# Get rid of nans and infs in (greenred) jump
nan_list3 = np.isnan(persist_vals_med2)
inf_list3 = np.isinf(persist_vals_med2)
comb_list2 = np.invert([a or b for a,b in zip(nan_list3, inf_list3)]) # invert so we keep non-nans
persist_vals_med2 = np.asarray(persist_vals_med2)[comb_list2]
fibers_med2 = np.asarray(fibers_med)[comb_list2]
snr_combined_med2 = np.asarray(snr_combined_med)[comb_list2]
starflags_indiv_med2 = np.asarray(starflags_indiv_med)[comb_list2]
loc_ids_med2 = np.asarray(loc_ids_med)[comb_list2]
ap_ids_med2 = np.asarray(ap_ids_med)[comb_list2]
fig, ax = plt.subplots(figsize=(12,12))
ax.hist(persist_vals_med1, bins=4000, alpha=0.6, label='Blue to green')
ax.hist(persist_vals_med2, bins=4000, alpha=0.6, label='Green to red' )
ax.set_xlim((min(persist_vals_med2), 30))
ax.set_xlabel(r'Persistence value ($\sigma$)', size=20)
ax.set_ylabel('# of spectra', size=20)
ax.legend(loc=0, prop={'size':15})
plt.show()
Explanation: First let's examine the range of persistence values
End of explanation
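A small aside, not part of the original analysis: numpy.isfinite performs both the NaN and the inf check in one call, so the mask built above from np.isnan and np.isinf could also be written as:
# equivalent finite-value mask (True where the value is neither NaN nor +/-inf)
comb_list_alt = np.isfinite(np.asarray(raw_values, dtype=float))  # raw_values stands for the unfiltered persistence list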
# High (>3sigma) blue to green persistence
high_persist_med1_indx3 = np.abs(persist_vals_med1)>3
high_persist_med1_vals3 = persist_vals_med1[high_persist_med1_indx3]
high_persist_med_fibers3 = fibers_med1[high_persist_med1_indx3]  # use the NaN/inf-filtered arrays so lengths match the mask
high_persist_med_snr3 = snr_combined_med1[high_persist_med1_indx3]
high_persist_med_starflag3 = starflags_indiv_med1[high_persist_med1_indx3]
high_persist_med_ap3 = ap_ids_med1[high_persist_med1_indx3]
high_persist_med_loc3 = loc_ids_med1[high_persist_med1_indx3]
# Really high (>8sigma) blue to green persistence
high_persist_med1_indx8 = np.abs(persist_vals_med1)>8
high_persist_med1_vals8 = persist_vals_med1[high_persist_med1_indx8]
high_persist_med_fibers8 = fibers_med1[high_persist_med1_indx8]  # use the NaN/inf-filtered arrays so lengths match the mask
high_persist_med_snr8 = snr_combined_med1[high_persist_med1_indx8]
high_persist_med_starflag8 = starflags_indiv_med1[high_persist_med1_indx8]
high_persist_med_ap8 = ap_ids_med1[high_persist_med1_indx8]
high_persist_med_loc8 = loc_ids_med1[high_persist_med1_indx8]
# High (>3sigma) green to red persistence
high_persist_med2_indx3 = np.abs(persist_vals_med2)>3
high_persist_med2_vals3 = persist_vals_med2[high_persist_med2_indx3]
high_persist_med2_fibers3 = fibers_med2[high_persist_med2_indx3]
high_persist_med2_snr3 = snr_combined_med2[high_persist_med2_indx3]
high_persist_med2_starflag3 = starflags_indiv_med2[high_persist_med2_indx3]
high_persist_med2_ap3 = ap_ids_med2[high_persist_med2_indx3]
high_persist_med2_loc3 = loc_ids_med2[high_persist_med2_indx3]
# Really high (>8sigma) green to red persistence
high_persist_med2_indx8 = np.abs(persist_vals_med2)>8
high_persist_med2_vals8 = persist_vals_med2[high_persist_med2_indx8]
high_persist_med2_fibers8 = fibers_med2[high_persist_med2_indx8]
high_persist_med2_snr8 = snr_combined_med2[high_persist_med2_indx8]
high_persist_med2_starflag8 = starflags_indiv_med2[high_persist_med2_indx8]
high_persist_med2_ap8 = ap_ids_med2[high_persist_med2_indx8]
high_persist_med2_loc8 = loc_ids_med2[high_persist_med2_indx8]
fig, ax = plt.subplots(figsize=(12,12))
ax.hist(high_persist_med_fibers3, bins=300, alpha=0.6, label='Blue to green')
ax.hist(high_persist_med2_fibers3, bins=300, alpha=0.6, label='Green to red')
ax.set_xlabel('Fiber #', size=20)
ax.set_ylabel(r'# of spectra with persistence > 3$\sigma$', size=20)
ax.set_xlim((-5,305))
ax.annotate("Total # of affected spectra: "+str(len(high_persist_med2_fibers3) + len(high_persist_med_fibers3)),
xy=(0.3, 0.9), xycoords="axes fraction", size=15)
ax.legend(loc=0, prop={'size':15})
plt.show()
fig, ax = plt.subplots(figsize=(12,12))
ax.hist(high_persist_med_fibers8, bins=300, alpha=0.6, label='Blue to green')
ax.hist(high_persist_med2_fibers8, bins=300, alpha=0.6, label='Green to red')
ax.set_xlabel('Fiber #', size=20)
ax.set_ylabel(r'# of spectra with persistence > 8$\sigma$', size=20)
ax.annotate("Total # of affected spectra: "+str(len(high_persist_med2_fibers8) + len(high_persist_med_fibers8)),
xy=(0.3, 0.9), xycoords="axes fraction", size=15)
ax.set_xlim((-5,305))
ax.legend(loc=0, prop={'size':15})
plt.show()
Explanation: Now let's see which fibers are being affected
End of explanation
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med_starflag3[high_persist_med_starflag3==2**bit]))
print 'total in my database: %s ' %len(high_persist_med_starflag3)
print 'fraction: ', len(high_persist_med_starflag3[high_persist_med_starflag3==2**bit])*1./len(high_persist_med_starflag3)
Explanation: Let's see if the persistence flags in apStar match the stars I've found
STARFLAG with bit 9 corresponds to >20% of spectrum falling in high persistence region
... persistence(bluegreen) > 3
End of explanation
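An aside that is not part of the original notebook: the equality test used below (starflag == 2**bit) only counts spectra whose STARFLAG is exactly 2**9, i.e. where bit 9 is the only bit set. If the intent is to count every spectrum with bit 9 set regardless of other flags, a bitwise test would look like this sketch:
flags = np.asarray(high_persist_med_starflag3, dtype=int)
n_bit9 = np.sum((flags & (1 << 9)) != 0)  # spectra with the persistence bit set, other bits allowed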
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med_starflag8[high_persist_med_starflag8==2**bit]))
print 'total in my database: %s ' %len(high_persist_med_starflag8)
print 'fraction: ', len(high_persist_med_starflag8[high_persist_med_starflag8==2**bit])*1./len(high_persist_med_starflag8)
Explanation: ... persistence(bluegreen) > 8
End of explanation
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med2_starflag3[high_persist_med2_starflag3==2**bit]))
print 'total in my database: %s ' %len(high_persist_med2_starflag3)
print 'fraction: ', len(high_persist_med2_starflag3[high_persist_med2_starflag3==2**bit])*1./len(high_persist_med2_starflag3)
Explanation: ... persistence(greenred) > 3
End of explanation
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med2_starflag8[high_persist_med2_starflag8==2**bit]))
print 'total in my database: %s ' %len(high_persist_med2_starflag8)
print 'fraction: ', len(high_persist_med2_starflag8[high_persist_med2_starflag8==2**bit])*1./len(high_persist_med2_starflag8)
Explanation: ... persistence(greenred) > 8
End of explanation |
6,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
text plot
This notebook is designed to demonstrate (and so document) how to use the shap.plots.text function. It uses a distilled PyTorch BERT model from the transformers package to do sentiment analysis of IMDB movie reviews.
Note that the prediction function we define takes a list of strings and returns a logit value for the positive class.
Step1: Single instance text plot
When we pass a single instance to the text plot we get the importance of each token overlayed on the original text that corresponds to that token. Red regions correspond to parts of the text that increase the output of the model when they are included, while blue regions decrease the output of the model when they are included. In the context of the sentiment analysis model here red corresponds to a more positive review and blue a more negative review.
Note that importance values returned for text models are often hierarchical and follow the structure of the text. Nonlinear interactions between groups of tokens are often saved and can be used during the plotting process. If the Explanation object passed to the text plot has a .hierarchical_values attribute, then small groups of tokens with strong non-linear effects among them will be auto-merged together to form coherent chunks. When the .hierarchical_values attribute is present it also means that the explainer may not have completely enumerated all possible token perturbations and so has treated chunks of the text as essentially a single unit. This happens since we often want to explain a text model while evaluating it fewer times than the numbers of tokens in the document. Whenever a region of the input text is not split by the explainer, it is show by the text plot as a single unit.
The force plot above the text is designed to provide an overview of how all the parts of the text combine to produce the model's output. See the force plot notebook for more details, but the general structure of the plot is positive red features "pushing" the model output higher while negative blue features "push" the model output lower. The force plot provides much more quantitative information than the text coloring. Hovering over a chuck of text will underline the portion of the force plot that corresponds to that chunk of text, and hovering over a portion of the force plot will underline the corresponding chunk of text.
Note that clicking on any chunk of text will show the sum of the SHAP values attributed to the tokens in that chunk (clicked again will hide the value).
Step2: Multiple instance text plot
When we pass a multi-row explanation object to the text plot we get the single instance plots for each input instance scaled so they have consistent comparable x-axis and color ranges.
Step3: Summarizing text explanations
While plotting several instance-level explanations using the text plot can be very informative, sometime you want global summaries of the impact of tokens over the a large set of instances. See the Explanation object documentation for more details, but you can easily summarize the importance of tokens in a dataset by collapsing a multi-row explanation object over all it's rows (in this case by summing). Doing this treats every text input token type as a feature, so the collapsed Explanation object will have as many columns as there were unique tokens in the orignal multi-row explanation object. If there are hierarchical values present in the Explanation object then any large groups are divided up and each token in the gruop is given an equal share of the overall group importance value.
Step4: Note that how you summarize the importance of features can make a big difference. In the plot above the a token was very importance both because it had an impact on the model, and because it was very common. Below we instead summize the instances using the max function to see the largest impact of a token in any instance.
Step5: You can also slice out a single token from all the instances by using that token as an input name (note that the gray values to the left of the input names are the original text that the token was generated from).
Step6: Text-To-Text Visualization
Step7: Text-To-Text Visualization contains the input text to the model on the left side and output text on the right side (in the default layout). On hovering over a token on the right (output) side the importance of each input token is overlayed on it, and is signified by the background color of the token. Red regions correspond to parts of the text that increase the output of the model when they are included, while blue regions decrease the output of the model when they are included. The explanation for a particular output token can be anchored by clickling on the output token (it can be un-anchored by clicking again).
Note that similar to the single output plots described above, importance values returned for text models are often hierarchical and follow the structure of the text. Small groups of tokens with strong non-linear effects among them will be auto-merged together to form coherent chunks. Similarly, The explainer may not have completely enumerated all possible token perturbations and so has treated chunks of the text as essentially a single unit. This preprocessing is done for each output token, and the merging behviour can differ for each output token, since the interation effects might be different for each output token. The merged chunks can be viewed by hovering over the input text, once an output token is anchored. All the tokens of a merged chunk are made bold.
Once the ouput text is anchored the input tokens can be clicked on to view the exact shap value (Hovering over input token also brings up a tooltip with the values). Auto merged tokens show the total values divided over the number of tokens in that chunk.
Hovering over the input text shows the SHAP value for that token for each output token. This is again signified by the background color of the output token. This can be anchored by clicking on the input token.
Note | Python Code:
import shap
import transformers
import nlp
import torch
import numpy as np
import scipy as sp
# load a BERT sentiment analysis model
tokenizer = transformers.DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = transformers.DistilBertForSequenceClassification.from_pretrained(
"distilbert-base-uncased-finetuned-sst-2-english"
).cuda()
# define a prediction function
def f(x):
tv = torch.tensor([tokenizer.encode(v, padding='max_length', max_length=500, truncation=True) for v in x]).cuda()
outputs = model(tv)[0].detach().cpu().numpy()
scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T
val = sp.special.logit(scores[:,1]) # use one vs rest logit units
return val
# build an explainer using a token masker
explainer = shap.Explainer(f, tokenizer)
# explain the model's predictions on IMDB reviews
imdb_train = nlp.load_dataset("imdb")["train"]
shap_values = explainer(imdb_train[:10], fixed_context=1)
Explanation: text plot
This notebook is designed to demonstrate (and so document) how to use the shap.plots.text function. It uses a distilled PyTorch BERT model from the transformers package to do sentiment analysis of IMDB movie reviews.
Note that the prediction function we define takes a list of strings and returns a logit value for the positive class.
End of explanation
# plot the first sentence's explanation
shap.plots.text(shap_values[3])
Explanation: Single instance text plot
When we pass a single instance to the text plot we get the importance of each token overlayed on the original text that corresponds to that token. Red regions correspond to parts of the text that increase the output of the model when they are included, while blue regions decrease the output of the model when they are included. In the context of the sentiment analysis model here red corresponds to a more positive review and blue a more negative review.
Note that importance values returned for text models are often hierarchical and follow the structure of the text. Nonlinear interactions between groups of tokens are often saved and can be used during the plotting process. If the Explanation object passed to the text plot has a .hierarchical_values attribute, then small groups of tokens with strong non-linear effects among them will be auto-merged together to form coherent chunks. When the .hierarchical_values attribute is present it also means that the explainer may not have completely enumerated all possible token perturbations and so has treated chunks of the text as essentially a single unit. This happens since we often want to explain a text model while evaluating it fewer times than the number of tokens in the document. Whenever a region of the input text is not split by the explainer, it is shown by the text plot as a single unit.
The force plot above the text is designed to provide an overview of how all the parts of the text combine to produce the model's output. See the force plot notebook for more details, but the general structure of the plot is positive red features "pushing" the model output higher while negative blue features "push" the model output lower. The force plot provides much more quantitative information than the text coloring. Hovering over a chunk of text will underline the portion of the force plot that corresponds to that chunk of text, and hovering over a portion of the force plot will underline the corresponding chunk of text.
Note that clicking on any chunk of text will show the sum of the SHAP values attributed to the tokens in that chunk (clicking again will hide the value).
End of explanation
# plot the first sentence's explanation
shap.plots.text(shap_values[:3])
Explanation: Multiple instance text plot
When we pass a multi-row explanation object to the text plot we get the single instance plots for each input instance scaled so they have consistent comparable x-axis and color ranges.
End of explanation
shap.plots.bar(shap_values.abs.sum(0))
Explanation: Summarizing text explanations
While plotting several instance-level explanations using the text plot can be very informative, sometimes you want global summaries of the impact of tokens over a large set of instances. See the Explanation object documentation for more details, but you can easily summarize the importance of tokens in a dataset by collapsing a multi-row explanation object over all its rows (in this case by summing). Doing this treats every text input token type as a feature, so the collapsed Explanation object will have as many columns as there were unique tokens in the original multi-row explanation object. If there are hierarchical values present in the Explanation object then any large groups are divided up and each token in the group is given an equal share of the overall group importance value.
End of explanation
shap.plots.bar(shap_values.abs.max(0))
Explanation: Note that how you summarize the importance of features can make a big difference. In the plot above the 'a' token was very important both because it had an impact on the model, and because it was very common. Below we instead summarize the instances using the max function to see the largest impact of a token in any instance.
End of explanation
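Another common summary, shown as a brief aside (it is not part of the original walkthrough, and it assumes the Explanation object supports collapsing with mean just as it does with sum and max above), is the mean absolute SHAP value, which sits between the sum and max views:
shap.plots.bar(shap_values.abs.mean(0))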
shap.plots.bar(shap_values[:,"but"])
shap.plots.bar(shap_values[:,"but"])
Explanation: You can also slice out a single token from all the instances by using that token as an input name (note that the gray values to the left of the input names are the original text that the token was generated from).
End of explanation
import numpy as np
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import shap
import torch
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-es").cuda()
s=["In this picture, there are four persons: my father, my mother, my brother and my sister."]
explainer = shap.Explainer(model,tokenizer)
shap_values = explainer(s)
Explanation: Text-To-Text Visualization
End of explanation
shap.plots.text(shap_values)
Explanation: Text-To-Text Visualization contains the input text to the model on the left side and output text on the right side (in the default layout). On hovering over a token on the right (output) side the importance of each input token is overlaid on it, and is signified by the background color of the token. Red regions correspond to parts of the text that increase the output of the model when they are included, while blue regions decrease the output of the model when they are included. The explanation for a particular output token can be anchored by clicking on the output token (it can be un-anchored by clicking again).
Note that similar to the single output plots described above, importance values returned for text models are often hierarchical and follow the structure of the text. Small groups of tokens with strong non-linear effects among them will be auto-merged together to form coherent chunks. Similarly, the explainer may not have completely enumerated all possible token perturbations and so has treated chunks of the text as essentially a single unit. This preprocessing is done for each output token, and the merging behaviour can differ for each output token, since the interaction effects might be different for each output token. The merged chunks can be viewed by hovering over the input text, once an output token is anchored. All the tokens of a merged chunk are made bold.
Once the output text is anchored the input tokens can be clicked on to view the exact SHAP value (hovering over an input token also brings up a tooltip with the values). Auto-merged tokens show the total value divided over the number of tokens in that chunk.
Hovering over the input text shows the SHAP value for that token for each output token. This is again signified by the background color of the output token. This can be anchored by clicking on the input token.
Note: The color scaling for all tokens (input and output) is consistent, and the brightest red is assigned to the maximum SHAP value of input tokens for any output token.
Note: The layout of the two pieces of text can be changed by using the 'Layout' Drop down menu.
End of explanation |
6,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In the previous two lessons, we learned about the three operations that carry out feature extraction from an image
Step1: Stride
The distance the window moves at each step is called the stride. We need to specify the stride in both dimensions of the image
Step2: The VGG architecture is fairly simple. It uses convolution with strides of 1 and maximum pooling with $2 \times 2$ windows and strides of 2. We've included a function in the visiontools utility script that will show us all the steps.
Step3: And that works pretty well! The kernel was designed to detect horizontal lines, and we can see that in the resulting feature map the more horizontal parts of the input end up with the greatest activation.
What would happen if we changed the strides of the convolution to 3? | Python Code:
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.Conv2D(filters=64,
kernel_size=3,
strides=1,
padding='same',
activation='relu'),
layers.MaxPool2D(pool_size=2,
strides=1,
padding='same')
# More layers follow
])
Explanation: Introduction
In the previous two lessons, we learned about the three operations that carry out feature extraction from an image:
1. filter with a convolution layer
2. detect with ReLU activation
3. condense with a maximum pooling layer
The convolution and pooling operations share a common feature: they are both performed over a sliding window. With convolution, this "window" is given by the dimensions of the kernel, the parameter kernel_size. With pooling, it is the pooling window, given by pool_size.
<figure>
<img src="https://i.imgur.com/LueNK6b.gif" width=400 alt="A 2D sliding window.">
</figure>
There are two additional parameters affecting both convolution and pooling layers -- these are the strides of the window and whether to use padding at the image edges. The strides parameter says how far the window should move at each step, and the padding parameter describes how we handle the pixels at the edges of the input.
With these two parameters, defining the two layers becomes:
End of explanation
#$HIDE_INPUT$
import tensorflow as tf
import matplotlib.pyplot as plt
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
image = circle([64, 64], val=1.0, r_shrink=3)
image = tf.reshape(image, [*image.shape, 1])
# Bottom sobel
kernel = tf.constant(
[[-1, -2, -1],
[0, 0, 0],
[1, 2, 1]],
)
show_kernel(kernel)
Explanation: Stride
The distance the window moves at each step is called the stride. We need to specify the stride in both dimensions of the image: one for moving left to right and one for moving top to bottom. This animation shows strides=(2, 2), a movement of 2 pixels each step.
<figure>
<img src="https://i.imgur.com/Tlptsvt.gif" width=400 alt="Sliding window with a stride of (2, 2).">
</figure>
What effect does the stride have? Whenever the stride in either direction is greater than 1, the sliding window will skip over some of the pixels in the input at each step.
Because we want high-quality features to use for classification, convolutional layers will most often have strides=(1, 1). Increasing the stride means that we miss out on potentially valuable information in our summary. Maximum pooling layers, however, will almost always have stride values greater than 1, like (2, 2) or (3, 3), but not larger than the window itself.
Finally, note that when the value of the strides is the same number in both directions, you only need to set that number; for instance, instead of strides=(2, 2), you could use strides=2 for the parameter setting.
Padding
When performing the sliding window computation, there is a question as to what to do at the boundaries of the input. Staying entirely inside the input image means the window will never sit squarely over these boundary pixels like it does for every other pixel in the input. Since we aren't treating all the pixels exactly the same, could there be a problem?
What the convolution does with these boundary values is determined by its padding parameter. In TensorFlow, you have two choices: either padding='same' or padding='valid'. There are trade-offs with each.
When we set padding='valid', the convolution window will stay entirely inside the input. The drawback is that the output shrinks (loses pixels), and shrinks more for larger kernels. This will limit the number of layers the network can contain, especially when inputs are small in size.
The alternative is to use padding='same'. The trick here is to pad the input with 0's around its borders, using just enough 0's to make the size of the output the same as the size of the input. This can have the effect however of diluting the influence of pixels at the borders. The animation below shows a sliding window with 'same' padding.
<figure>
<img src="https://i.imgur.com/RvGM2xb.gif" width=400 alt="Illustration of zero (same) padding.">
</figure>
The VGG model we've been looking at uses same padding for all of its convolutional layers. Most modern convnets will use some combination of the two. (Another parameter to tune!)
Example - Exploring Sliding Windows
To better understand the effect of the sliding window parameters, it can help to observe a feature extraction on a low-resolution image so that we can see the individual pixels. Let's just look at a simple circle.
This next hidden cell will create an image and kernel for us.
End of explanation
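As a quick aside (not part of the lesson), you can verify the effect of the two padding choices on output size directly; with a 64-pixel input and a 3x3 kernel, 'same' keeps 64 pixels while 'valid' trims the border:
import tensorflow as tf
x = tf.random.normal([1, 64, 64, 1])
print(tf.keras.layers.Conv2D(1, kernel_size=3, padding='same')(x).shape)   # (1, 64, 64, 1)
print(tf.keras.layers.Conv2D(1, kernel_size=3, padding='valid')(x).shape)  # (1, 62, 62, 1)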
show_extraction(
image, kernel,
# Window parameters
conv_stride=1,
pool_size=2,
pool_stride=2,
subplot_shape=(1, 4),
figsize=(14, 6),
)
Explanation: The VGG architecture is fairly simple. It uses convolution with strides of 1 and maximum pooling with $2 \times 2$ windows and strides of 2. We've included a function in the visiontools utility script that will show us all the steps.
End of explanation
show_extraction(
image, kernel,
# Window parameters
conv_stride=3,
pool_size=2,
pool_stride=2,
subplot_shape=(1, 4),
figsize=(14, 6),
)
Explanation: And that works pretty well! The kernel was designed to detect horizontal lines, and we can see that in the resulting feature map the more horizontal parts of the input end up with the greatest activation.
What would happen if we changed the strides of the convolution to 3?
End of explanation |
6,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with auxi's Ideal Gas Models
Purpose
The purpose of this example is to introduce and demonstrate the idealgas model classes in auxi's material physical property tools package.
Background
The idealgas models provide tools to calculate material physical properties of ideal gases by making use of gas state variables such as temperature, pressure and composition. It is important to keep in mind that a gas behaves ideally at high temperatures and low pressures.
Items Covered
The following items in auxi are discussed and demonstrated in this example
Step1: Demonstrations
Using the BetaT model
This model describes the variation in the thermal expansion coefficient of an ideal gas as a function of temperature.
Calculating BetaT for a Single Temperature
As a basic example, let's calculate the thermal expansion coefficient at a single temperature
Step2: Calculating BetaT for Multiple Temperatures
Now to show the potential of this model, let's calculate at multiple temperatures and plot it.
Step3: Using the RhoT model
This model describes the variation in density of an ideal gas as a function of temperature.
Calculating RhoT at a Single Temperature
As a basic example let's calculate the density at a single temperature
Step4: Calculating RhoT for Multiple Temperatures
Now to show the potential of this model, let's calculate at multiple temperatures and plot it.
Step5: Using the RhoTP model
This model describes the variation in density of an ideal gas as a function of temperature and pressure.
Calculating RhoTP at a Single Temperature and Pressure
As a basic example, let's calculate the density at a single temperature and pressure
Step6: Calculating RhoTP at Multiple Pressures
Now to show the potential of this model, let's calculate at multiple pressures and plot it.
Step7: RhoTPx model
This model describes the variation in density of an ideal gas as a function of temperature, pressure, and molar composition.
Calculating RhoTPx at a Single Gas State
As a basic example, let's calculate the density at a single temperature, pressure and molar composition for a mixture of two gases
Step8: Calculating RhoTPx as a Function of Composition
Now let's calculate the density at a single temperature and pressure, for a range of molar compostions for a mixture of two gases | Python Code:
# import some tools to use in this example
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Working with auxi's Ideal Gas Models
Purpose
The purpose of this example is to introduce and demonstrate the idealgas model classes in auxi's material physical property tools package.
Background
The idealgas models provide tools to calculate material physical properties of ideal gases by making use of gas state variables such as temperature, pressure and composition. It is important to keep in mind that a gas behaves ideally at high temperatures and low pressures.
Items Covered
The following items in auxi are discussed and demonstrated in this example:
* auxi.tools.materialphysicalproperties.idealgas.BetaT
* auxi.tools.materialphysicalproperties.idealgas.RhoT
* auxi.tools.materialphysicalproperties.idealgas.RhoTP
* auxi.tools.materialphysicalproperties.idealgas.RhoTPx
Example Scope
In this example we will address the following aspects:
1. Using the BetaT model
1. Using the RhoT model
1. Using the RhoTP model
1. Using the RhoTPx model
End of explanation
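For reference (an addition to this walkthrough, not taken from the auxi documentation), the standard ideal gas relations these property models are presumably built on are
$$\rho = \frac{PM}{RT}, \qquad \beta = -\frac{1}{\rho}\left(\frac{\partial \rho}{\partial T}\right)_P = \frac{1}{T}$$
where $M$ is the molar mass and $R \approx 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}$ is the gas constant.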
# import the model class
from auxi.tools.materialphysicalproperties.idealgas import BetaT
# create a model object
βT = BetaT()
# define the state of the gas
T = 500.0 # [K]
# calculate the gas density
β = βT(T=T)
print ("β =", β, βT.units)
Explanation: Demonstrations
Using the BetaT model
This model describes the variation in the thermal expansion coefficient of an ideal gas as a function of temperature.
Calculating BetaT for a Single Temperature
As a basic example, let's calculate the thermal expansion coefficient at a single temperature:
End of explanation
# calculate the gas density
Ts = list(range(400, 1550, 50)) # [K]
β = [βT(T=T) for T in Ts]
# plot a graph
plt.plot(Ts, β, "bo", alpha = 0.5)
plt.xlabel('$T$ [K]')
plt.ylabel('$%s$ [%s]' % (βT.display_symbol, βT.units))
plt.show()
Explanation: Calculating BetaT for Multiple Temperatures
Now to show the potential of this model, let's calculate at multiple temperatures and plot it.
End of explanation
# import the molar mass function
from auxi.tools.chemistry.stoichiometry import molar_mass as mm
# import the model class
from auxi.tools.materialphysicalproperties.idealgas import RhoT
# create a model object
# Since the model only calculates as a function temperature, we need to specify
# pressure and average molar mass when we create it.
ρT = RhoT(molar_mass=mm('CO2'), P=101325.0)
# define the state of the gas
T = 500.0 # [K]
# calculate the gas density
ρ = ρT.calculate(T=T)
print(ρT.symbol, "=", ρ, ρT.units)
Explanation: Using the RhoT model
This model describes the variation in density of an ideal gas as a function of temperature.
Calculating RhoT at a Single Temperature
As a basic example let's calculate the density at a single temperature:
End of explanation
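As a quick sanity check (an aside; it assumes RhoT implements the ideal gas law quoted earlier), the analytic density at the same state can be computed by hand:
R = 8.314                 # gas constant [J/(mol.K)]
M = mm('CO2') / 1000.0    # molar mass [kg/mol]
rho_analytic = 101325.0 * M / (R * 500.0)   # same P and T as the call above
print('analytic rho =', rho_analytic, 'kg/m3')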
# calculate the gas density
Ts = list(range(400, 1550, 50)) # [K]
ρs = [ρT(T=T) for T in Ts]
# plot a graph
plt.plot(Ts, ρs, "bo", alpha = 0.7)
plt.xlabel('$T$ [K]')
plt.ylabel('$%s$ [%s]' % (ρT.display_symbol, ρT.units))
plt.show()
Explanation: Calculating RhoT for Multiple Temperatures
Now to show the potential of this model, let's calculate at multiple temperatures and plot it.
End of explanation
# import the model class
from auxi.tools.materialphysicalproperties.idealgas import RhoTP
# create a model object
# Since the model only calculates as a function of temperature and pressure,
# we need to specify an average molar mass when we create it.
ρTP = RhoTP(mm('CO2'))
# define the state of the gas
T = 500.0 # [K]
P = 101325.0 # [Pa]
# calculate the gas density
ρ = ρTP.calculate(T=T,P=P)
print(ρTP.symbol, "=", ρ, ρTP.units)
Explanation: Using the RhoTP model
This model describes the variation in density of an ideal gas as a function of temperature and pressure.
Calculating RhoTP at a Single Temperature and Pressure
As a basic example, let's calculate the density at a single temperature and pressure:
End of explanation
# define the state of the gas
T = 700.0 # [K]
Ps = np.linspace(0.5*101325, 5*101325) # [Pa]
# calculate the gas density
ρs = [ρTP(T=T, P=P) for P in Ps]
# plot a graph
plt.plot(Ps, ρs, "bo", alpha = 0.7)
plt.xlabel('$P$ [Pa]')
plt.ylabel('$%s$ [%s]' % (ρTP.display_symbol, ρTP.units))
plt.show()
Explanation: Calculating RhoTP at Multiple Pressures
Now to show the potential of this model, let's calculate at multiple pressures and plot it.
End of explanation
# import the model class
from auxi.tools.materialphysicalproperties.idealgas import RhoTPx
# create a model object
ρTPx = RhoTPx()
# define the state of the gas
T = 700.0 # [K]
P = 101325.0 # [Pa]
x = {'H2':0.5, 'Ar':0.5} # [mole fraction]
# calculate the gas density
ρ = ρTPx(T=T, P=P, x=x)
print(ρTPx.symbol, "=", ρ, ρTPx.units)
Explanation: RhoTPx model
This model describes the variation in density of an ideal gas as a function of temperature, pressure, and molar composition.
Calculating RhoTPx at a Single Gas State
As a basic example, let's calculate the density at a single temperature, pressure and molar composition for a mixture of two gases:
End of explanation
# define the state of the gas
T = 700.0 # [K]
P = 101325.0 # [Pa]
xs_h2 = np.arange(0,1.1,0.1) # [mole fraction H2]
# calculate density as a function of composition for a binary Ar-H2 gas mixture
ρs = [ρTPx(T=T, P=P, x={'Ar': 1-x, 'H2': x}) for x in xs_h2]
# plot a graph
plt.plot(xs_h2, ρs, "bo", alpha = 0.7)
plt.xlim((0,1))
plt.xlabel('$x_{H_2}$ [mol]')
plt.ylabel('$%s$ [%s]' % (ρTPx.display_symbol, ρTPx.units))
plt.show()
Explanation: Calculating RhoTPx as a Function of Composition
Now let's calculate the density at a single temperature and pressure, for a range of molar compositions for a mixture of two gases:
End of explanation |
6,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Motivation
<br>
<font color=red size=+3>Know what you eat, </font>
<font color=green size=+3> Gain insight into food.</font>
<a href=https
Step1: Import libraries
Step7: User-defined functions
Step8: Load dataset
https
Step9: Pre-processing data
Drop less useful columns
Step10: Fix missing value
Step11: Standardize country code
Step12: Extract serving_size into gram value
Step13: Parse additives
Step14: Organic or Not
[TODO]
pick up word 'Organic' from product_name column
pick up word 'Organic','org' from ingredients_text column
Add creation_date
Step15: Visualize Food features
Food labels yearly trend
Step16: Top countries
Step17: Nutrition grade
Step18: Nutrition score
Step19: Serving size
Step20: Energy, fat, ...
Energy
Fat
Saturated-Fat
Trans-Fat
Step21: Carbohydrates, protein, fiber
Carbohydrates
Cholesterol
Proteins
Fiber
Step22: Sugar, Vitamins
Sugars
Salt
Vitamin-A
Vitamin-C
Step23: Minerals
Calcium
Iron
Sodium
Step24: Explore food label
Are American and French food different?
Step25: Who eats less sweet food? | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Motivation
<br>
<font color=red size=+3>Know what you eat, </font>
<font color=green size=+3> Gain insight into food.</font>
<a href=https://world.openfoodfacts.org/>
<img src=https://static.openfoodfacts.org/images/misc/openfoodfacts-logo-en-178x150.png width=300 height=200>
</a>
<br>
<font color=blue size=+2>What can be learned from the Open Food Facts dataset? </font>
End of explanation
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
Explanation: Import libraries
End of explanation
def remove_na_rows(df, cols=None):
    """remove row with NaN in any column"""
if cols is None:
cols = df.columns
return df[np.logical_not(np.any(df[cols].isnull().values, axis=1))]
def trans_country_name(x):
    """translate country name to code (2-char)"""
try:
country_name = x.split(',')[0]
if country_name in dictCountryName2Code:
return dictCountryName2Code[country_name]
except:
return None
def parse_additives(x):
    """parse additives column values into a list"""
try:
dict = {}
for item in x.split(']'):
token = item.split('->')[0].replace("[", "").strip()
if token: dict[token] = 1
return [len(dict.keys()), sorted(dict.keys())]
except:
return None
def trans_serving_size(x):
    """pick up gram value from serving_size column"""
try:
serving_g = float((x.split('(')[0]).replace("g", "").strip())
return serving_g
except:
return 0.0
def distplot2x2(cols):
    """make dist. plot on 2x2 grid for up to 4 features"""
sb.set(style="white", palette="muted")
f, axes = plt.subplots(2, 2, figsize=(7, 7), sharex=False)
b, g, r, p = sb.color_palette("muted", 4)
colors = [b, g, r, p]
axis = [axes[0,0],axes[0,1],axes[1,0],axes[1,1]]
for n,col in enumerate(cols):
sb.distplot(food[col].dropna(), hist=True, rug=False, color=colors[n], ax=axis[n])
Explanation: User-defined functions
End of explanation
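To make the helpers above concrete, here is what two of them return for made-up inputs written in the Open Food Facts style (the sample strings are illustrative only, not taken from the dataset):
print(parse_additives("[ e322 -> en:e322 ] [ e330 -> en:e330 ]"))  # [2, ['e322', 'e330']]
print(trans_serving_size("30 g (1 oz)"))                           # 30.0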
food = pd.read_excel("data/openfoodfacts_5k.xlsx")
food.shape
food.columns
food.head()
Explanation: Load dataset
https://www.kaggle.com/openfoodfacts/world-food-facts
This dataset contains Food Nutrition Facts for 100 000+ food products from 150 countries.
<img src=https://static.openfoodfacts.org/images/products/00419796/front_en.3.full.jpg>
End of explanation
# columns_to_keep = ['code','product_name','created_datetime','brands','categories','origins','manufacturing_places','energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','omega-3-fat_100g','omega-6-fat_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','alcohol_100g','vitamin-a_100g','vitamin-c_100g','potassium_100g','chloride_100g','calcium_100g','phosphorus_100g','iron_100g','magnesium_100g','zinc_100g','copper_100g','manganese_100g','fluoride_100g','ingredients_text','countries','countries_en','serving_size','additives','nutrition_grade_fr','nutrition_grade_uk','nutrition-score-fr_100g','nutrition-score-uk_100g','url','image_url','image_small_url']
columns_to_keep = ['code','product_name','created_datetime','brands','energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','vitamin-a_100g','vitamin-c_100g','calcium_100g','iron_100g','ingredients_text','countries','countries_en','serving_size','additives','nutrition_grade_fr','nutrition-score-fr_100g','url']
food = food[columns_to_keep]
Explanation: Pre-processing data
Drop less useful columns
End of explanation
columns_numeric_all = ['energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','omega-3-fat_100g','omega-6-fat_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','alcohol_100g','vitamin-a_100g','vitamin-c_100g','potassium_100g','chloride_100g','calcium_100g','phosphorus_100g','iron_100g','magnesium_100g','zinc_100g','copper_100g','manganese_100g','fluoride_100g','nutrition-score-fr_100g','nutrition-score-uk_100g']
columns_numeric = set(columns_numeric_all) & set(columns_to_keep)
columns_categoric = set(columns_to_keep) - set(columns_numeric)
# turn off
if False:
for col in columns_numeric:
if not col in ['nutrition-score-fr_100g', 'nutrition-score-uk_100g']:
food[col] = food[col].fillna(0)
for col in columns_categoric:
if col in ['nutrition_grade_fr', 'nutrition_grade_uk']:
food[col] = food[col].fillna('-')
else:
food[col] = food[col].fillna('')
# list column names: categoric vs numeric
columns_categoric, columns_numeric
food.head(3)
Explanation: Fix missing values
End of explanation
# standardize country
country_lov = pd.read_excel("../../0.0-Datasets/country_cd.xlsx")
# country_lov.shape
# country_lov.head()
# country_lov[country_lov['GEOGRAPHY_NAME'].str.startswith('United')].head()
# country_lov['GEOGRAPHY_CODE'].tolist()
# country_lov.ix[0,'GEOGRAPHY_CODE'], country_lov.ix[0,'GEOGRAPHY_NAME']
# create 2 dictionaries
dictCountryCode2Name = {}
dictCountryName2Code = {}
for i in country_lov.index:
    dictCountryCode2Name[country_lov.loc[i, 'GEOGRAPHY_CODE']] = country_lov.loc[i, 'GEOGRAPHY_NAME']
    dictCountryName2Code[country_lov.loc[i, 'GEOGRAPHY_NAME']] = country_lov.loc[i, 'GEOGRAPHY_CODE']
# add Country_Code column - pick 1st country from list
food['countries_en'] = food['countries_en'].fillna('')
food['country_code'] = food['countries_en'].apply(str).apply(lambda x: trans_country_name(x))
# add country_code to columns_categoric set
columns_categoric.add('country_code')
# verify bad country
food[food['country_code'] != food['countries']][['country_code', 'countries']].head(20)
food['ingredients_text'].head() # leave as is
Explanation: Standardize country code
End of explanation
# add serving_size in gram column
food['serving_size'].head(10)
food['serving_size'] = food['serving_size'].fillna('')
food['serving_size_gram'] = food['serving_size'].apply(lambda x: trans_serving_size(x))
# add serving_size_gram
columns_numeric.add('serving_size_gram')
food[['serving_size_gram', 'serving_size']].head()
Explanation: Extract serving_size into gram value
End of explanation
food['additives'].head(10)
food['additives'] = food['additives'].fillna('')
food['additive_list'] = food['additives'].apply(lambda x: parse_additives(x))
# add additive_list
columns_categoric.add('additive_list')
food[['additive_list', 'additives']].head()
Explanation: Parse additives
End of explanation
food["creation_date"] = food["created_datetime"].apply(str).apply(lambda x: x[:x.find("T")])
food["year_added"] = food["created_datetime"].dropna().apply(str).apply(lambda x: int(x[:x.find("-")]))
# add creation_date
columns_categoric.add('creation_date')
columns_numeric.add('year_added')
food[['created_datetime', 'creation_date', 'year_added']].head()
# food['product_name']
food.head(3)
columns_numeric
Explanation: Organic or Not
[TODO]
pick up word 'Organic' from product_name column
pick up word 'Organic','org' from ingredients_text column
Add creation_date
End of explanation
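# A minimal sketch for the "Organic or Not" TODO above. The keyword check and the
# resulting column name ('is_organic') are assumptions, not part of the original
# notebook.
def is_organic(row):
    """flag a product as organic if the word appears in its name or ingredients"""
    text = "{} {}".format(row.get('product_name', ''), row.get('ingredients_text', ''))
    return 'organic' in str(text).lower()
food['is_organic'] = food.apply(is_organic, axis=1)
columns_categoric.add('is_organic')
food['is_organic'].value_counts()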
year_added = food['year_added'].value_counts().sort_index()
#year_added
year_i = [int(x) for x in year_added.index]
x_pos = np.arange(len(year_i))
year_added.plot.bar()
plt.xticks(x_pos, year_i)
plt.title("Food labels added per year")
Explanation: Visualize Food features
Food labels yearly trend
End of explanation
TOP_N = 10
dist_country = food['country_code'].value_counts()
top_country = dist_country[:TOP_N][::-1]
country_s = [dictCountryCode2Name[x] for x in top_country.index]
y_pos = np.arange(len(country_s))
top_country.plot.barh()
plt.yticks(y_pos, country_s)
plt.title("Top {} Country Distribution".format(TOP_N))
Explanation: Top countries
End of explanation
# dist_nutri_grade = food['nutrition_grade_uk'].value_counts()
# no value
dist_nutri_grade = food['nutrition_grade_fr'].value_counts()
dist_nutri_grade.sort_index(ascending=False).plot.barh()
plt.title("Nutrition Grade Dist")
Explanation: Nutrition grade
End of explanation
food['nutrition-score-fr_100g'].dropna().plot.hist()
plt.title("{} Dist.".format("Nutri-Score"))
Explanation: Nutrition score
End of explanation
food['serving_size_gram'].dropna().plot.hist()
plt.title("{} Dist.".format("Serving Size (g)"))
Explanation: Serving size
End of explanation
distplot2x2([ 'energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g'])
Explanation: Energy, fat, ...
Energy
Fat
Saturated-Fat
Trans-Fat
End of explanation
distplot2x2(['carbohydrates_100g', 'cholesterol_100g', 'proteins_100g', 'fiber_100g'])
Explanation: Carbohydrates, protein, fiber
Carbohydrates
Cholesterol
Proteins
Fiber
End of explanation
distplot2x2([ 'sugars_100g', 'salt_100g', 'vitamin-a_100g', 'vitamin-c_100g'])
Explanation: Sugar, Vitamins
Sugars
Salt
Vitamin-A
Vitamin-C
End of explanation
distplot2x2(['calcium_100g', 'iron_100g', 'sodium_100g'])
Explanation: Minerals
Calcium
Iron
Sodium
End of explanation
df = food[food["country_code"].isin(['US','FR'])][['energy_100g', 'carbohydrates_100g', 'sugars_100g','country_code']]
df = remove_na_rows(df)
df.head()
sb.pairplot(df, hue="country_code", size=2.5)
Explanation: Explore food label
Are American and French foods different?
End of explanation
# prepare a small dataframe for ['US', 'FR']
df2 = food[food["country_code"].isin(['US','FR'])][['energy_100g', 'sugars_100g','country_code','nutrition_grade_fr']]
df2 = df2[df2["nutrition_grade_fr"].isin(['a','b','c','d','e'])]
df2 = df2.sort_values(by="nutrition_grade_fr")
# df2.head()
# create a grid of scatter plot
g = sb.FacetGrid(df2, row="nutrition_grade_fr", col="country_code", margin_titles=True)
g.map(plt.scatter, "sugars_100g", "energy_100g", color="steelblue")
g.set(xlim=(0, 100), ylim=(0, 3000))
Explanation: Who eats less sweet food?
End of explanation |
6,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to Build a RuleBasedProfiler
This Notebook will demonstrate the steps we need to take to generate a simple RuleBasedProfiler by initializing the components in memory.
We will start from a new Great Expectations Data Context (ie great_expectations folder after running great_expectations init), and begin by adding the Datasource, and progressively adding more components
Step1: Set-up
Step2: BatchRequests
In this example, we will be using two BatchRequests using our Datasource.
single_batch_batch_request
Step3: Example 1
Step4: To continue our example, we will continue building a RuleBasedProfiler using our ColumnDomainBuilder
Build Rule
The first Rule that we build will output expect_column_values_to_not_be_null because it does not take in additional information other than Domain. We will add ParameterBuilders in a subsequent example.
Step5: Create RuleBasedProfiler and add Rule
We create a simple RuleBasedProfiler and add the Rule that we added in the previous step is added to the Profiler. When we run the Profiler, the output is an ExpectationSuite with 4 Expectations, which we expect.
Step6: As expected our simple RuleBasedProfiler will output 4 Expectations, one for each of our 4 columns.
Example 2
Step7: Build a ParameterBuilder
ParameterBuilders help calcluate "reasonable" parameters for Expectations based on data that is specified by a BatchRequest.
The largest categories include
Step8: Build an ExpectationConfigurationBuilder
ExpectationConfigurationBuilder is being built for expect_column_values_to_be_greater_than which will use the column.min values that are calculated using the ParameterBuilder. These are now accessible using the fully qualified parameter $parameter.my_column_min.value[-1]. The [-1] indicates that we will use the min value from the latest Batch (the only Batch in this case since our BatchRequest only returns a single Batch).
Step9: Build a Rule, RuleBasedProfiler, and run
Now we build a rule with our ParameterBuilder, DomainBuilder and ExpectationConfigurationBuilder.
Step10: Add the Rule to our RuleBasedProfiler and run.
Step11: The resulting ExpectationSuite now contain values (-80.0, 0.0 etc) that were calculated from the Batch of data defined by the BatchRequest.
Example 3
Step12: Instantiating RuleBasedProfiler with variables
Pass the variables dictionary into the RuleBasedProfiler constructor.
Step13: Instantiating ColumnDomainBuilder
The ColumnDomainBuilder is instantiated using column names tip_amount and fare_amount. The BatchRequest is passed in as a $variable.
Step14: Instantiating ParameterBuilders
Our Rule will contain 2 NumericMetricRangeMultiBatchParameterBuilders, one for each of our 2 Expectation types. One will be estimating the Parameter values for the column.min Metric, and the other will be estimating Parameter values for the column.max Metric. metric_domain_kwargs are passed in from our DomainBuilder using $domain.domain_kwargs.
Also note the use of 3 Variables we defined above
Step15: Instantiating ExpectationConfigurationBuilders
Our Rule will contain 2 ExpectationConfigurationBuilders, one for each of our 2 Expectation types
Step16: Instantiating RuleBasedProfiler and Running
We instantiate a Rule with our DomainBuilder, ParameterBuilders and ExpectationConfigurationBuilders and load into our RuleBasedProfiler.
Step17: As expected, the resulting ExpectationSuite contains our minimum and maximum values, with tip_amount ranging from $-2.16 to $195.05 (a generous tip), and fare_amount ranging from $-98.90 (a refund) to $405,904.54 (a very very long trip).
Appendix
Here we have additional example configuration of DomainBuilder and ParameterBuilders that were not included in the previous 3 Examples.
DomainBuilders
ColumnDomainBuilder
This DomainBuilder outputs column Domains, which are required by ColumnExpectations like (expect_column_median_to_be_between). There are a few ways that the ColumnDomainBuilder can be used.
In the simplest usecase, the ColumnDomainBuilder can output all columns in the dataset as a Domain, or include/exclude columns if you already know which ones you would like. Column suffixes (like _amount) can be used to select columns of interest, as we saw in our examples above.
The ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text).
Semantic types are defined as an Enum object called SemanticDomainTypes, which can be found here
Step18: In the simplest usecase, the ColumnDomainBuilder can output all of the columns in yellow_tripdata_sample_2018
Step19: Columns can also be included or excluded by name
Step20: As described above, the ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text). This is passed in as part of the include_semantic_types parameter.
Step21: MultiColumnDomainBuilder
This DomainBuilder outputs multicolumn Domains by taking in a column list in the include_column_names parameter.
Step22: ColumnPairDomainBuilder
This DomainBuilder outputs columnpair domains by taking in a column pair list in the include_column_names parameter.
Step23: TableDomainBuilder
This DomainBuilder outputs table Domains, which is required by Expectations that act on tables, like (expect_table_row_count_to_equal, or expect_table_columns_to_match_set).
Step24: MapMetricColumnDomainBuilder
This DomainBuilder allows you to choose columns based on Map Metrics, which give a yes/no answer for individual values or rows. In this example, we use the Map Metrics column_values.nonnull to filter out a column that was all None from taxi_data.
Step25: CategoricalColumnDomainBuilder
This DomainBuilder allows you to choose columns based on their cardinality (number of unique values).The CategoricalColumnDomainBuilder will take in various cardinality_limit_mode values for cardinality, and in this example we are only interested in columns that have "very_few" (less than 10) unique values. For a full of valid modes, along with the associated values, please refer to the CardinalityLimitMode enum in
Step26: ParameterBuilders
ParameterBuilders work under the hood by populating a ParameterContainer, which can also be shared by multiple ParameterBuilders. It requires a Domain, and metric_name, with domain_kwargs accessible from the DomainBuilder using the fully qualified parameter $domain.domain_kwargs.
For the sake of simplicity, we will define a Domain object directly using the Domain() constructor, and pass in a column name within domain_kwargs.
Step27: MetricMultiBatchParameterBuilder
The MetricMultiBatchParameterBuilder computes a Metric on data from one or more batches. It takes domain_kwargs, value_kwargs, and metric_name as arguments.
Step28: my_column_min[value] now contains a list of 12 values, which are the minimum values the total_amount column for each of the 12 Batches associated with 2018 taxi_data data. If we were to use the values in a ExpectationConfigurationBuilder, it would be accessible through the fully-qualified parameter
Step29: my_value_set[value] now contains a list of 3 values, which is a list of all unique vendor_ids across 12 Batches in the 2018 taxi_data dataset.
RegexPatternStringParameterBuilder
The RegexPatternStringParameterBuilder contains a set of default regex patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
Step30: vendor_id is a single integer. Let's see if our default patterns can match it.
Step31: Looks like my_regex_set[value] is an empty list. This means that none of the evaluated_regexes matched our domain. Let's try the same thing again, but this time with a regex that will match our vendor_id column. ^\\d{1}$ and ^\\d{2}$ which will match 1 or 2 digit integers anchored at the beginning and end of the string.
Step32: Now my_regex_set[value] contains ^\\d{1}$.
SimpleDateFormatStringParameterBuilder
The SimpleDateFormatStringParameterBuilder contains a set of default Datetime format patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
Step33: The result contains our matching datetime pattern, which is '%Y-%m-%d %H
Step34: As we see, the mean value range for the total_amount column is 16.0 to 44.0
Optional | Python Code:
import great_expectations as ge
from ruamel import yaml
from great_expectations.core.batch import BatchRequest
from great_expectations.rule_based_profiler.rule.rule import Rule
from great_expectations.rule_based_profiler.rule_based_profiler import RuleBasedProfiler, RuleBasedProfilerResult
from great_expectations.rule_based_profiler.domain_builder import (
DomainBuilder,
ColumnDomainBuilder,
)
from great_expectations.rule_based_profiler.parameter_builder import (
MetricMultiBatchParameterBuilder,
)
from great_expectations.rule_based_profiler.expectation_configuration_builder import (
DefaultExpectationConfigurationBuilder,
)
data_context: ge.DataContext = ge.get_context()
Explanation: How to Build a RuleBasedProfiler
This Notebook will demonstrate the steps we need to take to generate a simple RuleBasedProfiler by initializing the components in memory.
We will start from a new Great Expectations Data Context (i.e. the great_expectations folder after running great_expectations init), and begin by adding the Datasource, then progressively add more components.
End of explanation
data_path: str = "../../../../test_sets/taxi_yellow_tripdata_samples"
datasource_config = {
"name": "taxi_multi_batch_datasource",
"class_name": "Datasource",
"module_name": "great_expectations.datasource",
"execution_engine": {
"module_name": "great_expectations.execution_engine",
"class_name": "PandasExecutionEngine",
},
"data_connectors": {
"default_inferred_data_connector_name": {
"class_name": "InferredAssetFilesystemDataConnector",
"base_directory": data_path,
"default_regex": {
"group_names": ["data_asset_name", "month"],
"pattern": "(yellow_tripdata_sample_2018)-(\\d.*)\\.csv",
},
},
"default_inferred_data_connector_name_all_years": {
"class_name": "InferredAssetFilesystemDataConnector",
"base_directory": data_path,
"default_regex": {
"group_names": ["data_asset_name", "year", "month"],
"pattern": "(yellow_tripdata_sample)_(\\d.*)-(\\d.*)\\.csv",
},
},
},
}
data_context.test_yaml_config(yaml.dump(datasource_config))
# add_datasource only if it doesn't already exist in our configuration
try:
data_context.get_datasource(datasource_config["name"])
except ValueError:
data_context.add_datasource(**datasource_config)
Explanation: Set-up: Adding taxi_data Datasource
Add taxi_data as a new Datasource
We are using an InferredAssetFilesystemDataConnector to connect to data in the test_sets/taxi_yellow_tripdata_samples folder and get one DataAsset (yellow_tripdata_sample_2018) that has 12 Batches (1 Batch/month).
End of explanation
single_batch_batch_request: BatchRequest = BatchRequest(
datasource_name="taxi_multi_batch_datasource",
data_connector_name="default_inferred_data_connector_name",
data_asset_name="yellow_tripdata_sample_2018",
data_connector_query={"index": -1},
)
multi_batch_batch_request: BatchRequest = BatchRequest(
datasource_name="taxi_multi_batch_datasource",
data_connector_name="default_inferred_data_connector_name",
data_asset_name="yellow_tripdata_sample_2018",
)
Explanation: BatchRequests
In this example, we will be using two BatchRequests using our Datasource.
single_batch_batch_request : which gives the most recent (December) data from the 2018 taxi_data dataset.
multi_batch_batch_request: which gives all 12 Batches of data from the 2018 taxi_data dataset.
End of explanation
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_name_suffixes=["_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
# assert that the domains we get are the ones we expect
assert len(domains) == 4
assert domains == [
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "fare_amount"}, "details": {"inferred_semantic_domain_type": {"fare_amount": "numeric",}},},
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "tip_amount"}, "details": {"inferred_semantic_domain_type": {"tip_amount": "numeric",}},},
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "tolls_amount"}, "details": {"inferred_semantic_domain_type": {"tolls_amount": "numeric",}},},
{"rule_name": "my_rule", "domain_type": "column", "domain_kwargs": {"column": "total_amount"}, "details": {"inferred_semantic_domain_type": {"total_amount": "numeric",}},},
]
Explanation: Example 1: RuleBasedProfiler with just a DomainBuilder and ExpectationConfigurationBuilder
Build a DomainBuilder
In the process of building a RuleBasedProfiler, one of the first components we want to build/test
is a DomainBuilder, which returns the Domains (tables, columns, sets of columns, etc.) that our resulting Expectations will be run on. In our example, the DomainBuilder will output a list of columns that follow a certain pattern, namely those with an '_amount' suffix. To this end we will be using a ColumnDomainBuilder, which allows you to choose columns based on their suffix, name, or semantic type (like numeric or string); our DomainBuilder will output a list of 4 columns: fare_amount, tip_amount, tolls_amount and total_amount.
The RuleBasedProfiler also contains additional DomainBuilders that allow you to do more sophisticated filtering on your data.
These include:
* CategoricalColumnDomainBuilder: which allows you to choose columns based on their cardinality (number of unique values).
* MapMetricColumnDomainBuilder: which allows you to choose columns based on Map Metrics, which give a yes/no answer for individual values or rows.
In addition, there are DomainBuilders that do not perform any additional filtering, but are required by the Expectations that are being built by the RuleBasedProfiler.
* TableDomainBuilder: Outputs Table Domain, which is required by Expectations that act on tables, like (expect_table_row_count_to_equal, or expect_table_columns_to_match_set).
ColumnDomainBuilder
End of explanation
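# Optional sanity check (a sketch): each Domain carries its column name in
# domain_kwargs, which is what the ExpectationConfigurationBuilder below will
# reference through $domain.domain_kwargs.column.
for d in domains:
    print(d["domain_kwargs"]["column"])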
default_expectation_configuration_builder = DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_values_to_not_be_null",
column="$domain.domain_kwargs.column", # Get the column from domain_kwargs that are retrieved from the DomainBuilder
)
simple_rule: Rule = Rule(
name="rule_with_no_parameters",
variables=None,
domain_builder=domain_builder,
expectation_configuration_builders=[default_expectation_configuration_builder],
)
Explanation: To continue our example, we will continue building a RuleBasedProfiler using our ColumnDomainBuilder
Build Rule
The first Rule that we build will output expect_column_values_to_not_be_null because it does not take in additional information other than Domain. We will add ParameterBuilders in a subsequent example.
End of explanation
from great_expectations.core import ExpectationSuite
from great_expectations.rule_based_profiler.rule_based_profiler import RuleBasedProfiler
my_rbp: RuleBasedProfiler = RuleBasedProfiler(
name="my_simple_rbp", data_context=data_context, config_version=1.0
)
my_rbp.add_rule(rule=simple_rule)
profiler_result: RuleBasedProfilerResult
profiler_result = my_rbp.run(batch_request=single_batch_batch_request)
assert len(profiler_result.expectation_configurations) == 4
profiler_result.expectation_configurations
Explanation: Create RuleBasedProfiler and add Rule
We create a simple RuleBasedProfiler and add the Rule that we added in the previous step is added to the Profiler. When we run the Profiler, the output is an ExpectationSuite with 4 Expectations, which we expect.
End of explanation
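# A small sketch for inspecting the result: each item in
# profiler_result.expectation_configurations is an ExpectationConfiguration, so we
# can list the generated expectation types and the columns they apply to.
for expectation_config in profiler_result.expectation_configurations:
    print(expectation_config.expectation_type, expectation_config.kwargs.get("column"))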
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_name_suffixes=["_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
domains
Explanation: As expected our simple RuleBasedProfiler will output 4 Expectations, one for each of our 4 columns.
Example 2: RuleBasedProfiler with DomainBuilder, ParameterBuilder ExpectationConfigurationBuilder
Build a DomainBuilder
Using the same ColumnDomainBuilder from our previous example.
End of explanation
numeric_range_parameter_builder: MetricMultiBatchParameterBuilder = (
MetricMultiBatchParameterBuilder(
data_context=data_context,
metric_name="column.min",
metric_domain_kwargs="$domain.domain_kwargs", # domain kwarg values are accessible using fully qualified parameters
name="my_column_min",
)
)
Explanation: Build a ParameterBuilder
ParameterBuilders help calculate "reasonable" parameters for Expectations based on data that is specified by a BatchRequest.
The largest categories include:
- metric_multi_batch_parameter_builder: Which is able to calculate a numeric Metric (like column.min) across multiple Batches (or just one Batch).
- value_set_multi_batch_parameter_builder: Which is able to build a value set across multiple Batches (or just one Batch).
In some cases, there is a better way to build a value set using regex or dates.
- regex_pattern_string_parameter_builder: Which contains a set of default regex patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
- simple_date_format_string_parameter_builder: Which contains a set of default datetime-format patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
Across multiple-Batches, we can build more-sophisticated parameters by using sampling methods.
- numeric_range_multi_batch_parameter_builder: Which is able to provide range estimations across Batches using sampling methods. For instance, if we expect a table's row_count to change between Batches, we could calculate the min / max values of row_count by using the NumericMetricRangeMultiBatchParameterBuilder. These parameters could then be used by ExpectTableRowCountToBeBetween
In our example we will be using a MetricMultiBatchParameterBuilder to estimate the column.min Metric for the 4 columns defined by our Domain Builder. These are passed in as metric_domain_kwargs and are accessible using the fully qualified parameter $domain.domain_kwargs.
End of explanation
config_builder: DefaultExpectationConfigurationBuilder = (
DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_values_to_be_greater_than",
value="$parameter.my_column_min.value[-1]", # the parameter is accessible using a fully qualified parameter
column="$domain.domain_kwargs.column", # domain kwarg values are accessible using fully qualified parameters
name="my_column_min",
)
)
Explanation: Build an ExpectationConfigurationBuilder
ExpectationConfigurationBuilder is being built for expect_column_values_to_be_greater_than which will use the column.min values that are calculated using the ParameterBuilder. These are now accessible using the fully qualified parameter $parameter.my_column_min.value[-1]. The [-1] indicates that we will use the min value from the latest Batch (the only Batch in this case since our BatchRequest only returns a single Batch).
End of explanation
simple_rule: Rule = Rule(
name="rule_with_parameters",
variables=None,
domain_builder=domain_builder,
parameter_builders=[numeric_range_parameter_builder],
expectation_configuration_builders=[config_builder],
)
my_rbp = RuleBasedProfiler(name="my_rbp", data_context=data_context, config_version=1.0)
Explanation: Build a Rule, RuleBasedProfiler, and run
Now we build a rule with our ParameterBuilder, DomainBuilder and ExpectationConfigurationBuilder.
End of explanation
my_rbp.add_rule(rule=simple_rule)
profiler_result = my_rbp.run(batch_request=single_batch_batch_request)
assert len(profiler_result.expectation_configurations) == 4
profiler_result.expectation_configurations
Explanation: Add the Rule to our RuleBasedProfiler and run.
End of explanation
variables: dict = {
"multi_batch_batch_request": multi_batch_batch_request,
"estimator_name": "bootstrap",
"false_positive_rate": 5.0e-2,
}
Explanation: The resulting ExpectationSuite now contain values (-80.0, 0.0 etc) that were calculated from the Batch of data defined by the BatchRequest.
Example 3: RuleBasedProfiler with multiple ParameterBuilders, ExpectationConfigurationBuilders and Variables
The third example is more complex, using multiple batches, multiple ParameterBuilders, ExpectationConfigurationBuilders and also introducing the use of variables.
The goal of this example is to build a RuleBasedProfiler that outputs an ExpectationSuite containing 2 Expectation types
- expect_column_min_to_be_between : Defined as "Expect the column minimum to be between a min and max value".
expect_column_max_to_be_between : Defined as "Expect the column maximum to be between a min and max value".
for 2 columns in our taxi_data dataset
- fare_amount
- tip_amount
with the min_value and max_value parameters for each of the Expectations estimated over 12 Batches of taxi_data, for a total of 4 Expectations.
To estimate the parameters, we will be using a NumericMetricRangeMultiBatchParameterBuilder, which is able to provide range estimations across Batches using sampling methods. We will also be using a variables dictionary to share defined variables across Rule components like DomainBuilders, ParameterBuilders and ExpectationConfigurationBuilders.
Instantiating variables dictionary
RuleBasedProfilers allow for the definition of variables, which can be shared across Rules and Rule components. When building a complex RuleBasedProfiler with multiple Rules or components, using variables will help you keep track of values without having to input them multiple times.
Once loaded into the RuleBasedProfiler configuration, the variables are accessible using the fully qualified variable name $variables.[key_in_variables_dictionary], similar to how domain kwarg values and parameter values are accessible using a fully qualified name that begins with $.
In the example below, the estimator_name is accessible using $variables.estimator_name.
End of explanation
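# Illustration only: each key of the variables dictionary becomes accessible to
# rule components through a "$variables.<key>" string.
for key in variables:
    print("$variables.{}".format(key))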
my_rbp = RuleBasedProfiler(name="my_complex_rbp", data_context=data_context, variables=variables, config_version=1.0)
Explanation: Instantiating RuleBasedProfiler with variables
Pass the variables dictionary into the RuleBasedProfiler constructor.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import ColumnDomainBuilder
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_names=["tip_amount", "fare_amount"],
data_context=data_context,
)
Explanation: Instantiating ColumnDomainBuilder
The ColumnDomainBuilder is instantiated using the column names tip_amount and fare_amount. The BatchRequest is supplied later, when the RuleBasedProfiler is run.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import NumericMetricRangeMultiBatchParameterBuilder
min_range_parameter_builder: NumericMetricRangeMultiBatchParameterBuilder = NumericMetricRangeMultiBatchParameterBuilder(
name="min_range_parameter_builder",
metric_name="column.min",
metric_domain_kwargs="$domain.domain_kwargs",
false_positive_rate='$variables.false_positive_rate',
estimator="$variables.estimator_name",
data_context=data_context,
)
max_range_parameter_builder: NumericMetricRangeMultiBatchParameterBuilder = NumericMetricRangeMultiBatchParameterBuilder(
name="max_range_parameter_builder",
metric_name="column.max",
metric_domain_kwargs="$domain.domain_kwargs",
false_positive_rate="$variables.false_positive_rate",
estimator="$variables.estimator_name",
data_context=data_context,
)
Explanation: Instantiating ParameterBuilders
Our Rule will contain 2 NumericMetricRangeMultiBatchParameterBuilders, one for each of our 2 Expectation types. One will be estimating the Parameter values for the column.min Metric, and the other will be estimating Parameter values for the column.max Metric. metric_domain_kwargs are passed in from our DomainBuilder using $domain.domain_kwargs.
Also note the use of 3 Variables we defined above:
$variables.estimator_name: This is "bootstrap" in our case.
$variables.false_positive_rate: This is 5.0e-2 or 5% in our case.
$variables.multi_batch_batch_request: This is the multi_batch_batch_request, which gives all 12 Batches of data from the 2018 taxi_data dataset.
End of explanation
expect_column_min: DefaultExpectationConfigurationBuilder = DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_min_to_be_between",
column="$domain.domain_kwargs.column",
min_value="$parameter.min_range_parameter_builder.value[0]",
max_value="$parameter.min_range_parameter_builder.value[1]",
)
expect_column_max: DefaultExpectationConfigurationBuilder = DefaultExpectationConfigurationBuilder(
expectation_type="expect_column_max_to_be_between",
column="$domain.domain_kwargs.column",
min_value="$parameter.max_range_parameter_builder.value[0]",
max_value="$parameter.max_range_parameter_builder.value[1]",
)
Explanation: Instantiating ExpectationConfigurationBuilders
Our Rule will contain 2 ExpectationConfigurationBuilders, one for each of our 2 Expectation types:
expect_column_min_to_be_between
expect_column_max_to_be_between
The Expectations are both ColumnExpectations, so the column parameter will be accessed from the Domain kwargs using $domain.domain_kwargs.column.
The Expectations also take in a min_value and max_value parameter, which our NumericMetricRangeMultiBatchParameterBuilders are estimating. For expect_column_min_to_be_between, these estimated values are accessible using
$parameter.min_range_parameter_builder.value[0] for the min_value, with min_range_parameter_builder being the name of our ParameterBuilder that estimates the column.min metric.
$parameter.min_range_parameter_builder.value[1] for the max_value.
The equivalent parameters for expect_column_max_to_be_between would be $parameter.max_range_parameter_builder.value[0] and $parameter.max_range_parameter_builder.value[1], respectively.
End of explanation
more_complex_rule: Rule = Rule(
name="rule_with_parameters",
variables=None,
domain_builder=domain_builder,
parameter_builders=[min_range_parameter_builder, max_range_parameter_builder],
expectation_configuration_builders=[expect_column_min, expect_column_max],
)
my_rbp.add_rule(rule=more_complex_rule)
profiler_result = my_rbp.run(batch_request=multi_batch_batch_request)
profiler_result.expectation_configurations
Explanation: Instantiating RuleBasedProfiler and Running
We instantiate a Rule with our DomainBuilder, ParameterBuilders and ExpectationConfigurationBuilders and load into our RuleBasedProfiler.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import ColumnDomainBuilder
Explanation: As expected, the resulting ExpectationSuite contains our minimum and maximum values, with tip_amount ranging from $-2.16 to $195.05 (a generous tip), and fare_amount ranging from $-98.90 (a refund) to $405,904.54 (a very very long trip).
Appendix
Here we have additional example configuration of DomainBuilder and ParameterBuilders that were not included in the previous 3 Examples.
DomainBuilders
ColumnDomainBuilder
This DomainBuilder outputs column Domains, which are required by ColumnExpectations like (expect_column_median_to_be_between). There are a few ways that the ColumnDomainBuilder can be used.
In the simplest use case, the ColumnDomainBuilder can output all columns in the dataset as a Domain, or include/exclude columns if you already know which ones you would like. Column suffixes (like _amount) can be used to select columns of interest, as we saw in our examples above.
The ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text).
Semantic types are defined as an Enum object called SemanticDomainTypes, which can be found here : https://github.com/great-expectations/great_expectations/blob/develop/great_expectations/rule_based_profiler/types/domain.py
End of explanation
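# A sketch for listing the available semantic types; the import path below is an
# assumption based on the module linked above and may vary between versions.
from great_expectations.rule_based_profiler.types.domain import SemanticDomainTypes
print([semantic_type.value for semantic_type in SemanticDomainTypes])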
domain_builder: DomainBuilder = ColumnDomainBuilder(
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 18 # all columns in yellow_tripdata_sample_2018
Explanation: In the simplest use case, the ColumnDomainBuilder can output all of the columns in yellow_tripdata_sample_2018
End of explanation
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_column_names=["vendor_id"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
domains
domain_builder: DomainBuilder = ColumnDomainBuilder(
exclude_column_names=["vendor_id"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 17 # all columns in yellow_tripdata_sample_2018 with vendor_id excluded
domains
Explanation: Columns can also be included or excluded by name
End of explanation
domain_builder: DomainBuilder = ColumnDomainBuilder(
include_semantic_types=['numeric'],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 15 # columns in yellow_trip_data_sample_2018 that are numeric
Explanation: As described above, the ColumnDomainBuilder also allows you to choose columns based on their semantic types (such as numeric, or text). This is passed in as part of the include_semantic_types parameter.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import MultiColumnDomainBuilder
domain_builder: DomainBuilder = MultiColumnDomainBuilder(
include_column_names=["vendor_id", "fare_amount", "tip_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 1 # 3 columns are part of a single multi-column domain.
expected_columns: list = ["vendor_id", "fare_amount", "tip_amount"]
assert domains[0]["domain_kwargs"]["column_list"] == expected_columns
Explanation: MultiColumnDomainBuilder
This DomainBuilder outputs multicolumn Domains by taking in a column list in the include_column_names parameter.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import ColumnPairDomainBuilder
domain_builder: DomainBuilder = ColumnPairDomainBuilder(
include_column_names=["vendor_id", "fare_amount"],
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 1 # 2 columns are part of a single multi-column domain.
expect_columns_dict: dict = {'column_A': 'fare_amount', 'column_B': 'vendor_id'}
assert domains[0]["domain_kwargs"] == expect_columns_dict
Explanation: ColumnPairDomainBuilder
This DomainBuilder outputs column pair Domains by taking in a column pair list in the include_column_names parameter.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import TableDomainBuilder
domain_builder: DomainBuilder = TableDomainBuilder(
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
domains
Explanation: TableDomainBuilder
This DomainBuilder outputs table Domains, which is required by Expectations that act on tables, like (expect_table_row_count_to_equal, or expect_table_columns_to_match_set).
End of explanation
from great_expectations.rule_based_profiler.domain_builder import MapMetricColumnDomainBuilder
domain_builder: DomainBuilder = MapMetricColumnDomainBuilder(
map_metric_name="column_values.nonnull",
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 17  # filtered out the 1 column that was all None
Explanation: MapMetricColumnDomainBuilder
This DomainBuilder allows you to choose columns based on Map Metrics, which give a yes/no answer for individual values or rows. In this example, we use the Map Metrics column_values.nonnull to filter out a column that was all None from taxi_data.
End of explanation
from great_expectations.rule_based_profiler.domain_builder import CategoricalColumnDomainBuilder
domain_builder: DomainBuilder = CategoricalColumnDomainBuilder(
cardinality_limit_mode="very_few", # VERY_FEW = 10 or less
data_context=data_context,
)
domains: list = domain_builder.get_domains(rule_name="my_rule", batch_request=single_batch_batch_request)
assert len(domains) == 7
Explanation: CategoricalColumnDomainBuilder
This DomainBuilder allows you to choose columns based on their cardinality (number of unique values).The CategoricalColumnDomainBuilder will take in various cardinality_limit_mode values for cardinality, and in this example we are only interested in columns that have "very_few" (less than 10) unique values. For a full of valid modes, along with the associated values, please refer to the CardinalityLimitMode enum in:
https://github.com/great-expectations/great_expectations/blob/develop/great_expectations/rule_based_profiler/helpers/cardinality_checker.py
End of explanation
from great_expectations.rule_based_profiler.types.domain import Domain
from great_expectations.execution_engine.execution_engine import MetricDomainTypes
from great_expectations.rule_based_profiler.types import ParameterContainer
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'total_amount'})
Explanation: ParameterBuilders
ParameterBuilders work under the hood by populating a ParameterContainer, which can also be shared by multiple ParameterBuilders. It requires a Domain, and metric_name, with domain_kwargs accessible from the DomainBuilder using the fully qualified parameter $domain.domain_kwargs.
For the sake of simplicity, we will define a Domain object directly using the Domain() constructor, and pass in a column name within domain_kwargs.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import MetricMultiBatchParameterBuilder
numeric_range_parameter_builder: MetricMultiBatchParameterBuilder = (
MetricMultiBatchParameterBuilder(
data_context=data_context,
metric_name="column.min",
metric_domain_kwargs=domain.domain_kwargs,
name="my_column_min",
)
)
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters = {
domain.id: parameter_container,
}
numeric_range_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=multi_batch_batch_request,
)
# we check the parameter container
print(parameter_container.parameter_nodes)
min(parameter_container.parameter_nodes["parameter"]["parameter"]["my_column_min"]["value"])
Explanation: MetricMultiBatchParameterBuilder
The MetricMultiBatchParameterBuilder computes a Metric on data from one or more batches. It takes domain_kwargs, value_kwargs, and metric_name as arguments.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import ValueSetMultiBatchParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'vendor_id'})
# instantiating a new parameter container, since it can contain the results of more than one ParameterBuilder.
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
value_set_parameter_builder: ValueSetMultiBatchParameterBuilder = (
ValueSetMultiBatchParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
name="my_value_set",
)
)
value_set_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=multi_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: my_column_min[value] now contains a list of 12 values, which are the minimum values of the total_amount column for each of the 12 Batches associated with the 2018 taxi_data data. If we were to use the values in an ExpectationConfigurationBuilder, they would be accessible through the fully-qualified parameter: $parameter.my_column_min.value.
ValueSetMultiBatchParameterBuilder
The ValueSetMultiBatchParameterBuilder is able to build a value set across multiple Batches (or just one Batch).
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import RegexPatternStringParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'vendor_id'})
Explanation: my_value_set[value] now contains a list of 3 values, which is a list of all unique vendor_ids across 12 Batches in the 2018 taxi_data dataset.
RegexPatternStringParameterBuilder
The RegexPatternStringParameterBuilder contains a set of default regex patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
End of explanation
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
regex_parameter_builder: RegexPatternStringParameterBuilder = (
RegexPatternStringParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
name="my_regex_set",
)
)
regex_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=single_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: vendor_id is a single integer. Let's see if our default patterns can match it.
End of explanation
regex_parameter_builder: RegexPatternStringParameterBuilder = (
RegexPatternStringParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
candidate_regexes=["^\\d{1}$"],
name="my_regex_set",
)
)
regex_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=single_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: Looks like my_regex_set[value] is an empty list. This means that none of the evaluated regexes matched our domain. Let's try the same thing again, but this time with a regex that will match our vendor_id column: ^\\d{1}$, which matches single-digit integers anchored at the beginning and end of the string.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import SimpleDateFormatStringParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'pickup_datetime'})
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
simple_date_format_string_parameter_builder: SimpleDateFormatStringParameterBuilder = (
SimpleDateFormatStringParameterBuilder(
data_context=data_context,
metric_domain_kwargs=domain.domain_kwargs,
name="my_value_set",
)
)
simple_date_format_string_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=single_batch_batch_request,
)
print(parameter_container.parameter_nodes)
parameter_container.parameter_nodes["parameter"]["parameter"]["my_value_set"]["value"]
Explanation: Now my_regex_set[value] contains ^\\d{1}$.
SimpleDateFormatStringParameterBuilder
The SimpleDateFormatStringParameterBuilder contains a set of default Datetime format patterns and builds a value set of the best-matching patterns. Users are also able to pass in new patterns as a parameter.
End of explanation
from great_expectations.rule_based_profiler.parameter_builder import NumericMetricRangeMultiBatchParameterBuilder
domain: Domain = Domain(rule_name="my_rule", domain_type=MetricDomainTypes.COLUMN, domain_kwargs = {'column': 'total_amount'})
parameter_container: ParameterContainer = ParameterContainer(parameter_nodes=None)
parameters[domain.id] = parameter_container
numeric_metric_range_parameter_builder: NumericMetricRangeMultiBatchParameterBuilder = NumericMetricRangeMultiBatchParameterBuilder(
name="column_mean_range",
metric_name="column.mean",
estimator="bootstrap",
metric_domain_kwargs=domain.domain_kwargs,
false_positive_rate=1.0e-2,
round_decimals=0,
data_context=data_context,
)
numeric_metric_range_parameter_builder.build_parameters(
domain=domain,
parameters=parameters,
batch_request=multi_batch_batch_request,
)
print(parameter_container.parameter_nodes)
Explanation: The result contains our matching datetime pattern, which is '%Y-%m-%d %H:%M:%S'
NumericMetricRangeMultiBatchParameterBuilder
The NumericMetricRangeMultiBatchParameterBuilder is able to provide range estimations across Batches using sampling methods. For instance, if we expect a table's row_count to change between Batches, we could calculate the min / max values of row_count by using the NumericMetricRangeMultiBatchParameterBuilder. These parameters could then be used by Expectations that take in ranges, like ExpectTableRowCountToBeBetween, or ExpectColumnValuesToBeBetween.
In this example, we will take a single Metric, column.mean, and calculate it for a single column, total_amount. The parameter we will build is the column mean range, i.e. the min-max values of the total_amount column estimated across random samples of the 12 Batches of the 2018 taxi_data dataset.
We will also be passing in specifications for estimator, namely bootstrap sampling with a false-positive rate of less than 0.01.
End of explanation
#import shutil
# clean up Expectations directory after running tests
#shutil.rmtree("great_expectations/expectations/tmp")
#os.remove("great_expectations/expectations/.ge_store_backend_id")
Explanation: As we see, the mean value range for the total_amount column is 16.0 to 44.0
Optional: Clean-up Directory
As part of running this notebook, the RuleBasedProfiler will create a number of ExpectationSuite configurations in the great_expectations/expectations/tmp directory. Optionally run the following cell to clean up the directory.
End of explanation |
6,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q2
In this question, we'll explore the basics of using NumPy arrays. We'll also start using functions as they were intended
Step1: Part B
Write a function which takes a NumPy array and returns another NumPy array with all its elements squared.
Your function should
Step2: Part C
Write a function which computes the sum of the elements of a NumPy array.
Your function should
Step3: Part D
You may not have realized it yet, but in the previous three parts, you've implemented almost all of what's needed to compute the Euclidean distance between two vectors, as represented with NumPy arrays. All you have to do now is link the code you wrote in the previous three parts together in the right order.
Write a function which takes two NumPy arrays and computes their distance. Your function should
Step4: Part E
Now, you'll use your distance function to find the pair of vectors that are closest to each other. This is a very, very common problem in data science | Python Code:
import numpy as np
np.random.seed(578435)
x11 = np.random.random(10)
x12 = np.random.random(10)
d1 = np.array([ 0.24542374, 0.19098998, 0.20645088, 0.49097139, -0.56594091,
-0.13363814, 0.46859546, -0.32476466, -0.35938731, 0.17459786])
np.testing.assert_allclose(d1, difference(x11, x12))
np.random.seed(85743)
x21 = np.random.random(20)
x22 = np.random.random(20)
d2 = np.array([-0.17964925, -0.57573602, 0.00109792, -0.06535934, 0.51321497,
0.63854404, -0.17318834, 0.05553455, 0.08780665, -0.12503945,
0.08794238, -0.53157235, -0.1133253 , 0.34861933, 0.67987286,
0.01188672, 0.2099561 , -0.40800005, -0.28166673, -0.35814679])
np.testing.assert_allclose(d2, difference(x21, x22), rtol = 1e-05)
try:
difference(np.array([1, 2, 3]), np.array([4, 5, 6, 7]))
except ValueError:
assert True
else:
assert False
Explanation: Q2
In this question, we'll explore the basics of using NumPy arrays. We'll also start using functions as they were intended: incremental building blocks to simplify a larger task. This means the solution to each part may be used in future parts. I've tried to make these constituent components fairly easy, but if you run into problems, please ask for help!
Remember: when using a solution you wrote from a previous part, you DO NOT need to copy it from its original cell into the cell you're currently working on! By having clicked the "Play" button on the cell with the code you want to use, you've essentially "saved" it into Python, so all you have to do is call it like you would any other function; no need to copy/paste it!
Part A
NumPy arrays are wonderful improvements over native Python lists for many reasons, the biggest of which is its ability to perform "vectorized" operations over entire arrays without having to write loops.
Write a function which takes two NumPy arrays as arguments and returns their difference (in order of the arguments themselves; if you're getting an AssertionError, try flipping the ordering of the arguments in your function).
Your function should:
be named difference
take two arguments: both NumPy arrays of floats
return 1 NumPy array, containing the element-wise difference of the two vectors (second array from first array).
You will need to check if the arrays are the same length; if not, raise a ValueError.
You cannot use any loops, built-in functions, or NumPy functions.
End of explanation
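# One possible implementation (a sketch) of the function exercised above; in the
# actual notebook it would be defined before the test cell runs.
def difference(a, b):
    if a.shape[0] != b.shape[0]:
        raise ValueError("Arrays must have the same length.")
    return a - b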
import numpy as np
np.random.seed(13735)
x1 = np.random.random(10)
y1 = np.array([ 0.10729775, 0.01234453, 0.37878359, 0.12131263, 0.89916465,
0.50676134, 0.9927178 , 0.20673811, 0.88873398, 0.09033156])
np.testing.assert_allclose(y1, squares(x1), rtol = 1e-06)
np.random.seed(7853)
x2 = np.random.random(35)
y2 = np.array([ 7.70558043e-02, 1.85146792e-01, 6.98666869e-01,
9.93510847e-02, 1.94026134e-01, 8.43335268e-02,
1.84097846e-04, 3.74604155e-03, 7.52840504e-03,
9.34739871e-01, 3.15736597e-01, 6.73512540e-02,
9.61011706e-02, 7.99394100e-01, 2.18175433e-01,
4.87808337e-01, 5.36032332e-01, 3.26047002e-01,
8.86429452e-02, 5.66360150e-01, 9.06164054e-01,
1.73105310e-01, 5.02681242e-01, 3.07929118e-01,
7.08507520e-01, 4.95455022e-02, 9.89891434e-02,
8.94874125e-02, 4.56261817e-01, 9.46454001e-01,
2.62274636e-01, 1.79655411e-01, 3.81695141e-01,
5.66890651e-01, 8.03936029e-01])
np.testing.assert_allclose(y2, squares(x2))
Explanation: Part B
Write a function which takes a NumPy array and returns another NumPy array with all its elements squared.
Your function should:
be named squares
take 1 argument: a NumPy array
return 1 value: a NumPy array where each element is the squared version of the input array
You cannot use any loops, built-in functions, or NumPy functions.
End of explanation
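# A sketch of the squaring function tested above (element-wise, no loops needed).
def squares(a):
    return a ** 2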
import numpy as np
np.random.seed(7631)
x1 = np.random.random(483)
s1 = 233.48919473752667
np.testing.assert_allclose(s1, sum_of_elements(x1))
np.random.seed(13275)
x2 = np.random.random(23)
s2 = 12.146235770777777
np.testing.assert_allclose(s2, sum_of_elements(x2))
Explanation: Part C
Write a function which computes the sum of the elements of a NumPy array.
Your function should:
be named sum_of_elements
take 1 argument: a NumPy array
return 1 floating-point value: the sum of the elements in the NumPy array
You cannot use any loops, but you can use the numpy.sum function.
End of explanation
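# A sketch of the summation function tested above; numpy.sum is explicitly allowed.
def sum_of_elements(a):
    return np.sum(a)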
import numpy as np
import numpy.linalg as nla
np.random.seed(477582)
x11 = np.random.random(10)
x12 = np.random.random(10)
np.testing.assert_allclose(nla.norm(x11 - x12), distance(x11, x12))
np.random.seed(54782)
x21 = np.random.random(584)
x22 = np.random.random(584)
np.testing.assert_allclose(nla.norm(x21 - x22), distance(x21, x22))
Explanation: Part D
You may not have realized it yet, but in the previous three parts, you've implemented almost all of what's needed to compute the Euclidean distance between two vectors, as represented with NumPy arrays. All you have to do now is link the code you wrote in the previous three parts together in the right order.
Write a function which takes two NumPy arrays and computes their distance. Your function should:
be named distance
take 2 arguments: both NumPy arrays of the same length
return 1 number: a non-zero floating point value that is the distance between the two arrays
Remember how Euclidean distance $d$ between two vectors $\vec{a}$ and $\vec{b}$ is calculated:
$$
d(\vec{a}, \vec{b}) = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + ... + (a_n - b_n) ^2}
$$
where $a_1$ and $b_1$ are the first elements of the arrays $\vec{a}$ and $\vec{b}$; $a_2$ and $b_2$ are the second elements, and so on.
You've already implemented everything except the square root; in addition to that, you just need to arrange the functions you've written in the correct order inside your distance function. Aside from calling your functions from Parts A-C, there is VERY LITTLE ORIGINAL CODE you'll need to write here! The tricky part is understanding how to make all these parts work together.
You cannot use any functions aside from those you've already written.
End of explanation
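# A sketch of the distance function tested above, composed from the earlier parts;
# np.sqrt is the only new piece.
def distance(a, b):
    return np.sqrt(sum_of_elements(squares(difference(a, b))))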
import numpy as np
r1 = np.array([1, 1])
l1 = [np.array([1, 1]), np.array([2, 2]), np.array([3, 3])]
a1 = 0.0
np.testing.assert_allclose(a1, similarity_search(r1, l1))
np.random.seed(7643)
r2 = np.random.random(2) * 100
l2 = [np.random.random(2) * 100 for i in range(100)]
a2 = 1.6077074397123927
np.testing.assert_allclose(a2, similarity_search(r2, l2))
Explanation: Part E
Now, you'll use your distance function to find the pair of vectors that are closest to each other. This is a very, very common problem in data science: finding a data point that is most similar to another data point.
In this problem, you'll write a function that takes two arguments: the data point you have (we'll call this the "reference data point"), and a list of data points you want to search. You'll loop through this list and, using your distance() function defined in Part D, compute the distance between the reference data point and each data point in the list, hunting for the one that gives you the smallest distance (meaning here that it is most similar to your reference data point).
Your function should:
be named similarity_search
take 2 arguments: a reference point (NumPy array), and a list of data points (list of NumPy arrays)
return 1 value: the smallest distance you could find between your reference data point and one of the data points in the list
For example, similarity_search([1, 1], [ [1, 1], [2, 2], [3, 3] ]) should return 0, since the smallest distance that can be found between the reference data point [1, 1] and an data point in the list is the list's first element: an exact copy of the reference data point. The distance between a 2D point and itself will always be 0, so this is pretty much as small as you can get.
HINT: This really isn't much code at all! Conceptually it's nothing you haven't done before, either--it's very much like the question in Assignment 3 that asked you to write code to find the minimum value in a list. This just looks intimidating, because now you're dealing with NumPy arrays. If your solution goes beyond 10-15 lines of code, consider re-thinking the problem.
End of explanation |
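# A sketch of the similarity search tested above: track the smallest distance
# between the reference point and each candidate in the list.
def similarity_search(reference, candidates):
    best = distance(reference, candidates[0])
    for point in candidates[1:]:
        d = distance(reference, point)
        if d < best:
            best = d
    return best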
6,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'> </a>
Author: James Bourbeau
Step1: Data-MC comparison
Table of contents
Data preprocessing
Weight simulation events to spectrum
S125 verification
$\log_{10}(\mathrm{dE/dX})$ verification
Step2: Data preprocessing
[ back to top ]
1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
4. Feature selection
Load simulation, format feature and target matrices
Step3: Weight simulation events to spectrum
[ back to top ]
For more information, see the IT73-IC79 Data-MC comparison wiki page.
First, we'll need to define a 'realistic' flux model
Step4: $\log_{10}(\mathrm{S_{125}})$ verification
[ back to top ]
Step5: $\log_{10}(\mathrm{dE/dX})$ verification
Step6: $\cos(\theta)$ verification | Python Code:
%load_ext watermark
%watermark -u -d -v -p numpy,scipy,pandas,sklearn,mlxtend
Explanation: <a id='top'> </a>
Author: James Bourbeau
End of explanation
from __future__ import division, print_function
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from icecube.weighting.weighting import from_simprod
from icecube import dataclasses
import comptools as comp
import comptools.analysis.plotting as plotting
color_dict = comp.analysis.get_color_dict()
%matplotlib inline
Explanation: Data-MC comparison
Table of contents
Data preprocessing
Weight simulation events to spectrum
S125 verification
$\log_{10}(\mathrm{dE/dX})$ verification
End of explanation
config = 'IC86.2012'
# comp_list = ['light', 'heavy']
comp_list = ['PPlus', 'Fe56Nucleus']
june_july_data_only = False
sim_df = comp.load_dataframe(datatype='sim', config=config, split=False)
data_df = comp.load_dataframe(datatype='data', config=config)
data_df = data_df[np.isfinite(data_df['log_dEdX'])]
if june_july_data_only:
print('Masking out all data events not in June or July')
def is_june_july(time):
i3_time = dataclasses.I3Time(time)
return i3_time.date_time.month in [6, 7]
june_july_mask = data_df.end_time_mjd.apply(is_june_july)
data_df = data_df[june_july_mask].reset_index(drop=True)
months = (6, 7) if june_july_data_only else None
livetime, livetime_err = comp.get_detector_livetime(config, months=months)
Explanation: Data preprocessing
[ back to top ]
1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
4. Feature selection
Load simulation, format feature and target matrices
End of explanation
phi_0 = 3.5e-6
# phi_0 = 2.95e-6
gamma_1 = -2.7
gamma_2 = -3.1
eps = 100
def flux(E):
E = np.array(E) * 1e-6
return (1e-6) * phi_0 * E**gamma_1 *(1+(E/3.)**eps)**((gamma_2-gamma_1)/eps)
from icecube.weighting.weighting import PowerLaw
pl_flux = PowerLaw(eslope=-2.7, emin=1e5, emax=3e6, nevents=1e6) + \
PowerLaw(eslope=-3.1, emin=3e6, emax=1e10, nevents=1e2)
pl_flux.spectra
from icecube.weighting.fluxes import GaisserH3a, GaisserH4a, Hoerandel5
flux_h4a = GaisserH4a()
energy_points = np.logspace(6.0, 9.0, 100)
fig, ax = plt.subplots()
ax.plot(np.log10(energy_points), energy_points**2.7*flux_h4a(energy_points, 2212),
marker='None', ls='-', lw=2, label='H4a proton')
ax.plot(np.log10(energy_points), energy_points**2.7*flux_h4a(energy_points, 1000260560),
marker='None', ls='-', lw=2, label='H4a iron')
ax.plot(np.log10(energy_points), energy_points**2.7*flux(energy_points),
marker='None', ls='-', lw=2, label='Simple knee')
ax.plot(np.log10(energy_points), energy_points**2.7*pl_flux(energy_points),
marker='None', ls='-', lw=2, label='Power law (weighting)')
ax.set_yscale('log', nonposy='clip')
ax.set_xlabel('$\log_{10}(E/\mathrm{GeV})$')
ax.set_ylabel('$\mathrm{E}^{2.7} \ J(E) \ [\mathrm{GeV}^{1.7} \mathrm{m}^{-2} \mathrm{sr}^{-1} \mathrm{s}^{-1}]$')
ax.grid(which='both')
ax.legend()
plt.show()
simlist = np.unique(sim_df['sim'])
for i, sim in enumerate(simlist):
gcd_file, sim_files = comp.simfunctions.get_level3_sim_files(sim)
num_files = len(sim_files)
print('Simulation set {}: {} files'.format(sim, num_files))
if i == 0:
generator = num_files*from_simprod(int(sim))
else:
generator += num_files*from_simprod(int(sim))
energy = sim_df['MC_energy'].values
ptype = sim_df['MC_type'].values
num_ptypes = np.unique(ptype).size
cos_theta = np.cos(sim_df['MC_zenith']).values
weights = 1.0/generator(energy, ptype, cos_theta)
# weights = weights/num_ptypes
sim_df['weights'] = flux(sim_df['MC_energy'])*weights
# sim_df['weights'] = flux_h4a(sim_df['MC_energy'], sim_df['MC_type'])*weights
MC_comp_mask = {}
for composition in comp_list:
MC_comp_mask[composition] = sim_df['MC_comp'] == composition
# MC_comp_mask[composition] = sim_df['MC_comp_class'] == composition
def plot_rate(array, weights, bins, xlabel=None, color='C0',
label=None, legend=True, alpha=0.8, ax=None):
if ax is None:
ax = plt.gca()
rate = np.histogram(array, bins=bins, weights=weights)[0]
rate_err = np.sqrt(np.histogram(array, bins=bins, weights=weights**2)[0])
plotting.plot_steps(bins, rate, yerr=rate_err, color=color,
label=label, alpha=alpha, ax=ax)
ax.set_yscale('log', nonposy='clip')
ax.set_ylabel('Rate [Hz]')
if xlabel:
ax.set_xlabel(xlabel)
if legend:
ax.legend()
ax.grid(True)
return ax
def plot_data_MC_ratio(sim_array, sim_weights, data_array, data_weights, bins,
xlabel=None, color='C0', alpha=0.8, label=None,
legend=False, ylim=None, ax=None):
if ax is None:
ax = plt.gca()
sim_rate = np.histogram(sim_array, bins=bins, weights=sim_weights)[0]
sim_rate_err = np.sqrt(np.histogram(sim_array, bins=bins, weights=sim_weights**2)[0])
data_rate = np.histogram(data_array, bins=bins, weights=data_weights)[0]
data_rate_err = np.sqrt(np.histogram(data_array, bins=bins, weights=data_weights**2)[0])
ratio, ratio_err = comp.analysis.ratio_error(data_rate, data_rate_err, sim_rate, sim_rate_err)
plotting.plot_steps(bins, ratio, yerr=ratio_err,
color=color, label=label, alpha=alpha, ax=ax)
ax.grid(True)
ax.set_ylabel('Data/MC')
if xlabel:
ax.set_xlabel(xlabel)
if ylim:
ax.set_ylim(ylim)
if legend:
ax.legend()
ax.axhline(1, marker='None', ls='-.', color='k')
return ax
Explanation: Weight simulation events to spectrum
[ back to top ]
For more information, see the IT73-IC79 Data-MC comparison wiki page.
First, we'll need to define a 'realistic' flux model
End of explanation
sim_df['log_s125'].plot(kind='hist', bins=100, alpha=0.6, lw=1.5)
plt.xlabel('$\log_{10}(\mathrm{S}_{125})$')
plt.ylabel('Counts');
log_s125_bins = np.linspace(-0.5, 3.5, 75)
gs = gridspec.GridSpec(2, 1, height_ratios=[2,1], hspace=0.0)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1], sharex=ax1)
for composition in comp_list:
sim_s125 = sim_df[MC_comp_mask[composition]]['log_s125']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
plot_rate(sim_s125, sim_weights, bins=log_s125_bins,
color=color_dict[composition], label=composition, ax=ax1)
data_weights = np.array([1/livetime]*len(data_df['log_s125']))
plot_rate(data_df['log_s125'], data_weights, bins=log_s125_bins,
color=color_dict['data'], label='Data', ax=ax1)
for composition in comp_list:
sim_s125 = sim_df[MC_comp_mask[composition]]['log_s125']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
ax2 = plot_data_MC_ratio(sim_s125, sim_weights,
data_df['log_s125'], data_weights, log_s125_bins,
xlabel='$\log_{10}(\mathrm{S}_{125})$', color=color_dict[composition],
label=composition, ax=ax2)
ax2.set_ylim((0, 2))
ax1.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=3, mode="expand", borderaxespad=0., frameon=False)
plt.savefig(os.path.join(comp.paths.figures_dir, 'data-MC-comparison', 's125.png'))
plt.show()
Explanation: $\log_{10}(\mathrm{S_{125}})$ verification
[ back to top ]
End of explanation
sim_df['log_dEdX'].plot(kind='hist', bins=100, alpha=0.6, lw=1.5)
plt.xlabel('$\log_{10}(\mathrm{dE/dX})$')
plt.ylabel('Counts');
log_dEdX_bins = np.linspace(-2, 4, 75)
gs = gridspec.GridSpec(2, 1, height_ratios=[2,1], hspace=0.0)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1], sharex=ax1)
for composition in comp_list:
sim_dEdX = sim_df[MC_comp_mask[composition]]['log_dEdX']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
plot_rate(sim_dEdX, sim_weights, bins=log_dEdX_bins,
color=color_dict[composition], label=composition, ax=ax1)
data_weights = np.array([1/livetime]*len(data_df))
plot_rate(data_df['log_dEdX'], data_weights, bins=log_dEdX_bins,
color=color_dict['data'], label='Data', ax=ax1)
for composition in comp_list:
sim_dEdX = sim_df[MC_comp_mask[composition]]['log_dEdX']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
ax2 = plot_data_MC_ratio(sim_dEdX, sim_weights,
data_df['log_dEdX'], data_weights, log_dEdX_bins,
xlabel='$\log_{10}(\mathrm{dE/dX})$', color=color_dict[composition],
label=composition, ylim=[0, 5.5], ax=ax2)
ax1.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=3, mode="expand", borderaxespad=0., frameon=False)
plt.savefig(os.path.join(comp.paths.figures_dir, 'data-MC-comparison', 'dEdX.png'))
plt.show()
Explanation: $\log_{10}(\mathrm{dE/dX})$ verification
End of explanation
sim_df['lap_cos_zenith'].plot(kind='hist', bins=100, alpha=0.6, lw=1.5)
plt.xlabel('$\cos(\\theta_{\mathrm{reco}})$')
plt.ylabel('Counts');
cos_zenith_bins = np.linspace(0.8, 1.0, 75)
gs = gridspec.GridSpec(2, 1, height_ratios=[2,1], hspace=0.0)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1], sharex=ax1)
for composition in comp_list:
sim_cos_zenith = sim_df[MC_comp_mask[composition]]['lap_cos_zenith']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
plot_rate(sim_cos_zenith, sim_weights, bins=cos_zenith_bins,
color=color_dict[composition], label=composition, ax=ax1)
data_weights = np.array([1/livetime]*len(data_df))
plot_rate(data_df['lap_cos_zenith'], data_weights, bins=cos_zenith_bins,
color=color_dict['data'], label='Data', ax=ax1)
for composition in comp_list:
sim_cos_zenith = sim_df[MC_comp_mask[composition]]['lap_cos_zenith']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
ax2 = plot_data_MC_ratio(sim_cos_zenith, sim_weights,
data_df['lap_cos_zenith'], data_weights, cos_zenith_bins,
xlabel='$\cos(\\theta_{\mathrm{reco}})$', color=color_dict[composition],
label=composition, ylim=[0, 3], ax=ax2)
ax1.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=3, mode="expand", borderaxespad=0., frameon=False)
plt.savefig(os.path.join(comp.paths.figures_dir, 'data-MC-comparison', 'zenith.png'))
plt.show()
sim_df['avg_inice_radius'].plot(kind='hist', bins=100, alpha=0.6, lw=1.5)
# plt.xlabel('$\cos(\\theta_{\mathrm{reco}})$')
plt.ylabel('Counts');
inice_radius_bins = np.linspace(0.0, 200, 75)
gs = gridspec.GridSpec(2, 1, height_ratios=[2,1], hspace=0.0)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1], sharex=ax1)
for composition in comp_list:
sim_inice_radius = sim_df[MC_comp_mask[composition]]['avg_inice_radius']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
plot_rate(sim_inice_radius, sim_weights, bins=inice_radius_bins,
color=color_dict[composition], label=composition, ax=ax1)
data_weights = np.array([1/livetime]*len(data_df))
plot_rate(data_df['avg_inice_radius'], data_weights, bins=inice_radius_bins,
color=color_dict['data'], label='Data', ax=ax1)
for composition in comp_list:
sim_inice_radius = sim_df[MC_comp_mask[composition]]['avg_inice_radius']
sim_weights = sim_df[MC_comp_mask[composition]]['weights']
ax2 = plot_data_MC_ratio(sim_inice_radius, sim_weights,
data_df['avg_inice_radius'], data_weights, inice_radius_bins,
xlabel='Average in-ice radius', color=color_dict[composition],
label=composition, ylim=[0, 3], ax=ax2)
ax1.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=3, mode="expand", borderaxespad=0., frameon=False)
# plt.savefig(os.path.join(comp.paths.figures_dir, 'data-MC-comparison', 'zenith.png'))
plt.show()
sim_df.columns
Explanation: $\cos(\theta)$ verification
End of explanation |
6,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the community activity for version control systems
Context
You are a new team member in a software company
The developers there are using CVS (Concurrent Versions System)
You propose Git as an alternative to the team.
Find evidence that shows that the software development community is mainly adopting the Git version control system!
The Dataset
There is a dataset Stack Overflow available with the following data
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Step 6 | Python Code:
import pandas as pd
vcs_data = pd.read_csv('../dataset/stackoverflow_vcs_data_subset.gz')
vcs_data.head()
Explanation: Analyzing the community activity for version control systems
Context
You are a new team member in a software company
The developers there are using CVS (Concurrent Versions System)
You propose Git as an alternative to the team.
Find evidence that shows that the software development community is mainly adopting the Git version control system!
The Dataset
There is a dataset Stack Overflow available with the following data:
CreationDate: the timestamp of the creation date of a Stack Overflow post (= question)
TagName: the tag name for a technology (in our case for only 4 VCSes: "cvs", "svn", "git" and "mercurial")
ViewCount: the numbers of views of a post
These are the first 10 entries of this dataset:
CreationDate,TagName,ViewCount
2008-08-01 13:56:33,svn,10880
2008-08-01 14:41:24,svn,55075
2008-08-01 15:22:29,svn,15144
2008-08-01 18:00:13,svn,8010
2008-08-01 18:33:08,svn,92006
2008-08-01 23:29:32,svn,2444
2008-08-03 22:38:29,svn,871830
2008-08-03 22:38:29,git,871830
2008-08-04 11:37:24,svn,17969
Analysis
Step 1: Load in the dataset
End of explanation
vcs_data['CreationDate'] = pd.to_datetime(vcs_data['CreationDate'])
vcs_data.head()
Explanation: Step 2: Convert the CreationDate column to a real datetime datatype
End of explanation
number_of_views = vcs_data.groupby(['CreationDate', 'TagName']).sum()
number_of_views.head()
Explanation: Step 3: Sum up the number of views in ViewCount by the timestamp and the VCSes
End of explanation
views_per_vcs = number_of_views.unstack()['ViewCount']
views_per_vcs.head()
Explanation: Step 4: List the number of views for each VCS in separate columns
End of explanation
monythly_views = views_per_vcs.resample("1M").sum().cumsum()
monythly_views.head()
Explanation: Step 5: Accumulate the number of views for the VCSes for every month
End of explanation
%matplotlib inline
monythly_views.plot(title="monthly stackoverflow post views");
Explanation: Step 6: Visualize the monthly views over time for all VCSes
End of explanation |
6,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create 3D boolean masks
In this tutorial we will show how to create 3D boolean masks for arbitrary latitude and longitude grids. It uses the same algorithm as the 2D mask to determine whether a gridpoint is in a region. However, it returns an xarray.DataArray with shape region x lat x lon: gridpoints that do not fall in a region are False, and gridpoints that fall in a region are True.
3D masks are convenient as they can be used to directly calculate weighted regional means (over all regions) using xarray v0.15.1 or later. Further, the mask includes the region names and abbreviations as non-dimension coordinates.
Import regionmask and check the version
Step1: Load xarray and numpy
Step2: Creating a mask
Define a lon/ lat grid with a 1° grid spacing, where the points define the center of the grid
Step3: We will create a mask with the SREX regions (Seneviratne et al., 2012).
Step4: The function mask_3D determines which gridpoints lie within the polygon making up each region
Step5: As mentioned, mask is a boolean xarray.DataArray with shape region x lat x lon. It contains region (=numbers) as a dimension coordinate as well as abbrevs and names as non-dimension coordinates (see the xarray docs for the details on the terminology).
Plotting
Plotting individual layers
The four first layers look as follows
Step6: Plotting flattened masks
A 3D mask cannot be directly plotted - it needs to be flattened first. To do this regionmask offers a convenience function
Step7: Working with a 3D mask
masks can be used to select data in a certain region and to calculate regional averages - let's illustrate this with a 'real' dataset
Step8: The example data is a temperature field over North America. Let's plot the first time step
Step9: An xarray object can be passed to the mask_3D function
Step10: By default this creates a mask containing one layer (slice) for each region containing (at least) one gridpoint. As the example data only has values over North America we only get 6 layers even though there are 26 SREX regions. To obtain all layers specify drop=False
Step11: Note mask_full now has 26 layers.
Select a region
As mask_3D contains region, abbrevs, and names as (non-dimension) coordinates we can use each of those to select an individual region
Step12: This also applies to the regionally-averaged data below.
It is currently not possible to use sel with a non-dimension coordinate - to directly select abbrev or name you need to create a MultiIndex
Step13: Mask out a region
Using where a specific region can be 'masked out' (i.e. all data points outside of the region become NaN)
Step14: Which looks as follows
Step15: We could now use airtemps_cna to calculate the regional average for 'Central North America'. However, there is a more elegant way.
Calculate weighted regional averages
Using the 3-dimensional mask it is possible to calculate weighted averages of all regions in one go, using the weighted method (requires xarray 0.15.1 or later). As proxy of the grid cell area we use cos(lat).
Step16: Let's break down what happens here. By multiplying mask_3D * weights we get a DataArray where gridpoints not in the region get a weight of 0. Gridpoints within a region get a weight proportional to the gridcell area. airtemps.weighted(mask_3D * weights) creates an xarray object which can be used for weighted operations. From this we calculate the weighted mean over the lat and lon dimensions. The resulting dataarray has the dimensions region x time
Step17: The regionally-averaged time series can be plotted
Step18: Restrict the mask to land points
Combining the mask of the regions with a land-sea mask we can create a land-only mask using the land_110 region from NaturalEarth.
With this caveat in mind we can create the land-sea mask
Step19: and plot it
Step20: To create the combined mask we multiply the two
Step21: Note the .squeeze(drop=True). This is required to remove the region dimension from land_mask.
Finally, we compare the original mask with the one restricted to land points | Python Code:
import regionmask
regionmask.__version__
Explanation: Create 3D boolean masks
In this tutorial we will show how to create 3D boolean masks for arbitrary latitude and longitude grids. It uses the same algorithm as the 2D mask to determine whether a gridpoint is in a region. However, it returns an xarray.DataArray with shape region x lat x lon: gridpoints that do not fall in a region are False, and gridpoints that fall in a region are True.
3D masks are convenient as they can be used to directly calculate weighted regional means (over all regions) using xarray v0.15.1 or later. Further, the mask includes the region names and abbreviations as non-dimension coordinates.
Import regionmask and check the version:
End of explanation
import xarray as xr
import numpy as np
# don't expand data
xr.set_options(display_style="text", display_expand_data=False)
Explanation: Load xarray and numpy:
End of explanation
lon = np.arange(-179.5, 180)
lat = np.arange(-89.5, 90)
Explanation: Creating a mask
Define a lon/ lat grid with a 1° grid spacing, where the points define the center of the grid:
End of explanation
regionmask.defined_regions.srex
Explanation: We will create a mask with the SREX regions (Seneviratne et al., 2012).
End of explanation
mask = regionmask.defined_regions.srex.mask_3D(lon, lat)
mask
Explanation: The function mask_3D determines which gridpoints lie within the polygon making up each region:
End of explanation
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from matplotlib import colors as mplc
cmap1 = mplc.ListedColormap(["none", "#9ecae1"])
fg = mask.isel(region=slice(4)).plot(
subplot_kws=dict(projection=ccrs.PlateCarree()),
col="region",
col_wrap=2,
transform=ccrs.PlateCarree(),
add_colorbar=False,
aspect=1.5,
cmap=cmap1,
)
for ax in fg.axes.flatten():
ax.coastlines()
fg.fig.subplots_adjust(hspace=0, wspace=0.1);
Explanation: As mentioned, mask is a boolean xarray.DataArray with shape region x lat x lon. It contains region (=numbers) as a dimension coordinate as well as abbrevs and names as non-dimension coordinates (see the xarray docs for the details on the terminology).
Plotting
Plotting individual layers
The four first layers look as follows:
End of explanation
regionmask.plot_3D_mask(mask, add_colorbar=False, cmap="plasma");
Explanation: Plotting flattened masks
A 3D mask cannot be directly plotted - it needs to be flattened first. To do this regionmask offers a convenience function: regionmask.plot_3D_mask. The function takes a 3D mask as argument, all other keyword arguments are passed through to xr.plot.pcolormesh.
End of explanation
airtemps = xr.tutorial.load_dataset("air_temperature")
Explanation: Working with a 3D mask
masks can be used to select data in a certain region and to calculate regional averages - let's illustrate this with a 'real' dataset:
End of explanation
# choose a good projection for regional maps
proj = ccrs.LambertConformal(central_longitude=-100)
ax = plt.subplot(111, projection=proj)
airtemps.isel(time=1).air.plot.pcolormesh(ax=ax, transform=ccrs.PlateCarree())
ax.coastlines();
Explanation: The example data is a temperature field over North America. Let's plot the first time step:
End of explanation
mask_3D = regionmask.defined_regions.srex.mask_3D(airtemps)
mask_3D
Explanation: An xarray object can be passed to the mask_3D function:
End of explanation
mask_full = regionmask.defined_regions.srex.mask_3D(airtemps, drop=False)
mask_full
Explanation: By default this creates a mask containing one layer (slice) for each region containing (at least) one gridpoint. As the example data only has values over North America we only get 6 layers even though there are 26 SREX regions. To obtain all layers specify drop=False:
End of explanation
# 1) by the index of the region:
r1 = mask_3D.sel(region=3)
# 2) with the abbreviation
r2 = mask_3D.isel(region=(mask_3D.abbrevs == "WNA"))
# 3) with the long name:
r3 = mask_3D.isel(region=(mask_3D.names == "E. North America"))
Explanation: Note mask_full now has 26 layers.
Select a region
As mask_3D contains region, abbrevs, and names as (non-dimension) coordinates we can use each of those to select an individual region:
End of explanation
mask_3D.set_index(regions=["region", "abbrevs", "names"]);
Explanation: This also applies to the regionally-averaged data below.
It is currently not possible to use sel with a non-dimension coordinate - to directly select abbrev or name you need to create a MultiIndex:
End of explanation
airtemps_cna = airtemps.where(r1)
Explanation: Mask out a region
Using where a specific region can be 'masked out' (i.e. all data points outside of the region become NaN):
End of explanation
proj = ccrs.LambertConformal(central_longitude=-100)
ax = plt.subplot(111, projection=proj)
airtemps_cna.isel(time=1).air.plot(ax=ax, transform=ccrs.PlateCarree())
ax.coastlines();
Explanation: Which looks as follows:
End of explanation
weights = np.cos(np.deg2rad(airtemps.lat))
ts_airtemps_regional = airtemps.weighted(mask_3D * weights).mean(dim=("lat", "lon"))
Explanation: We could now use airtemps_cna to calculate the regional average for 'Central North America'. However, there is a more elegant way.
Calculate weighted regional averages
Using the 3-dimensional mask it is possible to calculate weighted averages of all regions in one go, using the weighted method (requires xarray 0.15.1 or later). As proxy of the grid cell area we use cos(lat).
End of explanation
ts_airtemps_regional
Explanation: Let's break down what happens here. By multiplying mask_3D * weights we get a DataArray where gridpoints not in the region get a weight of 0. Gridpoints within a region get a weight proportional to the gridcell area. airtemps.weighted(mask_3D * weights) creates an xarray object which can be used for weighted operations. From this we calculate the weighted mean over the lat and lon dimensions. The resulting dataarray has the dimensions region x time:
End of explanation
ts_airtemps_regional.air.plot(col="region", col_wrap=3);
Explanation: The regionally-averaged time series can be plotted:
End of explanation
land_110 = regionmask.defined_regions.natural_earth_v5_0_0.land_110
land_mask = land_110.mask_3D(airtemps)
Explanation: Restrict the mask to land points
Combining the mask of the regions with a land-sea mask we can create a land-only mask using the land_110 region from NaturalEarth.
With this caveat in mind we can create the land-sea mask:
End of explanation
proj = ccrs.LambertConformal(central_longitude=-100)
ax = plt.subplot(111, projection=proj)
land_mask.squeeze().plot.pcolormesh(
ax=ax, transform=ccrs.PlateCarree(), cmap=cmap1, add_colorbar=False
)
ax.coastlines();
Explanation: and plot it
End of explanation
mask_lsm = mask_3D * land_mask.squeeze(drop=True)
Explanation: To create the combined mask we multiply the two:
End of explanation
f, axes = plt.subplots(1, 2, subplot_kw=dict(projection=proj))
ax = axes[0]
mask_3D.sel(region=2).plot(
ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False, cmap=cmap1
)
ax.coastlines()
ax.set_title("Regional mask: all points")
ax = axes[1]
mask_lsm.sel(region=2).plot(
ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False, cmap=cmap1
)
ax.coastlines()
ax.set_title("Regional mask: land only");
Explanation: Note the .squeeze(drop=True). This is required to remove the region dimension from land_mask.
Finally, we compare the original mask with the one restricted to land points:
End of explanation |
6,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Your first steps with Python
1.1 Introduction
Python is a general purpose programming language. It is used extensively for scientific computing, data analytics and visualization, web development and software development. It has a wide user base and excellent library support.
There are many ways to use and interact with the Python language. The first way is to access it directly from the command prompt and calling python <script>.py. This runs a script written in Python and does whatever you have programmed the computer to do. But scripts have to be written and how do we actually write Python scripts?
Actually, Python scripts are just text files. So you could just open a .txt file and write a script, saving the file with a .py extension. The downsides of this approach are obvious to anyone working with Windows. Usually, Python source code is written, not with Microsoft Word, but with an Integrated Development Environment (IDE). An IDE combines a text editor with a running Python console so you can test code and actually do work with Python without switching from one program to another. If you learnt the C or C++ language, you will be familiar with Vim. Other popular IDEs for Python are PyCharm, Spyder and the Jupyter Notebook.
In this course, we will use the Jupyter Notebook as our IDE because of its ease of use and its ability to execute code cell by cell. It integrates with markdown so that you can annotate and document your code on the fly! All in all, it is an excellent tool for teaching and learning Python before you migrate to more advanced tools like Spyder for serious scripting and development work.
1.2 Your best friends.
In order to get the most from Python, your best source of reference is the
Python documentation. Getting good at Python is a matter of using it regularly and familiarizing yourself with the keywords, constructs and commonly used idioms.
Learn to use Shift-Tab when coding. This activates a hovering tooltip that provides documentation for keywords, functions and even variables that you have declared in your environment. This convenient tooltip can be expanded into a pop-up window in your browser for easy reference. Use it often to look up function signatures, documentation and general help.
Jupyter notebook comes with Tab completion. This quality-of-life feature assists you in typing code by listing possible autocompletion options so that you don't have to type everything out! Use Tab completion as often as you can: it makes coding faster and less tedious. Tab completion also lets you browse the methods available on an object, which comes in handy when learning a library for the first time (like matplotlib or seaborn).
Finally, ask Google. Once you have acquired enough "vocabulary", you can begin to query Google with your problem. More often than not, someone has experienced the same conundrum and left a message on Stack Exchange. Browsing the solutions listed there is a powerful way to learn programming skills.
1.3 The learning objectives for this unit
The learning objectives of this first unit are
Step1: 2.2 Your first script
It is a time honoured tradition that your very first program should be to print "Hello world!" How is this achieved in Python?
Step2: Notice that Hello world! is printed at the bottom of the cell as an output. In general, this is how output of a python code is displayed to you.
print is a special function in Python. Its purpose is to display output to the console. Notice that we pass an argument, in this case the string "Hello world!", to the function. All arguments passed to the function must be enclosed in round brackets, and this signals to the Python interpreter to execute a function named print with the argument "Hello world!".
2.2.1 Self introductions
Your next exercise is to print your own name to the console. Remember to enclose your name in " " or ' '
Step3: 2.3 Commenting
Commenting is a way to annotate and document code. There are two ways to do this
Step4: Note the floating point answer. In previous versions of Python, / meant floor division. This is no longer the case in Python 3
Step5: In the above 5%2 means return me the remainder after 5 is divided by 2 (which is indeed 1).
3.1.1 Precedence
A note on arithmetic precedence. As one expects, () have the highest precedence, following by * and /. Addition and subtraction have the lowest precedence.
Step6: It is interesting to note that the % operator is not distributive.
3.1.2 Variables
In general, one does not have to declare variables in python before using it. We merely need to assign numbers to variables. In the computer, this means that a certain place in memory has been allocated to store that particular number. Assignment to variables is executed by the = operator. The equal sign in Python is the binary comparison == operator.
Python is case sensitive. So a variable name A is different from a. Variables cannot begin with numbers and cannot have empty spaces between them. So my variable is not a valid variable. Usually what is done is to write my_variable
After assigning numbers to variables, the variable can be used to represent the number in any arithmetic operation.
Step7: Notice that after assignment, I can access the variables in a different cell. However, if you reassign a variable to a different number, the old values for that variable are overwritten.
Step8: Now try clicking back to the cell x+y and re-executing it. What do you the answer will be?
Even though that cell was above our reassignment cell, nevertheless re-executing that cell means executing that block of code that the latest values for that variable. It is for this reason that one must be very careful with the order of execution of code blocks. In order to help us keep track of the order of execution, each cell has a counter next to it. Notice the In [n]. Higher values of n indicates more recent executions.
Variables can also be reassigned
Step9: So what happened here? Well, if we recall x originally was assigned 5. Therefore x+1 would give us 6. This value is then reassigned to the exact same location in memory represented by the variable x. So now that piece of memory contains the value 6. We then use the print function to display the content of x.
As this is a often used pattern, Python has a convenience syntax for this kind assignment
Step10: 3.1.3 Floating point precision
All of the above applies equally to floating point numbers (or real numbers). However, we must be mindful of floating point precision.
Step11: The following exerpt from the Python documentation explains what is happening quite clearly.
To be fair, even our decimal system is inadequate to represent rational numbers like 1/3, 1/11 and so on.
3.2 Strings
Strings are basically text. These are enclosed in ' ' or " ". The reason for having two ways of denoting strings is because we may need to nest a string within a string like in 'The quick brown fox "jumped" over the lazy old dog'. This is especially useful when setting up database queries and the like.
Step12: In the second print function, the text 'x' is printed while in the first print function, it is the contents of x which is printed to the console.
3.2.1 String formatting
Strings can be assigned to variables just like numbers. And these can be recalled in a print function.
Step13: When using % to indicate string substitution, take note of the common formatting "placeholders"
%s to substitue strings.
%d for printing integer substitutions
%.1f means to print a floating point number up to 1 decimal place. Note that there is no rounding
The utility of the .format method arises when the same string needs to printed in various places in a larger body of text. This avoids duplicating code. Also did you notice I used double quotation. Why?
More about string formats can be found in this excellent blog post
3.2.2 Weaving strings into one beautiful tapestry of text
Besides the .format and % operation on text, we can concatenate strings using + operator. However, strings cannot be changed once declared and assigned to variables. This property is called immutability
Step14: Use [] to access specific letters in the string. Python uses 0 indexing. So the first letter is accessed by my_string[0] while my_string[1] accesses the second letter.
Step15: Slicing is a way of getting specific subsets of the string. If you let $x_n$ denote the $n+1$-th letter (note zero indexing) in a string (and by letter this includes whitespace characters as well!) then writing my_string[i:j] returns the subset $$x_i, x_{i+1}, \ldots, x_{j-1}$$ of letters, i.e. it starts from index i and stops one index before the index indicated by j.
Step16: Notice the use of \n in the second print function. This is called a newline character which does exactly what its name says. Also in the third print function notice the seperation between e and j. It is actually not seperated. The sixth letter is a whitespace character ' '.
Slicing also utilizes arithmetic progressions to return even more specific subsets of strings. So [i:j:k] means that the slice will return $$ x_{i}, x_{i+k}, x_{i+2k}, \ldots, x_{i+mk}$$ where $m$ is the largest integer such that $i+mk \leq j-1$ (or, when stepping backwards with a negative k, the smallest integer such that $i+mk \geq j+1$).
Step17: So what happened above? Well [3
Step18: Answer
Step19: 4. list, here's where the magic begins
list are the fundamental data structure in Python. These are analogous to arrays in C or Java. If you use R, lists are analogous to vectors (and not R list)
Declaring a list is as simple as using square brackets [ ] to enclose a list of objects (or variables) seperated by commas.
Step20: 4.1 Properties of list objects and indexing
One of the fundamental properties we can ask about lists is how many objects they contain. We use the len (short for length) function to do that.
Step21: Perhaps you want to recover that staff's name. It's in the first position of the list.
Step22: Notice that Python still outputs to console even though we did not use the print function. Actually the print function prints a particularly "nice" string representation of the object, which is why Andy is printed without the quotation marks if print was used.
Can you find me Andy's age now?
Step23: The same slicing rules for strings apply to lists as well. If we wanted Andy's age and wage, we would type staff[1:3]
Step24: This returns us a sub-list containing Andy's age and renumeration.
4.2 Nested lists
Lists can also contain other lists. This ability to have a nested structure in lists gives it flexibility.
Step25: Notice that if I type nested_list[2], Python will return me the list [1.50, .40]. This can be accessed again using indexing (or slicing notation) [ ].
Step26: 4.3 List methods
Right now, let us look at four very useful list methods. Methods are basically operations which modify lists. These are
Step27: 4.3.1 Your first programming challenge
Move information for Andy's email to the second position (i.e. index 1) in the list staff in one line of code | Python Code:
# change this cell into a Markdown cell. Then type something here and execute it (Shift-Enter)
Explanation: 1. Your first steps with Python
1.1 Introduction
Python is a general purpose programming language. It is used extensively for scientific computing, data analytics and visualization, web development and software development. It has a wide user base and excellent library support.
There are many ways to use and interact with the Python language. The first way is to access it directly from the command prompt and calling python <script>.py. This runs a script written in Python and does whatever you have programmed the computer to do. But scripts have to be written and how do we actually write Python scripts?
Actually, Python scripts are just text files. So you could just open a .txt file and write a script, saving the file with a .py extension. The downsides of this approach are obvious to anyone working with Windows. Usually, Python source code is written, not with Microsoft Word, but with an Integrated Development Environment (IDE). An IDE combines a text editor with a running Python console so you can test code and actually do work with Python without switching from one program to another. If you learnt the C or C++ language, you will be familiar with Vim. Other popular IDEs for Python are PyCharm, Spyder and the Jupyter Notebook.
In this course, we will use the Jupyter Notebook as our IDE because of its ease of use and its ability to execute code cell by cell. It integrates with markdown so that you can annotate and document your code on the fly! All in all, it is an excellent tool for teaching and learning Python before you migrate to more advanced tools like Spyder for serious scripting and development work.
1.2 Your best friends.
In order to get the most from Python, your best source of reference is the
Python documentation. Getting good at Python is a matter of using it regularly and familiarizing yourself with the keywords, constructs and commonly used idioms.
Learn to use Shift-Tab when coding. This activates a hovering tooltip that provides documentation for keywords, functions and even variables that you have declared in your environment. This convenient tooltip can be expanded into a pop-up window in your browser for easy reference. Use it often to look up function signatures, documentation and general help.
Jupyter notebook comes with Tab completion. This quality-of-life feature assists you in typing code by listing possible autocompletion options so that you don't have to type everything out! Use Tab completion as often as you can: it makes coding faster and less tedious. Tab completion also lets you browse the methods available on an object, which comes in handy when learning a library for the first time (like matplotlib or seaborn).
Finally, ask Google. Once you have acquired enough "vocabulary", you can begin to query Google with your problem. More often than not, someone has experienced the same conundrum and left a message on Stack Exchange. Browsing the solutions listed there is a powerful way to learn programming skills.
1.3 The learning objectives for this unit
The learning objectives of this first unit are:
Getting around the Jupyter notebook.
Learning how to print("Hello world!")
Using and coding with basic Python objects: int, str, float and bool.
Using the type function.
What are variables and valid variable names.
Using the list object and list methods.
Learning how to access items in list. Slicing and indexing.
2. Getting around the Jupyter notebook
2.1 Cells and colors, just remember, green is for go
All code is written in cells. Cells are where code blocks go. You execute a cell by pressing Shift-Enter or pressing the "play" button. Or you could just click on the drop down menu and select "Run cell" but who would want to do that!
In general, cells have two uses: One for writing "live" Python code which can be executed and one more to write documentation using markdown. To toggle between the two cell types, press Escape to exit from "edit" mode. The edges of the cell should turn blue. Now you are in "command" mode. Escape actually activates "command" mode. Enter activates "edit" mode. With the cell border coloured blue, press M to enter into markdown mode. You should see the In [ ]: prompt dissappear. Press Enter to change the border to green. This means you can now "edit" markdown. How does one change from markdown to a live coding cell? In "command" mode (remember blue border) press Y. Now the cell is "hot". When you Shift-Enter, you will execute code. If you happen to write markdown when in a "coding" cell, the Python kernel will shout at you. (Means raise an error message)
2.1.1 Practise makes perfect
Now its time for you to try. In the cell below, try switching to Markdown. Press Enter to activate "edit" mode and type some text in the cell. Press Shift-Enter and you should see the output rendered in html. Note that this is not coding yet
End of explanation
'''Make sure you are in "edit" mode and that this cell is for Coding ( You should see the In [ ]:)
on the left of the cell. '''
print("Hello world!")
Explanation: 2.2 Your first script
It is a time honoured tradition that your very first program should be to print "Hello world!" How is this achieved in Python?
End of explanation
# print your name in this cell.
Explanation: Notice that Hello world! is printed at the bottom of the cell as an output. In general, this is how output of a python code is displayed to you.
print is a special function in Python. Its purpose is to display output to the console. Notice that we pass an argument, in this case the string "Hello world!", to the function. All arguments passed to the function must be enclosed in round brackets, and this signals to the Python interpreter to execute a function named print with the argument "Hello world!".
2.2.1 Self introductions
Your next exercise is to print your own name to the console. Remember to enclose your name in " " or ' '
End of explanation
# Addition
5+3
# Subtraction
8-9
# Multiplication
3*12
# Division
48/12
Explanation: 2.3 Commenting
Commenting is a way to annotate and document code. There are two ways to do this: Inline using the # character or by using ''' <documentation block> ''', the latter being multi-line and hence used mainly for documenting functions or classes. Comments enclosed using ''' '''' style commenting are actually registed in Jupyter notebook and can be accessed from the Shift-Tab tooltip!
One should use # style commenting very sparingly. By right, code should be clear enough that # inline comments are not needed.
However, # has a very important function. It is used for debugging and trouble-shooting. This is because commented code sections are never executed when you execute a cell (Shift-Enter)
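A quick illustration of both styles; the greet function here is just a made-up example:
def greet(name):
    '''Return a short greeting. A triple-quoted comment like this one is picked up by the Shift-Tab tooltip.'''
    return "Hello, " + name  # an inline comment: everything after the hash is ignored when the cell runs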
3. Python's building blocks
Python is an Object Oriented Programming language. That means that all of Python is made out of objects, which are instances of classes. The main point here is that I am going to introduce 4 basic objects of Python which form the backbone of any program or script.
Integers or int.
Strings or str. You've met one of these: "Hello world!". For those who know about character encoding, it is highly encouraged to code Python with UTF-8 encoding.
Float or float. Basically the computer version of real numbers.
Booleans or bool. In Python, true and false are indicated by the reserved keywords True and False. Take note of the capitalized first letter.
3.1 Numbers
You can't call yourself a scientific computing language without the ability to deal with numbers. The basic arithmetic operations for numbers are exactly as you expect it to be
End of explanation
# Exponentiation. Limited precision though!
16**0.5
# Residue class modulo n
5%2
Explanation: Note the floating point answer. In previous versions of Python, / meant floor division. This is no longer the case in Python 3
End of explanation
# Guess the output before executing this cell. Come on, don't cheat!
6%(1+3)
Explanation: In the above 5%2 means return me the remainder after 5 is divided by 2 (which is indeed 1).
3.1.1 Precedence
A note on arithmetic precedence. As one expects, () have the highest precedence, following by * and /. Addition and subtraction have the lowest precedence.
End of explanation
# Assignment
x=1
y=2
x+y
x/y
Explanation: It is interesting to note that the % operator is not distributive.
3.1.2 Variables
In general, one does not have to declare variables in python before using it. We merely need to assign numbers to variables. In the computer, this means that a certain place in memory has been allocated to store that particular number. Assignment to variables is executed by the = operator. The equal sign in Python is the binary comparison == operator.
Python is case sensitive. So a variable name A is different from a. Variables cannot begin with numbers and cannot have empty spaces between them. So my variable is not a valid variable. Usually what is done is to write my_variable
After assigning numbers to variables, the variable can be used to represent the number in any arithmetic operation.
End of explanation
x=5
x+y-2
Explanation: Notice that after assignment, I can access the variables in a different cell. However, if you reassign a variable to a different number, the old values for that variable are overwritten.
End of explanation
# For example
x = x+1
print(x)
Explanation: Now try clicking back to the cell x+y and re-executing it. What do you think the answer will be?
Even though that cell was above our reassignment cell, re-executing it means executing that block of code with the latest values for that variable. It is for this reason that one must be very careful with the order of execution of code blocks. In order to help us keep track of the order of execution, each cell has a counter next to it. Notice the In [n]. Higher values of n indicate more recent executions.
Variables can also be reassigned
End of explanation
# reset x to 5
x=5
x += 1
print(x)
x = 5
#What do you think the values of x will be for x -= 1, x *= 2 or x /= 2?
# Test it out in the space below
print(x)
Explanation: So what happened here? Well, if we recall x originally was assigned 5. Therefore x+1 would give us 6. This value is then reassigned to the exact same location in memory represented by the variable x. So now that piece of memory contains the value 6. We then use the print function to display the content of x.
As this is an often used pattern, Python has a convenience syntax for this kind of assignment
End of explanation
0.1+0.2
Explanation: 3.1.3 Floating point precision
All of the above applies equally to floating point numbers (or real numbers). However, we must be mindful of floating point precision.
End of explanation
# Noting the difference between printing quoted variables (strings) and printing the variable itself.
x = 5
print(x)
print('x')
Explanation: The following excerpt from the Python documentation explains what is happening quite clearly.
To be fair, even our decimal system is inadequate to represent rational numbers like 1/3, 1/11 and so on.
3.2 Strings
Strings are basically text. These are enclosed in ' ' or " ". The reason for having two ways of denoting strings is because we may need to nest a string within a string like in 'The quick brown fox "jumped" over the lazy old dog'. This is especially useful when setting up database queries and the like.
End of explanation
my_name = 'Tang U-Liang'
print(my_name)
# String formatting: Using the %
age = 35
print('Hello doctor, my name is %s. I am %d years old. I weigh %.1f kg' % (my_name, age, 70.25))
# or using .format method
print("Hi, I'm {name}. Please register {name} for this conference".format(name=my_name))
Explanation: In the second print function, the text 'x' is printed while in the first print function, it is the contents of x which is printed to the console.
3.2.1 String formatting
Strings can be assigned to variables just like numbers. And these can be recalled in a print function.
End of explanation
fruit = 'Apple'
drink = 'juice'
print(fruit+drink) # concatenation
#Don't like the lack of spacing between words?
print(fruit+' '+drink)
Explanation: When using % to indicate string substitution, take note of the common formatting "placeholders"
%s to substitute strings.
%d for printing integer substitutions
%.1f means to print a floating point number rounded to 1 decimal place (the exact digit you see is subject to floating point representation).
The utility of the .format method arises when the same string needs to be printed in various places in a larger body of text. This avoids duplicating code. Also, did you notice I used double quotation marks? Why?
More about string formats can be found in this excellent blog post
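A small illustration of these placeholders with made-up values:
print('%s scored %d points, averaging %.1f per game' % ('Alice', 22, 7.26))  # the float prints as 7.3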
3.2.2 Weaving strings into one beautiful tapestry of text
Besides the .format and % operation on text, we can concatenate strings using + operator. However, strings cannot be changed once declared and assigned to variables. This property is called immutability
End of explanation
print(fruit[0])
print(fruit[1])
Explanation: Use [] to access specific letters in the string. Python uses 0 indexing. So the first letter is accessed by my_string[0] while my_string[1] accesses the second letter.
End of explanation
favourite_drink = fruit+' '+drink
print("Printing the first to 3rd letter.")
print(favourite_drink[0:3])
print("\nNow I want to print the second to seventh letter:")
print(favourite_drink[1:7])
Explanation: Slicing is a way of getting specific subsets of the string. If you let $x_n$ denote the $n+1$-th letter (note zero indexing) in a string (and by letter this includes whitespace characters as well!) then writing my_string[i:j] returns the subset $$x_i, x_{i+1}, \ldots, x_{j-1}$$ of letters in the string. That means the slice [i:j] takes the letters starting from index i and stops one index before the index indicated by j.
0 indexing and stopping point convention frequently trips up first time users. So take special note of this convention. 0 indexing is used throughout Python especially in matplotlib and pandas.
End of explanation
print(favourite_drink[0:7:2])
# Here's a trick, try this out
print(favourite_drink[3:0:-1])
Explanation: Notice the use of \n in the second print function. This is called a newline character, which does exactly what its name says. Also, in the third print function, notice the separation between e and j. It is not actually empty: the sixth letter of the string is a whitespace character ' '.
Slicing also utilizes arithmetic progressions to return even more specific subsets of strings. So [i:j:k] means that the slice will return $$ x_{i}, x_{i+k}, x_{i+2k}, \ldots, x_{i+mk}$$ where $m$ is the largest (resp. smallest) integer such that $i+mk \leq j-1$ (resp $1+mk \geq j+1$ if $i\geq j$)
End of explanation
# Write your answer here and check it with the output below
Explanation: So what happened above? Well [3:0:-1] means that starting from the 4-th letter $x_3$ which is 'l' return a subtring including $x_{2}, x_{1}$ as well. Note that the progression does not include $x_0 =$ 'A' because the stopping point is non-inclusive of j.
The slice [:j] or [i:] means take substrings starting from the beginning up to the $j$-th letter (i.e. the $x_{j-1}$ letter) and substring starting from the $i+1$-th (i.e. the $x_{i}$) letter to the end of the string.
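For instance, using the favourite_drink string from above:
print(favourite_drink[:5])   # 'Apple' -- everything from the start up to (but not including) index 5
print(favourite_drink[6:])   # 'juice' -- everything from index 6 to the end of the string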
3.2.3 A mini challenge
Print the string favourite_drink in reverse order. How would you do it?
End of explanation
x = 5.0
type(x)
type(favourite_drink)
type(True)
type(500)
Explanation: Answer: eciuj elppA
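One way to produce this output is to use the step argument with both endpoints omitted:
print(favourite_drink[::-1])  # a step of -1 walks through the whole string backwards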
3.3 The type function
All objects in python are instances of classes. It is useful sometimes to find out what type of object we are looking at, especially if it has been assigned to a variable. For this we use the type function.
End of explanation
# Here's a list called staff containing his name, his age and current remuneration
staff = ['Andy', 28, 980.15]
Explanation: 4. list, here's where the magic begins
The list is the fundamental data structure in Python. Lists are analogous to arrays in C or Java. If you use R, lists are analogous to vectors (and not to R's list type).
Declaring a list is as simple as using square brackets [ ] to enclose a list of objects (or variables) separated by commas.
End of explanation
len(staff)
Explanation: 4.1 Properties of list objects and indexing
One of the fundamental properties we can ask about lists is how many objects they contain. We use the len (short for length) function to do that.
End of explanation
staff[0]
Explanation: Perhaps you want to recover that staff's name. It's in the first position of the list.
End of explanation
# type your answer here and run the cell
Explanation: Notice that Python still outputs to console even though we did not use the print function. Actually the print function prints a particularly "nice" string representation of the object, which is why Andy is printed without the quotation marks if print was used.
Can you find me Andy's age now?
End of explanation
staff[1:3]
Explanation: The same slicing rules for strings apply to lists as well. If we wanted Andy's age and wage, we would type staff[1:3]
End of explanation
nested_list = ['apples', 'banana', [1.50, 0.40]]
Explanation: This returns us a sub-list containing Andy's age and remuneration.
4.2 Nested lists
Lists can also contain other lists. This ability to have a nested structure in lists gives it flexibility.
End of explanation
# Accessing items from within a nested list structure.
print(nested_list[2])
# Assigning nested_list[2] to a variable. The variable price represents a list
price = nested_list[2]
print(type(price))
# Getting the smaller of the two floats
print(nested_list[2][1])
Explanation: Notice that if I type nested_list[2], Python will return me the list [1.50, .40]. This can be accessed again using indexing (or slicing notation) [ ].
End of explanation
# append
staff.append('Finance')
print(staff)
# pop away the information about his salary
andys_salary = staff.pop(2)
print(andys_salary)
print(staff)
# oops, made a mistake, I want to reinsert information about his salary
staff.insert(3, andys_salary)
print(staff)
contacts = [99993535, "[email protected]"]
staff = staff+contacts # reassignment of the concatenated list back to staff
print(staff)
Explanation: 4.3 List methods
Right now, let us look at four very useful list methods. Methods are basically operations which modify lists. These are:
pop which allows us to remove an item in a list.
So for example if $x_0, x_1, \ldots, x_n$ are items in a list, calling my_list.pop(r) will modify the list so that it contains only $$x_0, \ldots, x_{r-1}, x_{r+1},\ldots, x_n$$ while returning the element $x_r$.
append which adds items to the end of the list.
Let's say $x_{n+1}$ is the new object you wish to append to the end of the list. Calling the method my_list.append(x_n+1) will modify the list inplace so that the list will now contain $$x_0, \ldots, x_n, x_{n+1}$$ Note that append does not return any output!
insert which as the name suggests, allows us to add items to a list in a particular index location
When using this, type my_list.insert(r, x_{n+1}) with the second argument to the method the object you wish to insert and r the position (still 0 indexed) where this object ought to go in that list. This method modifies the list inplace and does not return any output. After calling the insert method, the list now contains $$x_0,\ldots, x_{r-1}, x_{n+1}, x_{r}, \ldots, x_n$$ This means that my_list[r] = $x_{n+1}$ while my_list[r+1] = $x_{r}$
+ is used to concatenate two lists. If you have two lists and want to join them together producing a union of two (or more lists), use this binary operator.
This works by returning a union of two lists. So $$[ x_1,\ldots, x_n] + [y_1,\ldots, y_m]$$ is the list containing $$ x_1,\ldots, x_n,y_1, \ldots, y_m$$ This change is not permanent unless you assign the result of the operation to another variable.
End of explanation
staff = ['Andy', 28, 'Finance', 980.15, 99993535, '[email protected]']
staff
# type your answer here
print(staff)
Explanation: 4.3.1 Your first programming challenge
Move information for Andy's email to the second position (i.e. index 1) in the list staff in one line of code
End of explanation |
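A possible one-line answer to the challenge above, shown purely as an illustration: it assumes staff still holds the six items from the previous cell, with the email address in the last position.
# Pop the email off the end and insert it at index 1, all in one line
staff.insert(1, staff.pop(-1))
print(staff)   # ['Andy', '[email protected]', 28, 'Finance', 980.15, 99993535]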
6,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Categorical Data
Categoricals are a pandas data type, which correspond to categorical variables in statistics
Step1: Change data type
change data type for "Grade" column to category
documentation for astype()
Step2: Rename the categories
Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace)
Step3: Values in data frame have not changed
tabulate Department, Name, and YearsOfService, by Grade | Python Code:
import pandas as pd
import numpy as np
file_name_string = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Begin/EmployeesWithGrades.xlsx'
employees_df = pd.read_excel(file_name_string, 'Sheet1', index_col=None, na_values=['NA'])
Explanation: Categorical Data
Categoricals are a pandas data type, which correspond to categorical variables in statistics: a variable, which can take
on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social
class, blood types, country affiliations, observation time or ratings via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or
‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, ...) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical
order of the values.
documentation: http://pandas.pydata.org/pandas-docs/stable/categorical.html
End of explanation
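The ordering behaviour described above can be seen in a small self-contained sketch. This is an added illustration, independent of the Excel file loaded earlier, and assumes a reasonably recent pandas; the grade labels are invented for the example.
import pandas as pd
grade_order = pd.CategoricalDtype(categories=["poor", "acceptable", "good", "excellent"], ordered=True)
grades = pd.Series(["good", "poor", "excellent", "good"]).astype(grade_order)
print(grades.min(), grades.max())   # order comes from the category list, not lexical order
print(grades.sort_values())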
employees_df["Grade"] = employees_df["Grade"].astype("category")
Explanation: Change data type
change data type for "Grade" column to category
documentation for astype(): http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html
End of explanation
employees_df["Grade"].cat.categories = ["excellent", "good", "acceptable", "poor", "unacceptable"]
Explanation: Rename the categories
Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace)
End of explanation
employees_df.groupby('Grade').count()
Explanation: Values in data frame have not changed
tabulate Department, Name, and YearsOfService, by Grade
End of explanation |
6,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering Algorithms
Step1: Proof of Concept
Step2: The above block creates 400 points evenly distributed between two clusters around the coordinates (1,1) and (6,6) on the x-y plane with a standard deviation of 1.0. The sklearn function make_blobs returns the data points as well as the truth-labels, but we won't be given the actual labels in the real world so we put them into here into the dummy variable _.
The labels consist of the index of which cluster each point belongs to. We can get an idea of what that looks like by printing the first ten labels
Step3: Likewise, their corresponding data points
Step4: Let's visualize the full dataset
Step5: Very obviously two distinct clusters.
Notes on syntax
Step6: ms = MeanShift() assigns the variable ms to the MeanShift class which we imported from scikit-learn's cluster module. The class has its own methods which give us all the functionality we need. Running the Mean Shift algorithm then is as simple as calling ms.fit(X) which fits the clustering algorithm to our dataset X.
The generated labels and estimated cluster centers are retrieved by calling ms.labels_ and ms.cluster_centers_; don't forget the trailing underscore!
Finally we make a new variable, n_clusters which we set equal to the number of unique labels.
Now we can take a look at where Mean Shift thinks our centers are
Step7: So far so good. The estimates are very close to the actual centroids. Let's visualize this
Step8: There are our two clusters, with their centroids marked by black X's.
Now we can split the data up by cluster
Step9: Or scan through the data set
Step10: Algorithm Limitations
Mean Shift isn't omnipotent. Mean Shift will merge clusters that converge to the same centroid. So if two clusters heavily overlap, they may be treated as one.
We can see this by bringing our actual centroid closer together and/or raising the standard deviation
Step11: However, if there's an outlier point that doesn't converge with the rest, Mean Shift will make it it's own cluster.
Step12: Fancier Visuals
Step13: Setting the z-axis to a 3rd coordinate for data with more dimensions, instead of as the label, makes it clear how hard it can be to eyeball the data | Python Code:
%matplotlib inline
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
Explanation: Clustering Algorithms: Mean Shift
Very often datasets contain information about overall trends that aren't explicitly stated. By looking at a dataset's relation to itself we can infer some of this information. This lies within the field of Unsupervised Machine Learning.
Clustering Algorithms are a class of tools that allow us to identify subsets of the data that tend to cluster together. Two popular clustering algorithms are K-Means and Mean Shift clustering. K-Means is simpler, but has some drawbacks and requires you to tell it how many clusters to fit the data to. We'll instead cover Mean Shift as it's more versatile, and infers on its own how many clusters exist in the data.
Setup:
End of explanation
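To make the contrast with K-Means concrete, here is a minimal added sketch, assuming the same scikit-learn install as above: K-Means must be told the number of clusters up front, while Mean Shift works it out from the data.
from sklearn.cluster import KMeans, MeanShift
from sklearn.datasets import make_blobs
X_demo, _ = make_blobs(n_samples=200, centers=[[1, 1], [6, 6]], cluster_std=1.0)
kmeans = KMeans(n_clusters=2).fit(X_demo)    # number of clusters supplied by us
meanshift = MeanShift().fit(X_demo)          # number of clusters discovered from the data
print(kmeans.cluster_centers_)
print(meanshift.cluster_centers_)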
# the centroids of the 'actual' clusters
centers = [[1,1], [6,6]]
X, _ = make_blobs(n_samples=400, n_features=2, centers=centers, cluster_std=1.)
Explanation: Proof of Concept:
We'll start with a simple example to build some intuition.
First our toy dataset:
End of explanation
_[:10]
Explanation: The above block creates 400 points evenly distributed between two clusters around the coordinates (1,1) and (6,6) on the x-y plane with a standard deviation of 1.0. The sklearn function make_blobs returns the data points as well as the truth-labels, but we won't be given the actual labels in the real world, so we put them here into the dummy variable _.
The labels consist of the index of which cluster each point belongs to. We can get an idea of what that looks like by printing the first ten labels:
End of explanation
X[:10]
Explanation: Likewise, their corresponding data points:
End of explanation
plt.figure(figsize=(15,10)) # set custom display size
plt.scatter(X[:,0], X[:,1], s=10)
Explanation: Let's visualize the full dataset:
End of explanation
ms = MeanShift()
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
n_clusters = len(np.unique(labels))
Explanation: Very obviously two distinct clusters.
Notes on syntax: Python allows for array slicing. some_array[:10] will return the first ten elements of some_array, and some_array[0][-1] will return the last element of the first subarray in some_array. Note some_array[:10] is the same as some_array[0:10]. Python is zero-indexed and -1 is the index of the last item of a list. Caution: some_array[0:-1] will return a slice of some_array up to but not including the last element because Python follows the convention of some_array[start_before : end_before].
Let's see how Mean Shift does on this example:
End of explanation
print(cluster_centers)
Explanation: ms = MeanShift() assigns to the variable ms an instance of the MeanShift class, which we imported from scikit-learn's cluster module. The instance has methods that give us all the functionality we need. Running the Mean Shift algorithm is then as simple as calling ms.fit(X), which fits the clustering algorithm to our dataset X.
The generated labels and estimated cluster centers are retrieved by calling ms.labels_ and ms.cluster_centers_; don't forget the trailing underscore!
Finally we make a new variable, n_clusters which we set equal to the number of unique labels.
Now we can take a look at where Mean Shift thinks our centers are:
End of explanation
colors = ['r.','g.','b.','c.','m.','y.']*5
plt.figure(figsize=(15,10))
for i in range(len(X)):
plt.plot(X[i][0], X[i][1], colors[labels[i]])
plt.scatter(cluster_centers[:,0], cluster_centers[:,1],
marker="x", zorder=10, s=150, c='k')
print("Number of clusters: ", n_clusters)
Explanation: So far so good. The estimates are very close to the actual centroids. Let's visualize this:
End of explanation
X_clstr0, X_clstr1 = X[np.where(labels==0)], X[np.where(labels==1)]
print("Cluster 0:\n", X_clstr0[:5])
print("Cluster 1:\n", X_clstr1[:5])
Explanation: There are our two clusters, with their centroids marked by black X's.
Now we can split the data up by cluster:
End of explanation
for i in range(5):
print("Coordinate: ", X[i], "Label: ", labels[i])
Explanation: Or scan through the data set:
End of explanation
centers = [[1,1], [6,6]]
X, _ = make_blobs(n_samples=400, n_features=2, centers=centers, cluster_std=4.)
plt.figure(figsize=(15,10))
plt.scatter(X[:,0], X[:,1], s=10)
def run_MS(visuals=True):
ms = MeanShift()
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
n_clusters = len(np.unique(labels))
print(cluster_centers)
if visuals:
colors = ['r.','g.','b.','c.','m.','y.']*5
plt.figure(figsize=(15,10))
for i in range(len(X)):
plt.plot(X[i][0], X[i][1], colors[labels[i]])
plt.scatter(cluster_centers[:,0], cluster_centers[:,1],
marker="x", zorder=10, s=150, c='k')
print("Number of clusters: ", n_clusters)
return labels, cluster_centers
run_MS();
Explanation: Algorithm Limitations
Mean Shift isn't omnipotent. Mean Shift will merge clusters that converge to the same centroid. So if two clusters heavily overlap, they may be treated as one.
We can see this by bringing our actual centroid closer together and/or raising the standard deviation:
End of explanation
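The merging behaviour described above is controlled largely by the kernel bandwidth. Here is a small added sketch of tuning it by hand with scikit-learn's estimate_bandwidth helper; the quantile value is an arbitrary choice for illustration.
from sklearn.cluster import estimate_bandwidth
# Smaller quantile -> smaller bandwidth -> more, tighter clusters
bw = estimate_bandwidth(X, quantile=0.2)
ms_narrow = MeanShift(bandwidth=bw).fit(X)
print("bandwidth:", bw, "clusters found:", len(ms_narrow.cluster_centers_))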
centers = [[1,1], [6,6]]
X, _ = make_blobs(n_samples=400, n_features=2, centers=centers, cluster_std=3.)
run_MS();
Explanation: However, if there's an outlier point that doesn't converge with the rest, Mean Shift will make it its own cluster.
End of explanation
labels, cluster_centers = run_MS(visuals=False)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], labels)
ax.scatter(cluster_centers[:,0], cluster_centers[:,1],
np.arange(len(cluster_centers)), c='k', s=180)
Explanation: Fancier Visuals:
End of explanation
centers = [[1,1,1],[5,5,1],[3,10,1]]
X, _ = make_blobs(n_samples=400, centers=centers, cluster_std=1)
labels, cluster_centers = run_MS(visuals=True);
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], X[:,2], c='m')
ax.scatter(cluster_centers[:,0], cluster_centers[:,1],
np.arange(len(cluster_centers)), c='k', s=180)
Explanation: Setting the z-axis to a 3rd coordinate for data with more dimensions, instead of as the label, makes it clear how hard it can be to eyeball the data:
End of explanation |
6,231 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
LightGBM Custom Loss Function
| Python Code::
import lightgbm as lgb
def custom_loss(y_pred, data):
y_true = data.get_label()
error = y_pred-y_true
#1st derivative of loss function
grad = 2 * error
#2nd derivative of loss function
hess = 0 * error + 2
return grad, hess
params = {"learning_rate" : 0.1}
training_data = lgb.Dataset(X_train , label = y_train)
model = lgb.train(train_set=training_data,
params=params,
fobj=custom_loss)
|
6,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
word2vec
<img src="images/book.png" style="width
Step1: Why is word2vec so popular?
Creates a word "cloud", organized by semantic meaning.
Converts text into a numerical form that machine learning algorithms and Deep Learning Neural Nets can then use as input.
<img src="images/firth.png" style="width | Python Code:
output = {'fox': [-0.00449447, -0.00310097]}
input_text = "The quick brown fox"
print(output['fox'])
Explanation: word2vec
<img src="images/book.png" style="width: 300px;" align="middle"/>
Slides: bit.ly/word2vec_talk
Agenda
0) Welcome!
1) What is word2vec?
2) How does word2vec work?
3) What can you do with word2vec?
4) Demo(?)
Slides: bit.ly/word2vec_talk
<img src="images/me_cropped.png" style="width: 300px;"/>
hi, brian. @BrianSpiering
Data Science Faculty @GalvanizeU
Slides: bit.ly/word2vec_talk
(adapted from my Natural Language Processing (NLP) course)
Pop Quiz
Do computers prefer numbers or words?
Numbers
<br>
<br>
word2vec is a series of algorithms to map words (strings) to numbers (lists of floats).
<img src="images/tsne.png" style="width: 300px;"/>
End of explanation
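The mapping sketched above, a word in and a short vector of floats out, is what a trained word2vec model provides. As a rough illustration of how such vectors are usually obtained in practice, and not part of the original slides, here is a minimal gensim sketch; it assumes gensim 4.x, and with a toy corpus this small the vectors are meaningless, it only shows the shape of the API.
from gensim.models import Word2Vec
toy_corpus = [["the", "quick", "brown", "fox"],
              ["the", "lazy", "brown", "dog"]]
model = Word2Vec(sentences=toy_corpus, vector_size=10, window=2, min_count=1, sg=1)  # sg=1 selects skip-gram
print(model.wv["fox"])   # a length-10 vector of floats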
bigrams = ["insurgents killed", "killed in", "in ongoing", "ongoing fighting"]
skip_2_bigrams = ["insurgents killed", "insurgents in", "insurgents ongoing", "killed in",
"killed ongoing", "killed fighting", "in ongoing", "in fighting", "ongoing fighting"]
Explanation: Why is word2vec so popular?
Creates a word "cloud", organized by semantic meaning.
Converts text into a numerical form that machine learning algorithms and Deep Learning Neural Nets can then use as input.
<img src="images/firth.png" style="width: 300px;"/>
“You shall know a word
by the company it keeps”
- J. R. Firth 1957
Distributional Hypothesis: Words that occur in the same contexts tend to have similar meanings
Example:
... government debt problems are turning into banking crises...
... Europe governments needs unified banking regulation to replace the hodgepodge of debt regulations...
The words: government, regulation, and debt probably represent some aspect of banking since they frequently appear near by.
How does word2vec model the Distributional Hypothesis?
word2vec is a very simple neural network:
<img src="images/w2v_neural_net.png" style="width: 350px;"/>
Source
Skip-gram architecture
<img src="images/skip-gram.png" style="width: 300px;"/>
Given the current word, predict the context (surrounding words).
Skip-gram example
“Insurgents killed in ongoing fighting”
End of explanation |
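The bigram and skip-2-bigram lists in the cell above were written out by hand; a small helper like the one below can generate them from any sentence. This is an added illustrative sketch, not code from the slides.
def skip_bigrams(tokens, max_skip=0):
    # pairs (w_i, w_j) with at most max_skip words skipped between them
    pairs = []
    for i, left in enumerate(tokens):
        for j in range(i + 1, min(i + 2 + max_skip, len(tokens))):
            pairs.append(left + " " + tokens[j])
    return pairs
words = "Insurgents killed in ongoing fighting".split()
print(skip_bigrams(words, max_skip=0))  # reproduces the bigrams list above
print(skip_bigrams(words, max_skip=2))  # reproduces the skip-2-bigrams list above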
6,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the scipy Kolmogorov–Smirnov test
A short note on how to use the scipy Kolmogorov–Smirnov test because quite frankly the documentation was not compelling and I always forget how the scipy.stats module handles distributions arguments.
Set up some test data
Using a beta distribution, generate rvs from it and save them as d1, then make a second set of data containing the first set and 50 draws from a different distribution
Step1: Now compute KS test
The trick here is to give the scipy.stats.kstest the data, then a callable function of the cumulative distribution function. There is some clever way to just pass in a string of the name, but I was unconvinved that the argumements were going to the proper place, so I prefer this method | Python Code:
xvals = np.linspace(0, 0.5, 500)
args = [0.5, 20]
d1 = ss.beta.rvs(args[0], args[1], size=1000)
noise = ss.norm.rvs(0.4, 0.1, size=50)
d2 = np.append(d1[:len(d1)-len(noise)], noise)
fig, (ax1, ax2) = plt.subplots(nrows = 2)
ax1.plot(xvals, ss.gaussian_kde(d1).evaluate(xvals), "-k",
label="d1")
ax1.legend()
ax2.plot(xvals, ss.gaussian_kde(d2).evaluate(xvals), "-k",
label="d2")
ax2.legend()
plt.show()
Explanation: Using the scipy Kolmogorov–Smirnov test
A short note on how to use the scipy Kolmogorov–Smirnov test because quite frankly the documentation was not compelling and I always forget how the scipy.stats module handles distributions arguments.
Set up some test data
Using a beta distribution, generate rvs from it and save them as d1, then make a second set of data containing the first set and 50 draws from a different distribution: this means d2 is not beta-distributed. We plot the Gaussian KDE of each to show how large the effect is graphically
End of explanation
beta_cdf = lambda x: ss.beta.cdf(x, args[0], args[1])
ss.kstest(d1, beta_cdf)
ss.kstest(d2, beta_cdf)
Explanation: Now compute KS test
The trick here is to give scipy.stats.kstest the data, then a callable for the cumulative distribution function. There is a clever way to just pass in the distribution name as a string, but I was unconvinced that the arguments were going to the proper place, so I prefer this method
End of explanation |
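For completeness, the clever way mentioned above, passing the distribution name as a string together with an args tuple, looks like the sketch below; both calls should give essentially the same statistics as the lambda version. It reuses d1, d2 and args from the cells above.
print(ss.kstest(d1, 'beta', args=(args[0], args[1])))
print(ss.kstest(d2, 'beta', args=(args[0], args[1])))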
6,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> The Birthday Problem</center>
In this notebook we compute the probabilities of the events $E_n$ that, in a group of n people, $2\leq n\leq 365$, at least two share the same birthday,
and we visualise these probabilities side by side.
We then generate birthdays at random and check that the experimental probabilities are very close to the theoretical ones.
The theoretical formula for the probability of at least one birthday coincidence in a group of $n$ people, derived in Lecture 7, is
Step1: We now compute the probabilities of at least 2 coinciding birthdays in a group of $n$ people with
$2\leq n\leq 60$ and plot them
Step2: We display the computed probabilities
Step3: To verify the birthday problem experimentally, we first present a few elements of Python programming that we will use.
We will build a dictionary whose keys are the birthdays from 1 to 365 inclusive, and to each key we assign
the number of people, out of the $n$, who have that birthday.
To create the dictionary compactly rather than explicitly, we proceed as follows
Step4: To start counting from 1 instead of 0, we set the start as follows
Step5: From the list nrbdays we then build the dictionary of interest as follows
Step6: For the simulation we import the function randint(m,n), which draws uniformly (with equal probability) from the set of integers $\{m, m+1, \ldots, n\}$, $m<n$.
Step7: We now repeat the random experiment $N$ times with the same number of participants and count the number of coincidences in each
trial | Python Code:
from __future__ import division
def prob_theor(n):
if n==2:
return (1-1/365)
else:
return prob_theor(n-1)*(1-(n-1)/365)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot') # set the plotting style.
# Starting with matplotlib 1.4 several predefined styles were introduced
# http://matplotlib.org/users/style_sheets.html
plt.rcParams['figure.figsize'] = (8.0, 4.0)
Explanation: <center> The Birthday Problem</center>
In this notebook we compute the probabilities of the events $E_n$ that, in a group of n people, $2\leq n\leq 365$, at least two share the same birthday,
and we visualise these probabilities side by side.
We then generate birthdays at random and check that the experimental probabilities are very close to the theoretical ones.
The theoretical formula for the probability of at least one birthday coincidence in a group of $n$ people, derived in Lecture 7, is:
$P(E_n)=1-\displaystyle\prod_{k=1}^{n-1}\left(1-\displaystyle\frac{k}{365}\right)$
We define the recursive function prob_theor, which computes the probability of the complementary event
$P(\complement E_n)=\displaystyle\prod_{k=1}^{n-1}\left(1-\displaystyle\frac{k}{365}\right)$
End of explanation
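The same probabilities can also be computed without recursion, straight from the product formula. This is an added sketch using numpy, which this notebook imports a little later anyway.
import numpy as np
def prob_coincidence(n):
    # P(E_n) = 1 - prod_{k=1}^{n-1} (1 - k/365)
    return 1 - np.prod(1 - np.arange(1, n) / 365.0)
print(prob_coincidence(23))   # the classic result, about 0.507
print(1 - prob_theor(23))     # should agree with the recursive version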
nrPartic=60
probsEn=[1- prob_theor(n) for n in range(2,nrPartic+1)]
n=[ k for k in range(2, nrPartic+1)]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.stem(n, probsEn, 'b', bottom=0)
ax.set_xlabel('n')
ax.set_ylabel(r'$P(E_n)$')
ax.set_title('Probabilitatile $P(E_n)$, $n=2,\ldots, 60$')
Explanation: We now compute the probabilities of at least 2 coinciding birthdays in a group of $n$ people with
$2\leq n\leq 60$ and plot them:
End of explanation
import numpy as np
print np.array(probsEn).round(3)
Explanation: We display the computed probabilities:
End of explanation
L=[2, 3, 'u', 'z']
print list(enumerate(L))
Explanation: To verify the birthday problem experimentally, we first present a few elements of Python programming that we will use.
We will build a dictionary whose keys are the birthdays from 1 to 365 inclusive, and to each key we assign
the number of people, out of the $n$, who have that birthday.
To create the dictionary compactly rather than explicitly, we proceed as follows: we build the list nrbdays=[0]*365 of length 365, with all elements set to $0$ (at the start of the experiment we do not yet know the participants' birthdays).
In general, a list L=[2, 3, 'u', 'z'] is associated with a list of pairs $(i, L[i])$ by calling the function enumerate(L):
End of explanation
print list(enumerate(L, start=1))
Explanation: To start counting from 1 instead of 0, we set the start as follows:
End of explanation
nrbdays=[0]*365
birthdays=dict(enumerate(nrbdays, start=1))
print birthdays
Explanation: From the list nrbdays we then build the dictionary of interest like this: birthdays=dict(enumerate(nrbdays, start=1)).
In other words, the dictionary will have as keys the "position numbers" of each pair
End of explanation
from random import randint
def bdayExper(nrP=23): #nrP= numarul de participanti la chef
#generam aleator cate o zi de nastere si marim cu 1 contorul ce da nr de participandti nascuti in acea zi
# calculam si returnam nr de coincidente
coincidence=0 # initial nu exista nicio coincidenta
nrbdays=[0]*365 #numarul initial de participanti nascuti intr-o zi a anului este 0
birthdays=dict(enumerate(nrbdays, start=1)) # creaza dictionarul {1:0, 2:0, ...., 365:0}
for n in range(nrP):
birthdays[randint(1,365)]+=1 #actualizam datele din dictionar
for k in birthdays.keys(): # parcurgem dictionarul si numaram cate coincidente s-au inregistrat
# intr-o singura simulare
if birthdays[k]>1:
coincidence+=1
return coincidence
Explanation: For the simulation we import the function randint(m,n), which draws uniformly (with equal probability) from the set of integers $\{m, m+1, \ldots, n\}$, $m<n$.
End of explanation
N=1000
nrP=30
Lcoinc=[bdayExper(nrP=30) for k in range(N)]# lista numarului de coincidente in fiecare incercare din cele N
#E mai simplu sa numaram cate incercari exista fara nicio coincidenta:
NrCoinc0=Lcoinc.count(0)# numarul de incercari cu nicio coincidenta
print 'Probabilitate experimentala a cel putin unei coincidente intr-un grup de ', nrP, 'persoane este:',\
1.0*(N-NrCoinc0)/N, '\n', 'Probabilitatea teoretica:', 1-prob_theor(nrP)
print 'Lista numarului de coincidente in fiecare incercare din cele N', Lcoinc
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: We now repeat the random experiment $N$ times with the same number of participants and count the number of coincidences in each
trial:
End of explanation |
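As a final cross-check, added here and not part of the original notebook, the empirical frequencies can be compared against the theoretical curve for several group sizes at once, reusing bdayExper defined above; N is kept small so the loop runs quickly, and the print statement follows the notebook's Python 2 style.
group_sizes = [10, 20, 23, 30, 40, 50]
N = 500
for nrP in group_sizes:
    hits = sum(1 for _ in range(N) if bdayExper(nrP=nrP) > 0)
    print nrP, round(1.0*hits/N, 3), round(1 - prob_theor(nrP), 3)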
6,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Likelihood-free Inference of Stable Distribution Parameters
1. Stable Distribution
Stable distributions, also known as $\alpha$-stable distributions and Lévy (alpha) stable distribution, are the distibutions with 4 parameters
Step1: 2.1. Example Data
We now generate random data from stable distribution with various parameters. The goal is to give intuitive explanation of how changing parameters affect the distribution.
Stability Parameter $\alpha$
We first see how changing stability parameter while keeping the others same affect the distribution. Histograms indicate that increased $\alpha$ values yield samples that are closer to the mean, which is zero in these figures. Observe that this is different than variance since only very few samples are scattered, so $\alpha$ does not control the variance around the mean.
Step2: Skewness Parameter $\beta$
Below, we observe how histograms change with $\beta$
Step3: Location Parameter $\mu$
The impact of $\mu$ is rather straightforward
Step4: Scale Parameter $\sigma$
Similar to $\mu$, we are familiar with the $\sigma$ parameter from Gaussian distribution. Below, we see that playing with $\sigma$, we can control the variance of the samples.
Step5: Sampling from Standard Normal Distribution
Now we generate data from standard Gaussian distribution by setting $\alpha=2$ and $\beta=0$.
Step6: 3. Experiments
To analyze the performance of likelihood-free inference methods on this problem, we follow a similar experiment setup as presented in [2]
Step7: 3.2. Summary Statistics
In this section, we give brief explanations for 5 sets of summary statistics. Last four summary vectors are based on characteristic function of stable distribution. Thus the expressions for the summary vectors are too complex to discuss here. We give the exact formulas of the first set of summary statistics and those of the last four items below can be checked out in Section 3.1.1 of [2].
S1 - McCulloch’s Quantiles
McCulloch has developed a method for estimating the parameters based on sample quantiles [5]. He gives consistent estimators, which are the functions of sample quantiles, of population quantiles, and provides tables for recovering $\alpha$ and $\beta$ from the estimated quantiles. Note that in contrast to his work, we use the sample means as the summary for the location parameter $\mu$. If $\hat{q}p(x)$ denotes the estimate for $p$'th quantile, then the statictics are as follows
Step8: Below, I visualize the ELFI graph. Note that the sole purpose of the below cell is to draw the graph. Therefore, in the next cell, I re-create the initial model (and the priors).
Step9: 3.3. Data Sets
Below, the data sets are visualized. In each data set $\mu=10$ and $\sigma=10$, with $\beta=0.9$ in the easy set and $\beta=0.3$ in the other two, while $\alpha$ is set to 1.7/1.2/0.7 in the easy/medium/hard data sets. As we see, smaller $\alpha$ values lead to more scattered, heavier-tailed samples.
Step10: 3.4. Running the Experiments
Step11: Inferring Model Parameters Altogether
In this method, we use summary statistics to infer all the model parameters. This is the natural way of doing ABC inference as long as the summary statistics are informative about all parameters.
Step12: Inferring Model Parameters Separately
Now, we try to infer each model parameter separately. This way of inference applies to the first three summary statictics. More concretely, our summary statistics will be not vectors but just some real numbers which are informative about only a single parameter. Note that we repeat the procedure for all four parameters.
Step13: 4. Results
Step14: 4.1. When Parameters Inferred Altogether
Step15: The tables may not show the performance clearly. Thus, we plot the marginals of 5 summary vectors when rejection sampling is executed on the easy data set. As can be seen, only $\mu$ is identified well. S1 estimates a somewhat good interval for $\sigma$. All the other plots are almost random draws from the prior.
Step16: 4.2. When Parameters Inferred Separately
Step17: Now we take a look at the marginals for S1 only (Note that due to the design of the inference, the output of each experiment is the marginals of a single variable). Below are the marginals of the first 4 experiment, which correspond to $\alpha, \beta, \mu$ and $\sigma$ variables. True values of the parameters are given in paranthesis. | Python Code:
import elfi
import scipy.stats as ss
import numpy as np
import matplotlib.pyplot as plt
import pickle
# http://www.ams.sunysb.edu/~yiyang/research/presentation/proof_simulate_stable_para_estimate.pdf
def stable_dist_rvs(alpha,beta,mu,sig,Ns=200,batch_size=1,random_state=None):
'''
generates random numbers from stable distribution
Input
alpha - stability parameter
beta - skewness parameter
c - scale parameter
mu - location parameter
N - number of random numbers
Ns - number of samples in each batch
batch_size
random_state
sigma in the last column of each batch
'''
assert(np.all(alpha!=1))
N = batch_size
Rs = np.zeros((batch_size,Ns))
for i in range(len(alpha)):
U = ss.uniform.rvs(size=Ns,random_state=random_state)*np.pi - np.pi/2
E = ss.expon.rvs(size=Ns,random_state=random_state)
S = (1 + (beta[i]*np.tan(np.pi*alpha[i]/2))**2 )**(1/2/alpha[i])
B = np.arctan(beta[i]*np.tan(np.pi*alpha[i]/2)) / alpha[i]
X = S * np.sin(alpha[i]*(U+B)) / (np.cos(U)**(1/alpha[i])) * \
(np.cos(U-alpha[i]*(U+B))/E)**((1-alpha[i])/alpha[i])
R = sig[i]*X + mu[i]
Rs[i,:] = R.reshape((1,-1))
return Rs
# check the simulator is correct
randstate = np.random.RandomState(seed=102340)
y0 = stable_dist_rvs([1.7],[0.9],[10],[10],200,1,random_state=randstate)
randstate = np.random.RandomState(seed=102340)
y0scipy = ss.levy_stable.rvs(1.7,0.9,10,10,200,random_state=randstate).reshape((1,-1))
print(np.mean(y0) - np.mean(y0scipy))
print(np.var(y0) - np.var(y0scipy))
Explanation: Likelihood-free Inference of Stable Distribution Parameters
1. Stable Distribution
Stable distributions, also known as $\alpha$-stable or Lévy alpha-stable distributions, are distributions with 4 parameters:
* $\alpha \in (0, 2]$: stability parameter
* $\beta \in [-1, 1]$: skewness parameter
* $\mu \in (-\infty, \infty)$: location parameter
* $\sigma \in (0, \infty)$: scale parameter
If two independent random variables follow stable distribution, then their linear combinations also have the same distribution up to scale and location parameters[1]. That is to say, if two random variables are generated using the same stability and skewness parameters, their linear combinations would have the same ($\alpha$ and $\beta$) parameters. Note that parameterization is different for multivariate stable distributions but in this work we are only interested in the univariate version.
The distribution has several attractive properties, such as allowing infinite variance/skewness and heavy tails, and has therefore been applied in many domains including statistics, finance, and signal processing [2]. Some special cases of the stable distribution are as follows:
* If $\alpha=2$ and $\beta=0$, the distribution is Gaussian
* If $\alpha=1$ and $\beta=0$, the distribution is Cauchy
* Variance is undefined for $\alpha<2$ and mean is undefined for $\alpha\leq 1$ (Undefined meaning that the integrals for these moments are not finite)
Stable distributions have no general analytic expressions for the density, median, mode or entropy. On the other hand, it is possible to generate random variables given fixed parameters [2,3]. Therefore, ABC techniques are suitable to estimate the unknown parameters of stable distributions. In this notebook, we present how the estimation can be done using ELFI framework. A good list of alternative methods for the parameter estimation is given in [2].
2. Simulator and Data Generation
Below is the simulator implementation. The method takes 4 parameters of the stable distribution and generates random variates. We follow the algorithm outlined in [3].
End of explanation
alphas = np.array([1.4, 1.6, 1.8, 1.99])
betas = np.array([-0.99, -0.5, 0.5, 0.99])
mus = np.array([-200, -100, 0, 100])
sigs = np.array([0.1, 1, 5, 100])
beta = 0
mu = 10
sig = 5
y = stable_dist_rvs(alphas,np.repeat(beta,4),np.repeat(mu,4),\
np.repeat(sig,4),Ns=1000,batch_size=4)
plt.figure(figsize=(20,5))
for i in range(4):
plt.subplot(1,4,i+1)
plt.hist(y[i,:], bins=10)
plt.title("({0:.3g}, {1:.3g}, {2:.3g}, {3:.3g})".format(alphas[i],beta,mu,sig),fontsize=18)
Explanation: 2.1. Example Data
We now generate random data from stable distribution with various parameters. The goal is to give intuitive explanation of how changing parameters affect the distribution.
Stability Parameter $\alpha$
We first see how changing the stability parameter while keeping the others the same affects the distribution. Histograms indicate that increased $\alpha$ values yield samples that are closer to the centre, which is 10 in these figures. Observe that this is different from the variance, since only very few samples are scattered far away, so $\alpha$ does not control the variance around the centre.
End of explanation
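A quick numerical illustration of the few-but-extreme behaviour described above, added here and reusing the stable_dist_rvs simulator defined earlier: for smaller alpha a noticeably larger fraction of samples lands far from the centre, even though most of the mass stays close to it.
for a in [1.2, 1.6, 1.99]:
    y_tail = stable_dist_rvs(np.array([a]), np.array([0.0]), np.array([0.0]), np.array([1.0]),
                             Ns=20000, batch_size=1)
    frac_far = np.mean(np.abs(y_tail) > 10)   # fraction of samples more than 10 scale units out
    print("alpha = {0:.2f}, fraction with |x| > 10: {1:.4f}".format(a, frac_far))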
alpha = 1.4
mu = 0
sig = 0.1
y = stable_dist_rvs(np.repeat(alpha,4),betas,np.repeat(mu,4),np.repeat(sig,4),Ns=200,batch_size=4)
plt.figure(figsize=(20,5))
for i in range(4):
plt.subplot(1,4,i+1)
plt.hist(y[i,:])
plt.title("({0:.3g}, {1:.3g}, {2:.3g}, {3:.3g})".format(alpha,betas[i],mu,sig),fontsize=18)
Explanation: Skewness Parameter $\beta$
Below, we observe how histograms change with $\beta$
End of explanation
alpha = 1.9
beta = 0.5
sigma = 1
y = stable_dist_rvs(np.repeat(alpha,4),np.repeat(beta,4),mus,\
np.repeat(sigma,4),Ns=1000,batch_size=4)
plt.figure(figsize=(20,5))
for i in range(4):
plt.subplot(1,4,i+1)
plt.hist(y[i,:])
plt.title("({0:.3g}, {1:.3g}, {2:.3g}, {3:.3g})".format(alpha,beta,mus[i],sigma),fontsize=18)
Explanation: Location Parameter $\mu$
The impact of $\mu$ is rather straightforward: The mean value of the distribution changes. Observe that we set $\alpha$ to a rather high value (1.9) so that the distribution looks like Gaussian and the role of $\mu$ is therefore more evident.
End of explanation
alpha = 1.8
beta = 0.5
mu = 0
y = stable_dist_rvs(np.repeat(alpha,4),np.repeat(beta,4),np.repeat(mu,4),sigs,Ns=1000,batch_size=4)
plt.figure(figsize=(20,5))
for i in range(4):
plt.subplot(1,4,i+1)
plt.hist(y[i,:])
plt.title("({0:.3g}, {1:.3g}, {2:.3g}, {3:.3g})".format(alpha,beta,mu,sigs[i]),fontsize=18)
Explanation: Scale Parameter $\sigma$
Similar to $\mu$, we are familiar with the $\sigma$ parameter from Gaussian distribution. Below, we see that playing with $\sigma$, we can control the variance of the samples.
End of explanation
y = stable_dist_rvs(np.array([2]),np.array([0]),np.array([0]),np.array([1]),Ns=10000,batch_size=1)
plt.figure(figsize=(4,4))
plt.hist(y[0,:],bins=10)
plt.title('Standard Gaussian samples drawn using simulator');
Explanation: Sampling from Standard Normal Distribution
Now we generate data from standard Gaussian distribution by setting $\alpha=2$ and $\beta=0$.
End of explanation
elfi.new_model()
alpha = elfi.Prior(ss.uniform, 1.1, 0.9)
beta = elfi.Prior(ss.uniform, -1, 2)
mu = elfi.Prior(ss.uniform, -300, 600)
sigma = elfi.Prior(ss.uniform, 0, 300)
Explanation: 3. Experiments
To analyze the performance of likelihood-free inference methods on this problem, we follow a similar experiment setup as presented in [2]:
Data Generation: We first generate three datasets that differ mainly in the stability parameter. Note that the smaller the stability parameter, the heavier the tails and the harder the estimation problem. The true parameters used to generate the easiest data set are the same as in [2]. Each data set consists of 200 samples.
Summary Statistics: Previous methods for stable distribution parameter estimation dominantly use five summary statistic vectors, which we denote by $S_1$-$S_5$. In this work, we implement all of them and test the performance of the methods on each vector separately.
Inference: We then run rejection sampling and SMC using each summary statistic vector independently so that we can judge how informative the summary vectors are. We use the same number of data points and (simulator) sample sizes as in [2]. We first try to infer all the parameters together using summary statistic vectors that are informative about all model variables, and then we infer one variable at a time with appropriate summary statistics.
3.1. Defining the Model and Priors
Similar to [4,2], we consider a restricted domain for the priors:
* $\alpha \sim [1.1, 2]$
* $\beta \sim [-1, 1]$
* $\mu \sim [-300, 300]$
* $\sigma \sim (0, 300]$
Limiting the domains of $\mu$ and $\sigma$ is meaningful for practical purposes. We also restrict the possible values that $\alpha$ can take because summary statistics are defined for $\alpha>1$.
End of explanation
def S1(X):
q95 = np.percentile(X,95,axis=1)
q75 = np.percentile(X,75,axis=1)
q50 = np.percentile(X,50,axis=1)
q25 = np.percentile(X,25,axis=1)
q05 = np.percentile(X,5,axis=1)
Xalpha = (q95-q05) / (q75-q25)
Xbeta = (q95+q05-2*q50) / (q95-q05)
Xmu = np.mean(X,axis=1)
Xsig = (q75-q25)
return np.column_stack((Xalpha,Xbeta,Xmu,Xsig))
def S1_alpha(X):
X = S1(X)
return X[:,0]
def S1_beta(X):
X = S1(X)
return X[:,1]
def S1_mu(X):
X = S1(X)
return X[:,2]
def S1_sigma(X):
X = S1(X)
return X[:,3]
def S2(X):
ksi = 0.25
N = int(np.floor((X.shape[1]-1)/3))
R = X.shape[0]
Z = np.zeros((R,N))
for i in range(N):
Z[:,i] = X[:,3*i] - ksi*X[:,3*i+1] - (1-ksi)*X[:,3*i+2]
V = np.log(np.abs(Z))
U = np.sign(Z)
sighat = np.mean(V,1)
betahat = np.mean(U,1)
t1 = 6/np.pi/np.pi*np.var(V,1) - 3/2*np.var(U,1)+1
t2 = np.power(1+np.abs(betahat),2)/4
alphahat = np.max(np.vstack((t1,t2)),0)
muhat = np.mean(X,axis=1)
return np.column_stack((alphahat,betahat,muhat,sighat))
def S2_alpha(X):
X = S2(X)
return X[:,0]
def S2_beta(X):
X = S2(X)
return X[:,1]
def S2_mu(X):
X = S2(X)
return X[:,2]
def S2_sigma(X):
X = S2(X)
return X[:,3]
def S3(X):
t = [0.2, 0.8, 1, 0.4]
pht = np.zeros((X.shape[0],len(t)))
uhat = np.zeros((X.shape[0],len(t)))
for i in range(len(t)):
pht[:,i] = np.mean(np.exp(1j*t[i]*X),1)
uhat[:,i] = np.arctan( np.sum(np.cos(t[i]*X),1) / np.sum(np.sin(t[i]*X),1) )
alphahat = np.log(np.log(np.abs(pht[:,0]))/np.log(np.abs(pht[:,1]))) / np.log(t[0]/t[1])
sighat = np.exp( (np.log(np.abs(t[0]))*np.log(-np.log(np.abs(pht[:,1]))) - \
np.log(np.abs(t[1]))*np.log(-np.log(np.abs(pht[:,0]))))/ \
np.log(t[0]/t[1]) )
betahat = (uhat[:,3]/t[3]-uhat[:,2]/t[2]) / np.power(sighat,alphahat) / np.tan(alphahat*np.pi/2) \
/ (np.power(np.abs(t[3]),alphahat-1) - np.power(np.abs(t[2]),alphahat-1) )
muhat = (np.power(np.abs(t[3]),alphahat-1)*uhat[:,3]/t[3] - np.power(np.abs(t[2]),alphahat-1)*uhat[:,2]/t[2]) \
/ (np.power(np.abs(t[3]),alphahat-1) - np.power(np.abs(t[2]),alphahat-1) )
return np.column_stack((alphahat,betahat,muhat,sighat))
def S3_alpha(X):
X = S3(X)
return X[:,0]
def S3_beta(X):
X = S3(X)
return X[:,1]
def S3_mu(X):
X = S3(X)
return X[:,2]
def S3_sigma(X):
X = S3(X)
return X[:,3]
def S4(X):
Xc = X.copy()
Xc = Xc - np.median(Xc,1).reshape((-1,1))
Xc = Xc / 2 / ss.iqr(Xc,1).reshape((-1,1))
ts = np.linspace(-5,5,21)
phts = np.zeros((Xc.shape[0],len(ts)))
for i in range(len(ts)):
phts[:,i] = np.mean(np.exp(1j*ts[i]*Xc),1)
return phts
def S5(X):
N = X.shape[0]
R = np.zeros((N,23))
kss = np.zeros(N).reshape((N,1))
for i in range(N):
R[i,0] = ss.ks_2samp(X[i,:],z.flatten()).statistic
R[:,1] = np.mean(X,axis=1)
qs = np.linspace(0,100,21)
qs[0] = 1
qs[-1] = 99
for i in range(21):
R[:,i+2] = np.percentile(X,qs[i],axis=1)
return R
Explanation: 3.2. Summary Statistics
In this section, we give brief explanations for 5 sets of summary statistics. Last four summary vectors are based on characteristic function of stable distribution. Thus the expressions for the summary vectors are too complex to discuss here. We give the exact formulas of the first set of summary statistics and those of the last four items below can be checked out in Section 3.1.1 of [2].
S1 - McCulloch’s Quantiles
McCulloch developed a method for estimating the parameters based on sample quantiles [5]. He gives consistent estimators of the population quantiles as functions of the sample quantiles, and provides tables for recovering $\alpha$ and $\beta$ from the estimated quantiles. Note that, in contrast to his work, we use the sample mean as the summary for the location parameter $\mu$. If $\hat{q}_p(x)$ denotes the estimate of the $p$'th quantile, then the statistics are as follows:
\begin{align}
\hat{v}_\alpha = \frac{\hat{q}_{0.95}(\cdot)-\hat{q}_{0.05}(\cdot)}{\hat{q}_{0.75}(\cdot)-\hat{q}_{0.25}(\cdot)} \qquad \hat{v}_\beta = \frac{\hat{q}_{0.95}(\cdot)+\hat{q}_{0.05}(\cdot)-2\hat{q}_{0.5}(\cdot)}{\hat{q}_{0.95}(\cdot)-\hat{q}_{0.05}(\cdot)} \qquad \hat{v}_\mu = \frac{1}{N}\sum_{i=1}^N x_i \qquad \hat{v}_\sigma = \frac{\hat{q}_{0.75}(\cdot)-\hat{q}_{0.25}(\cdot)}{\sigma}
\end{align}
where the samples are denoted by $x_i$. In practice, we observe that the $\frac{1}{\sigma}$ term in $\hat{v}_\sigma$ degrades the performance, so we ignore it.
S2 - Zolotarev’s Transformation
Zolotarev gives an alternative parameterization of stable distribution in terms of its characteristic function [6]. The characteristic function of a probability density function is simply its Fourier transform and it completely defines the pdf [7]. More formally, the characteristic function of a random variable $X$ is defined to be
\begin{align}
\phi_X(t) = \mathbb{E}[e^{itX}]
\end{align}
The exact statistics are not formulated here as it would be out of scope of this project but one can see, for example [6] or [2] for details.
S3 - Press’ Method of Moments
By evaluating the characteristic function at particular time points, it is possible to obtain the method of moment equations [8]. In turn, these equations can be used to obtain estimates for the model parameters. We follow the recommended evaluation time points in [2].
S4 - Empirical Characteristic Function
The formula for empirical characteristic function is
\begin{align}
\hat{\phi}_X(t) = \frac{1}{N} \sum_{i=1}^N e^{itX_i}
\end{align}
where $X_i$ denotes the samples and $t \in (-\infty,\infty)$. So, the extracted statistics are $\left(\hat{\phi}_X(t_1),\hat{\phi}_X(t_2),\ldots,\hat{\phi}_X(t_20\right)$ and $t=\left{ \pm 0.5, \pm 1 \ldots \pm 5 \right}$
S5 - Mean, Quantiles and Kolmogorov–Smirnov Statistic
The Kolmogorov–Smirnov statistic measures the maximum absolute distance between a cumulative density function and the empirical distribution function, which is defined as a step function that jumps up by $1/n$ at each of the $n$ data points. By computing the statistic on two empricial distribution functions (rather than a cdf), one can test whether two underlying one-dimensional probability distributions differ. Observe that in ABC setting we compare the empirical distribution function of the observed data and the data generated using a set of candidate parameters. In addition to Kolmogorov–Smirnov statistic, we include the mean and a set of quantiles $\hat{q}_p(x)$ where $p \in {0.01, 0.05, 0.1, 0.15, \ldots, 0.9, 0.95, 0.99}$.
End of explanation
sim = elfi.Simulator(stable_dist_rvs,alpha,beta,mu,sigma,observed=None)
SS1 = elfi.Summary(S1, sim)
SS2 = elfi.Summary(S2, sim)
SS3 = elfi.Summary(S3, sim)
SS4 = elfi.Summary(S4, sim)
SS5 = elfi.Summary(S5, sim)
d = elfi.Distance('euclidean',SS1,SS2,SS3,SS4,SS5)
elfi.draw(d)
elfi.new_model()
alpha = elfi.Prior(ss.uniform, 1.1, 0.9)
beta = elfi.Prior(ss.uniform, -1, 2)
mu = elfi.Prior(ss.uniform, -300, 600)
sigma = elfi.Prior(ss.uniform, 0, 300)
Explanation: Below, I visualize the ELFI graph. Note that the sole purpose of the below cell is to draw the graph. Therefore, in the next cell, I re-create the initial model (and the priors).
End of explanation
Ns = 200
alpha0 = np.array([1.7,1.2,0.7])
beta0 = np.array([0.9,0.3,0.3])
mu0 = np.array([10,10,10])
sig0 = np.array([10,10,10])
y0 = stable_dist_rvs(alpha0,beta0,mu0,sig0,Ns,batch_size=3)
plt.figure(figsize=(18,6))
plt.subplot(1,3,1)
plt.hist(y0[0,:])
plt.title('Data Set 1 (Easy)', fontsize=14)
plt.subplot(1,3,2)
plt.hist(y0[1,:])
plt.title('Data Set 2 (Medium)', fontsize=14)
plt.subplot(1,3,3)
plt.hist(y0[2,:])
plt.title('Data Set 3 (Hard)', fontsize=14);
z = y0[0,:].reshape((1,-1))
Explanation: 3.3. Data Sets
Below, the data sets are visualized. In each data set $\mu=10$ and $\sigma=10$, with $\beta=0.9$ in the easy set and $\beta=0.3$ in the other two, while $\alpha$ is set to 1.7/1.2/0.7 in the easy/medium/hard data sets. As we see, smaller $\alpha$ values lead to more scattered, heavier-tailed samples.
End of explanation
class Experiment():
def __init__(self, inf_method, dataset, sumstats, quantile=None, schedule=None, Nsamp=100):
'''
inf_method - 'SMC' or 'Rejection'
dataset - 'Easy', 'Medium' or 'Hard'
sumstats - 'S1', 'S2', 'S3', 'S4' or 'S5'
'''
self.inf_method = inf_method
self.dataset = dataset
self.sumstats = sumstats
self.quantile = quantile
self.schedule = schedule
self.Nsamp = Nsamp
self.res = None
def infer(self):
# read the dataset
if self.dataset == 'Easy':
z = y0[0,:].reshape((1,-1))
elif self.dataset == 'Medium':
z = y0[1,:].reshape((1,-1))
elif self.dataset == 'Hard':
z = y0[2,:].reshape((1,-1))
sim = elfi.Simulator(stable_dist_rvs,alpha,beta,mu,sigma,observed=z)
ss_func = globals().get(self.sumstats)
SS = elfi.Summary(ss_func, sim)
d = elfi.Distance('euclidean',SS)
# run the inference and save the results
if self.inf_method == 'SMC':
algo = elfi.SMC(d, batch_size=Ns)
self.res = algo.sample(self.Nsamp, self.schedule)
elif self.inf_method == 'Rejection':
algo = elfi.Rejection(d, batch_size=Ns)
self.res = algo.sample(self.Nsamp, quantile=self.quantile)
Explanation: 3.4. Running the Experiments
End of explanation
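Before looping over every combination below, a single run of the helper class can be sanity-checked on its own. This is an added illustration; the arguments are simply the easiest configuration.
demo_exp = Experiment('Rejection', 'Easy', 'S1', quantile=0.01, Nsamp=100)
demo_exp.infer()
print(demo_exp.res.threshold)    # acceptance threshold found by rejection sampling
demo_exp.res.plot_marginals();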
datasets = ['Easy','Medium','Hard']
sumstats = ['S1','S2','S3','S4','S5']
inferences = ['Rejection','SMC']
exps_alt = []
for dataset in datasets:
for sumstat in sumstats:
rej = Experiment('Rejection',dataset,sumstat,quantile=0.01,Nsamp=1000)
rej.infer()
exps_alt.append(rej)
schedule = [rej.res.threshold*4, rej.res.threshold*2, rej.res.threshold]
smc = Experiment('SMC',dataset,sumstat,schedule=schedule,Nsamp=1000)
smc.infer()
exps_alt.append(smc)
Explanation: Inferring Model Parameters Altogether
In this method, we use summary statistics to infer all the model parameters. This is the natural way of doing ABC inference as long as the summary statistics are informative about all parameters.
End of explanation
params = ['alpha','beta','mu','sigma']
exps_sep = []
for dataset in datasets:
for sumstat in sumstats[0:3]:
for param in params:
sumstat_ = sumstat + '_' + param
rej = Experiment('Rejection',dataset,sumstat_,quantile=0.01,Nsamp=250)
rej.infer()
exps_sep.append(rej)
schedule = [rej.res.threshold*4, rej.res.threshold*2, rej.res.threshold]
smc = Experiment('SMC',dataset,sumstat_,schedule=schedule,Nsamp=250)
smc.infer()
exps_sep.append(smc)
file_pi = open('exps_sep.obj', 'wb')
pickle.dump(exps_sep, file_pi)
Explanation: Inferring Model Parameters Separately
Now, we try to infer each model parameter separately. This way of inference applies to the first three sets of summary statistics. More concretely, our summary statistics will not be vectors but single real numbers, each informative about only one parameter. Note that we repeat the procedure for all four parameters.
End of explanation
from IPython.display import display, Markdown, Latex
def print_results_alt(dataset,inference,exps_):
nums = []
for sumstat in sumstats:
exp_ = [e for e in exps_ if e.sumstats==sumstat][0]
for param in params:
nums.append(np.mean(exp_.res.outputs[param]))
nums.append(np.std(exp_.res.outputs[param]))
if dataset == 'Easy':
ds_id = 0
elif dataset == 'Medium':
ds_id = 1
elif dataset == 'Hard':
ds_id = 2
tmp = '### {0:s} Data Set - {1:s}'.format(dataset,inference)
display(Markdown(tmp))
tmp = "| Variable | True Value | S1 | S2 | S3 | S4 | S5 | \n \
|:-----: | :----------: |:-------------:|:-----:|:-----:|:-----:|:-----:| \n \
| {44:s} | {40:4.2f} | {0:4.2f} $\pm$ {1:4.2f} | {8:4.2f} $\pm$ {9:4.2f} | {16:4.2f} $\pm$ {17:4.2f} | {24:4.2f} $\pm$ {25:4.2f} | {32:4.2f} $\pm$ {33:4.2f} | \n \
| {45:s} | {41:4.2f} | {2:4.2f} $\pm$ {3:4.2f} | {10:4.2f} $\pm$ {11:4.2f} | {18:4.2f} $\pm$ {19:4.2f} | {26:4.2f} $\pm$ {27:4.2f} | {34:4.2f} $\pm$ {35:4.2f} | \n \
| $\mu$ | {42:4.2f} | {4:4.2f} $\pm$ {5:4.2f} | {12:4.2f} $\pm$ {13:4.2f} | {20:4.2f} $\pm$ {21:4.2f} | {28:4.2f} $\pm$ {29:4.2f} | {36:4.2f} $\pm$ {37:4.2f} | \n \
| $\gamma$ | {43:4.2f} | {6:4.2f} $\pm$ {7:4.2f} | {14:4.2f} $\pm$ {15:4.2f} | {22:4.2f} $\pm$ {23:4.2f} | {30:4.2f} $\pm$ {31:4.2f} | {38:4.2f} $\pm$ {39:4.2f} |".format(\
nums[0],nums[1],nums[2],nums[3],nums[4],nums[5],nums[6],nums[7],nums[8],nums[9], \
nums[10],nums[11],nums[12],nums[13],nums[14],nums[15],nums[16],nums[17],nums[18],nums[19], \
nums[20],nums[21],nums[22],nums[23],nums[24],nums[25],nums[26],nums[27],nums[28],nums[29], \
nums[30],nums[31],nums[32],nums[33],nums[34],nums[35],nums[36],nums[37],nums[38],nums[39], \
alpha0[ds_id],beta0[ds_id],mu0[ds_id],sig0[ds_id],r'$\alpha$',r'$\beta$')
display(Markdown(tmp))
def print_results_sep(dataset,inference,exps_):
nums = []
for sumstat in sumstats[0:3]:
for param in params:
exp_ = [e for e in exps_ if str(e.sumstats)==str(sumstat)+'_'+param][0]
nums.append(np.mean(exp_.res.outputs[param]))
nums.append(np.std(exp_.res.outputs[param]))
if dataset == 'Easy':
ds_id = 0
elif dataset == 'Medium':
ds_id = 1
elif dataset == 'Hard':
ds_id = 2
tmp = '### {0:s} Data Set - {1:s}'.format(dataset,inference)
display(Markdown(tmp))
tmp = "| Variable | True Value | S1 | S2 | S3 | \n \
|:-----: | :----------: |:-------------:|:-----:|:-----:|:-----:|:-----:| \n \
| {28:s} | {24:4.2f} | {0:4.2f} $\pm$ {1:4.2f} | {8:4.2f} $\pm$ {9:4.2f} | {16:4.2f} $\pm$ {17:4.2f} | \n \
| {29:s} | {25:4.2f} | {2:4.2f} $\pm$ {3:4.2f} | {10:4.2f} $\pm$ {11:4.2f} | {18:4.2f} $\pm$ {19:4.2f} | \n \
| $\mu$ | {26:4.2f} | {4:4.2f} $\pm$ {5:4.2f} | {12:4.2f} $\pm$ {13:4.2f} | {20:4.2f} $\pm$ {21:4.2f} | \n \
| $\gamma$ | {27:4.2f} | {6:4.2f} $\pm$ {7:4.2f} | {14:4.2f} $\pm$ {15:4.2f} | {22:4.2f} $\pm$ {23:4.2f} |".format(\
nums[0],nums[1],nums[2],nums[3],nums[4],nums[5],nums[6],nums[7],nums[8],nums[9], \
nums[10],nums[11],nums[12],nums[13],nums[14],nums[15],nums[16],nums[17],nums[18],nums[19], \
nums[20],nums[21],nums[22],nums[23], \
alpha0[ds_id],beta0[ds_id],mu0[ds_id],sig0[ds_id],r'$\alpha$',r'$\beta$')
display(Markdown(tmp))
Explanation: 4. Results
End of explanation
for inference in inferences:
for dataset in datasets:
results = [e for e in exps_alt if e.inf_method==inference and e.dataset==dataset]
print_results_alt(dataset,inference,results)
Explanation: 4.1. When Parameters Inferred Altogether
End of explanation
exps_alt[0].res.plot_marginals();
exps_alt[1].res.plot_marginals();
exps_alt[2].res.plot_marginals();
exps_alt[3].res.plot_marginals();
exps_alt[4].res.plot_marginals();
Explanation: The tables may not show the performance clearly. Thus, we plot the marginals of 5 summary vectors when rejection sampling is executed on the easy data set. As can be seen, only $\mu$ is identified well. S1 estimates a somewhat good interval for $\sigma$. All the other plots are almost random draws from the prior.
End of explanation
for inference in inferences[0:1]:
for dataset in datasets:
results = [e for e in exps_sep if e.inf_method==inference and e.dataset==dataset]
print_results_sep(dataset,inference,results)
Explanation: 4.2. When Parameters Inferred Separately
End of explanation
alphas = exps_sep[0].res.samples['alpha']
betas = exps_sep[2].res.samples['beta']
mus = exps_sep[4].res.samples['mu']
sigmas = exps_sep[6].res.samples['sigma']
plt.figure(figsize=(20,5))
plt.subplot(141)
plt.hist(alphas)
plt.title('alpha (1.7)')
plt.subplot(142)
plt.hist(betas)
plt.title('beta (0.9)')
plt.subplot(143)
plt.hist(mus)
plt.title('mu (10)')
plt.subplot(144)
plt.hist(sigmas)
plt.title('sigma (10)');
Explanation: Now we take a look at the marginals for S1 only (Note that due to the design of the inference, the output of each experiment is the marginals of a single variable). Below are the marginals of the first 4 experiment, which correspond to $\alpha, \beta, \mu$ and $\sigma$ variables. True values of the parameters are given in paranthesis.
End of explanation |
6,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
bsym – a basic symmetry module
bsym is a basic Python symmetry module. It consists of some core classes that describe configuration vector spaces, their symmetry operations, and specific configurations of objects withing these spaces. The module also contains an interface for working with pymatgen Structure objects, to allow simple generation of disordered symmetry-inequivalent structures from a symmetric parent crystal structure.
API documentation is here.
Configuration Spaces, Symmetry Operations, and Groups
The central object described by bsym is the configuration space. This defines a vector space that can be occupied by other objects. For example; the three points $a, b, c$ defined by an equilateral triangle,
<img src='figures/triangular_configuration_space.pdf'>
which can be described by a length 3 vector
Step1: Each SymmetryOperation has an optional label attribute. This can be set at records the matrix representation of the symmetry operation and an optional label. We can provide the label when creating a SymmetryOperation
Step2: or set it afterwards
Step3: Or for $C_3$
Step4: Vector representations of symmetry operations
The matrix representation of a symmetry operation is a permutation matrix. Each row maps one position in the corresponding configuration space to one other position. An alternative, condensed, representation for each symmetry operation matrix uses vector notation, where each element gives the row containing 1 in the equivalent matrix column. e.g. for $C_3$ the vector mapping is given by $\left[2,3,1\right]$, corresponding to the mapping $1\to2$, $2\to3$, $3\to1$.
Step5: The vector representation of a SymmetryOperation can be accessed using the as_vector() method.
Step6: Inverting symmetry operations
For every symmetry operation, $A$, there is an inverse operation, $A^{-1}$, such that
\begin{equation}
A \cdot A^{-1}=E.
\end{equation}
For example, the inverse of $C_3$ (clockwise rotation by 120°) is $C_3^\prime$ (anticlockwise rotation by 120°)
Step7: The product of $C_3$ and $C_3^\prime$ is the identity, $E$.
Step8: <img src="figures/triangular_c3_inversion.pdf" />
c_3_inv can also be generated using the .invert() method
Step9: The resulting SymmetryOperation does not have a label defined. This can be set directly, or by chaining the .set_label() method, e.g.
Step10: The SymmetryGroup class
A SymmetryGroup is a collections of SymmetryOperation objects. A SymmetryGroup is not required to contain all the symmetry operations of a particular configuration space, and therefore is not necessarily a complete mathematical <a href="https
Step11: <img src="figures/triangular_c3v_symmetry_operations.pdf" />
Step12: The ConfigurationSpace class
A ConfigurationSpace consists of a set of objects that represent the configuration space vectors, and the SymmetryGroup containing the relevant symmetry operations.
Step13: The Configuration class
A Configuration instance describes a particular configuration, i.e. how a set of objects are arranged within a configuration space. Internally, a Configuration is represented as a vector (as a numpy array).
Each element in a configuration is represented by a single digit non-negative integer.
Step14: The effect of a particular symmetry operation acting on a configuration can now be calculated using the SymmetryOperation.operate_on() method, or by direct multiplication, e.g.
Step15: <img src="figures/triangular_rotation_operation.pdf" />
Finding symmetry-inequivalent permutations.
A common question that comes up when considering the symmetry properties of arrangements of objects is
Step16: This ConfigurationSpace has been created without a symmetry_group argument. The default behaviour in this case is to create a SymmetryGroup containing only the identity, $E$.
Step17: We can now calculate all symmetry inequivalent arrangements where two sites are occupied and two are unoccupied, using the unique_configurations() method. This takes as a argument a dict with the numbers of labels to be arranged in the configuration space. Here, we use the labels 1 and 0 to represent occupied and unoccupied sites, respectively, and the distribution of sites is given by { 1
Step18: Because we have not yet taken into account the symmetry of the configuration space, we get
\begin{equation}
\frac{4\times3}{2}
\end{equation}
unique configurations (where the factor of 2 comes from the occupied sites being indistinguishable).
The configurations generated by unique_configurations have a count attribute that records the number of symmetry equivalent configurations of each case
Step19: We can also calculate the result when all symmetry operations of this configuration space are included.
Step20: Taking symmetry in to account, we now only have two unique configurations
Step21: <img src="figures/square_unique_configurations_2.pdf">
Working with crystal structures using pymatgen
One example where the it can be useful to identify symmetry-inequivalent arrangements of objects in a vector space, is when considering the possible arrangements of disordered atoms on a crystal lattice.
To solve this problem for an arbitrary crystal structure, bsym contains an interface to pymatgen that will identify symmetry-inequivalent atom substitutions in a given pymatgen Structure.
As an example, consider a $4\times4$ square-lattice supercell populated by lithium atoms.
Step22: We can use the bsym.interface.pymatgen.unique_structure_substitutions() function to identify symmetry-inequivalent structures generated by substituting at different sites.
Step23: As a trivial example, when substituting one Li atom for Na, we get a single unique structure
Step24: <img src="figures/pymatgen_example_one_site.pdf">
Step25: This Li$\to$Na substitution breaks the symmetry of the $4\times4$ supercell.
If we now replace a second lithium with a magnesium atom, we generate five symmetry inequivalent structures
Step26: number_of_equivalent_configurations only lists the number of equivalent configurations found when performing the second substitution, when the list of structures unique_structures_with_Mg was created. The full configuration degeneracy relative to the initial empty 4×4 lattice can be queried using full_configuration_degeneracy.
Step27: <img src="figures/pymatgen_example_two_sites.pdf">
Step28: This double substitution can also be done in a single step
Step29: Because both substitutions were performed in a single step, number_of_equivalent_configurations and full_configuration_degeneracy now contain the same data
Step30: Constructing SpaceGroup and ConfigurationSpace objects using pymatgen
The bsym.interface.pymatgen module contains functions for generating SpaceGroup and ConfigurationSpace objects directly from pymatgen Structure objects.
Step31: Documentation
Step32: Progress bars
bsym.ConfigurationSpace.unique_configurations() and bsym.interface.pymatgen.unique_structure_substitutions() both accept optional show_progress arguments, which can be used to display progress bars (using tqdm(https | Python Code:
from bsym import SymmetryOperation
SymmetryOperation([[ 1, 0, 0 ],
[ 0, 1, 0 ],
[ 0, 0, 1 ]])
Explanation: bsym – a basic symmetry module
bsym is a basic Python symmetry module. It consists of some core classes that describe configuration vector spaces, their symmetry operations, and specific configurations of objects within these spaces. The module also contains an interface for working with pymatgen Structure objects, to allow simple generation of disordered symmetry-inequivalent structures from a symmetric parent crystal structure.
API documentation is here.
Configuration Spaces, Symmetry Operations, and Groups
The central object described by bsym is the configuration space. This defines a vector space that can be occupied by other objects. For example, the three points $a, b, c$ defined by the corners of an equilateral triangle,
<img src='figures/triangular_configuration_space.pdf'>
which can be described by a length 3 vector:
\begin{pmatrix}a\\b\\c\end{pmatrix}
If these points can be coloured black or white, then we can define a configuration for each different colouring (0 for white, 1 for black), e.g.
<img src='figures/triangular_configuration_example_1.pdf'>
with the corresponding vector
\begin{pmatrix}1\\1\\0\end{pmatrix}
A specific configuration therefore defines how objects are distributed within a particular configuration space.
The symmetry relationships between the different vectors in a configuration space are described by symmetry operations. A symmetry operation describes a transformation of a configuration space that leaves it indistinguishable. Each symmetry operation can be described as a matrix that maps the vectors in a configuration space onto each other, e.g. in the case of the equilateral triangle the simplest symmetry operation is the identity, $E$, which leaves every corner unchanged, and can be represented by the matrix
\begin{equation}
E=\begin{pmatrix}1 & 0 & 0\\0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}
\end{equation}
For this triangular example, there are other symmetry operations, including reflections, $\sigma$ and rotations, $C_n$:
<img src='figures/triangular_example_symmetry_operations.pdf'>
In this example reflection operation, $b$ is mapped to $c$; $b\to c$, and $c$ is mapped to $b$; $c\to b$.
The matrix representation of this symmetry operation is
\begin{equation}
\sigma_\mathrm{a}=\begin{pmatrix}1 & 0 & 0\\0 & 0 & 1 \\ 0 & 1 & 0\end{pmatrix}
\end{equation}
For the example rotation operation, $a\to b$, $b\to c$, and $c\to a$, with matrix representation
\begin{equation}
C_3=\begin{pmatrix}0 & 0 & 1\\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}
\end{equation}
Using this matrix and vector notation, the effect of a symmetry operation on a specific configuration can be calculated as the matrix product of the symmetry operation matrix and the configuration vector:
<img src='figures/triangular_rotation_operation.pdf'>
In matrix notation this is represented as
\begin{equation}
\begin{pmatrix}0\\1\\1\end{pmatrix} = \begin{pmatrix}0 & 0 & 1\\ 1 & 0 & 0 \\ 0 & 1 & 0\end{pmatrix}\begin{pmatrix}1\\1\\0\end{pmatrix}
\end{equation}
or more compactly
\begin{equation}
c_\mathrm{f} = C_3 c_\mathrm{i}.
\end{equation}
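As a quick numerical illustration of this matrix and vector product, here is a minimal sketch using plain numpy (independent of bsym):
import numpy as np
C3 = np.array([[0, 0, 1],
               [1, 0, 0],
               [0, 1, 0]])   # matrix representation of the 120° rotation
c_i = np.array([1, 1, 0])    # initial configuration
c_f = C3 @ c_i               # applying the rotation gives array([0, 1, 1]), as in the figure above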
The set of all symmetry operations for a particular configuration space is a group.
For an equilateral triangle this group is the $C_{3v}$ point group, which contains six symmetry operations: the identity, three reflections (each with a mirror plane bisecting the triangle and passing through $a$, $b$, or $c$ respectively) and two rotations (120° clockwise and counterclockwise).
\begin{equation}
C_{3v} = \left\{ E, \sigma_\mathrm{a}, \sigma_\mathrm{b}, \sigma_\mathrm{c}, C_3, C_3^\prime \right\}
\end{equation}
Modelling this using bsym
The SymmetryOperation class
In bsym, a symmetry operation is represented by an instance of the SymmetryOperation class. A SymmetryOperation instance can be initialised from the matrix representation of the corresponding symmetry operation.
For example, in the trigonal configuration space above, a SymmetryOperation describing the identity, $E$, can be created with
End of explanation
SymmetryOperation([[ 1, 0, 0 ],
[ 0, 1, 0 ],
[ 0, 0, 1 ]], label='E' )
Explanation: Each SymmetryOperation stores the matrix representation of the symmetry operation and an optional label attribute. We can provide the label when creating a SymmetryOperation:
End of explanation
e = SymmetryOperation([[ 1, 0, 0 ],
[ 0, 1, 0 ],
[ 0, 0, 1 ]])
e.label = 'E'
e
Explanation: or set it afterwards:
End of explanation
c_3 = SymmetryOperation( [ [ 0, 0, 1 ],
[ 1, 0, 0 ],
[ 0, 1, 0 ] ], label='C3' )
c_3
Explanation: Or for $C_3$:
End of explanation
c_3_from_vector = SymmetryOperation.from_vector( [ 2, 3, 1 ], label='C3' )
c_3_from_vector
Explanation: Vector representations of symmetry operations
The matrix representation of a symmetry operation is a permutation matrix. Each row maps one position in the corresponding configuration space to one other position. An alternative, condensed, representation for each symmetry operation matrix uses vector notation, where each element gives the row containing 1 in the equivalent matrix column. e.g. for $C_3$ the vector mapping is given by $\left[2,3,1\right]$, corresponding to the mapping $1\to2$, $2\to3$, $3\to1$.
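To make this correspondence concrete, a small helper (illustrative only, not part of the bsym API) can rebuild the permutation matrix from the vector form:
import numpy as np
def matrix_from_vector(vec):
    # vec[i] is the (1-indexed) row containing 1 in column i
    m = np.zeros((len(vec), len(vec)), dtype=int)
    for col, row in enumerate(vec):
        m[row - 1, col] = 1
    return m
matrix_from_vector([2, 3, 1])  # reproduces the C3 matrix shown earlier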
End of explanation
c_3.as_vector()
Explanation: The vector representation of a SymmetryOperation can be accessed using the as_vector() method.
End of explanation
c_3 = SymmetryOperation.from_vector( [ 2, 3, 1 ], label='C3' )
c_3_inv = SymmetryOperation.from_vector( [ 3, 1, 2 ], label='C3_inv' )
print( c_3, '\n' )
print( c_3_inv, '\n' )
Explanation: Inverting symmetry operations
For every symmetry operation, $A$, there is an inverse operation, $A^{-1}$, such that
\begin{equation}
A \cdot A^{-1}=E.
\end{equation}
For example, the inverse of $C_3$ (clockwise rotation by 120°) is $C_3^\prime$ (anticlockwise rotation by 120°):
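Because these matrices are permutation matrices, the inverse is simply the transpose. A quick numpy check (a minimal sketch, independent of bsym):
import numpy as np
C3 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
np.array_equal(C3 @ C3.T, np.eye(3, dtype=int))  # True: C3 multiplied by its transpose gives E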
End of explanation
c_3 * c_3_inv
Explanation: The product of $C_3$ and $C_3^\prime$ is the identity, $E$.
End of explanation
c_3.invert()
Explanation: <img src="figures/triangular_c3_inversion.pdf" />
c_3_inv can also be generated using the .invert() method
End of explanation
c_3.invert( label= 'C3_inv')
c_3.invert().set_label( 'C3_inv' )
Explanation: The resulting SymmetryOperation does not have a label defined. This can be set directly, or by chaining the .set_label() method, e.g.
End of explanation
from bsym import PointGroup
# construct SymmetryOperations for C_3v group
e = SymmetryOperation.from_vector( [ 1, 2, 3 ], label='e' )
c_3 = SymmetryOperation.from_vector( [ 2, 3, 1 ], label='C_3' )
c_3_inv = SymmetryOperation.from_vector( [ 3, 1, 2 ], label='C_3_inv' )
sigma_a = SymmetryOperation.from_vector( [ 1, 3, 2 ], label='S_a' )
sigma_b = SymmetryOperation.from_vector( [ 3, 2, 1 ], label='S_b' )
sigma_c = SymmetryOperation.from_vector( [ 2, 1, 3 ], label='S_c' )
Explanation: The SymmetryGroup class
A SymmetryGroup is a collection of SymmetryOperation objects. A SymmetryGroup is not required to contain all the symmetry operations of a particular configuration space, and therefore is not necessarily a complete mathematical <a href="https://en.wikipedia.org/wiki/Group_(mathematics)#Definition">group</a>.
For convenience bsym has PointGroup and SpaceGroup classes, which are equivalent to the SymmetryGroup parent class.
End of explanation
c3v = PointGroup( [ e, c_3, c_3_inv, sigma_a, sigma_b, sigma_c ] )
c3v
Explanation: <img src="figures/triangular_c3v_symmetry_operations.pdf" />
End of explanation
from bsym import ConfigurationSpace
c = ConfigurationSpace( objects=['a', 'b', 'c' ], symmetry_group=c3v )
c
Explanation: The ConfigurationSpace class
A ConfigurationSpace consists of a set of objects that represent the configuration space vectors, and the SymmetryGroup containing the relevant symmetry operations.
End of explanation
from bsym import Configuration
conf_1 = Configuration( [ 1, 1, 0 ] )
conf_1
Explanation: The Configuration class
A Configuration instance describes a particular configuration, i.e. how a set of objects are arranged within a configuration space. Internally, a Configuration is represented as a vector (as a numpy array).
Each element in a configuration is represented by a single digit non-negative integer.
End of explanation
c1 = Configuration( [ 1, 1, 0 ] )
c_3 = SymmetryOperation.from_vector( [ 2, 3, 1 ] )
c_3.operate_on( c1 )
c_3 * conf_1
Explanation: The effect of a particular symmetry operation acting on a configuration can now be calculated using the SymmetryOperation.operate_on() method, or by direct multiplication, e.g.
End of explanation
c = ConfigurationSpace( [ 'a', 'b', 'c', 'd' ] ) # four vector configuration space
Explanation: <img src="figures/triangular_rotation_operation.pdf" />
Finding symmetry-inequivalent permutations.
A common question that comes up when considering the symmetry properties of arrangements of objects is: how many ways can these be arranged that are not equivalent by symmetry?
As a simple example of solving this problem using bsym, consider four equivalent sites arranged in a square.
<img src="figures/square_configuration_space.pdf">
End of explanation
c
Explanation: This ConfigurationSpace has been created without a symmetry_group argument. The default behaviour in this case is to create a SymmetryGroup containing only the identity, $E$.
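To inspect the group that was generated by default (a one-line sketch; it assumes the group is stored on a symmetry_group attribute, matching the keyword used in the constructor):
c.symmetry_group  # a SymmetryGroup containing a single SymmetryOperation, the identity E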
End of explanation
c.unique_configurations( {1:2, 0:2} )
Explanation: We can now calculate all symmetry inequivalent arrangements where two sites are occupied and two are unoccupied, using the unique_configurations() method. This takes as an argument a dict with the numbers of labels to be arranged in the configuration space. Here, we use the labels 1 and 0 to represent occupied and unoccupied sites, respectively, and the distribution of sites is given by { 1:2, 0:2 }.
End of explanation
[ uc.count for uc in c.unique_configurations( {1:2, 0:2} ) ]
Explanation: Because we have not yet taken into account the symmetry of the configuration space, we get
\begin{equation}
\frac{4\times3}{2}
\end{equation}
unique configurations (where the factor of 2 comes from the occupied sites being indistinguishable).
The configurations generated by unique_configurations have a count attribute that records the number of symmetry equivalent configurations of each case:
In this example, each configuration appears once:
End of explanation
# construct point group
e = SymmetryOperation.from_vector( [ 1, 2, 3, 4 ], label='E' )
c4 = SymmetryOperation.from_vector( [ 2, 3, 4, 1 ], label='C4' )
c4_inv = SymmetryOperation.from_vector( [ 4, 1, 2, 3 ], label='C4i' )
c2 = SymmetryOperation.from_vector( [ 3, 4, 1, 2 ], label='C2' )
sigma_x = SymmetryOperation.from_vector( [ 4, 3, 2, 1 ], label='s_x' )
sigma_y = SymmetryOperation.from_vector( [ 2, 1, 4, 3 ], label='s_y' )
sigma_ac = SymmetryOperation.from_vector( [ 1, 4, 3, 2 ], label='s_ac' )
sigma_bd = SymmetryOperation.from_vector( [ 3, 2, 1, 4 ], label='s_bd' )
c4v = PointGroup( [ e, c4, c4_inv, c2, sigma_x, sigma_y, sigma_ac, sigma_bd ] )
# create ConfigurationSpace with the c4v PointGroup.
c = ConfigurationSpace( [ 'a', 'b', 'c', 'd' ], symmetry_group=c4v )
c
c.unique_configurations( {1:2, 0:2} )
[ uc.count for uc in c.unique_configurations( {1:2, 0:2 } ) ]
Explanation: We can also calculate the result when all symmetry operations of this configuration space are included.
End of explanation
c.unique_configurations( {2:1, 1:1, 0:2} )
[ uc.count for uc in c.unique_configurations( {2:1, 1:1, 0:2 } ) ]
Explanation: Taking symmetry into account, we now only have two unique configurations: either two adjacent sites are occupied (four possible ways), or two diagonal sites are occupied (two possible ways):
<img src="figures/square_unique_configurations.pdf" >
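As an independent brute-force check of this count, here is a sketch that uses only the standard library and the eight permutation vectors defined in the cell above (not the bsym machinery):
from itertools import combinations
perms = [(1, 2, 3, 4), (2, 3, 4, 1), (4, 1, 2, 3), (3, 4, 1, 2),
         (4, 3, 2, 1), (2, 1, 4, 3), (1, 4, 3, 2), (3, 2, 1, 4)]  # C4v as mapping vectors
def apply(p, config):
    new = [None] * len(config)
    for j, target in enumerate(p):
        new[target - 1] = config[j]   # element j is sent to position p[j]
    return tuple(new)
configs = {tuple(1 if i in occ else 0 for i in range(4))
           for occ in combinations(range(4), 2)}            # all 6 raw arrangements
orbits = {min(apply(p, c) for p in perms) for c in configs}  # one canonical form per orbit
len(orbits)  # 2, matching unique_configurations()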
The unique_configurations() method can also handle non-binary site occupations:
End of explanation
from pymatgen.core.lattice import Lattice
from pymatgen.core.structure import Structure
import numpy as np
# construct a pymatgen Structure instance using the site fractional coordinates
coords = np.array( [ [ 0.0, 0.0, 0.0 ] ] )
atom_list = [ 'Li' ]
lattice = Lattice.from_parameters( a=1.0, b=1.0, c=1.0, alpha=90, beta=90, gamma=90 )
parent_structure = Structure( lattice, atom_list, coords ) * [ 4, 4, 1 ]
parent_structure.cart_coords.round(2)
Explanation: <img src="figures/square_unique_configurations_2.pdf">
Working with crystal structures using pymatgen
One example where it can be useful to identify symmetry-inequivalent arrangements of objects in a vector space is when considering the possible arrangements of disordered atoms on a crystal lattice.
To solve this problem for an arbitrary crystal structure, bsym contains an interface to pymatgen that will identify symmetry-inequivalent atom substitutions in a given pymatgen Structure.
As an example, consider a $4\times4$ square-lattice supercell populated by lithium atoms.
End of explanation
from bsym.interface.pymatgen import unique_structure_substitutions
print( unique_structure_substitutions.__doc__ )
Explanation: We can use the bsym.interface.pymatgen.unique_structure_substitutions() function to identify symmetry-inequivalent structures generated by substituting at different sites.
End of explanation
unique_structures = unique_structure_substitutions( parent_structure, 'Li', { 'Na':1, 'Li':15 } )
len( unique_structures )
Explanation: As a trivial example, when substituting one Li atom for Na, we get a single unique structure
End of explanation
na_substituted = unique_structures[0]
Explanation: <img src="figures/pymatgen_example_one_site.pdf">
End of explanation
unique_structures_with_Mg = unique_structure_substitutions( na_substituted, 'Li', { 'Mg':1, 'Li':14 } )
len( unique_structures_with_Mg )
[ s.number_of_equivalent_configurations for s in unique_structures_with_Mg ]
Explanation: This Li$\to$Na substitution breaks the symmetry of the $4\times4$ supercell.
If we now replace a second lithium with a magnesium atom, we generate five symmetry inequivalent structures:
End of explanation
[ s.full_configuration_degeneracy for s in unique_structures_with_Mg ]
Explanation: number_of_equivalent_configurations only lists the number of equivalent configurations found when performing the second substitution, when the list of structures unique_structures_with_Mg was created. The full configuration degeneracy relative to the initial empty 4×4 lattice can be queried using full_configuration_degeneracy.
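A useful consistency check is to sum these counts over the unique structures (a sketch; the expected totals assume every raw placement is counted exactly once, which is how the attributes are described above):
sum(s.number_of_equivalent_configurations for s in unique_structures_with_Mg)  # 15 ways to place Mg once Na is fixed
sum(s.full_configuration_degeneracy for s in unique_structures_with_Mg)        # 16 * 15 = 240 placements relative to the empty lattice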
End of explanation
# Check the squared distances between the Na and Mg sites in these unique structures are [1, 2, 4, 5, 8]
np.array( sorted( [ s.get_distance( s.indices_from_symbol('Na')[0],
s.indices_from_symbol('Mg')[0] )**2 for s in unique_structures_with_Mg ] ) )
Explanation: <img src="figures/pymatgen_example_two_sites.pdf">
End of explanation
unique_structures = unique_structure_substitutions( parent_structure, 'Li', { 'Mg':1, 'Na':1, 'Li':14 } )
len(unique_structures)
np.array( sorted( [ s.get_distance( s.indices_from_symbol('Na')[0],
s.indices_from_symbol('Mg')[0] ) for s in unique_structures ] ) )**2
[ s.number_of_equivalent_configurations for s in unique_structures ]
Explanation: This double substitution can also be done in a single step:
End of explanation
[ s.full_configuration_degeneracy for s in unique_structures ]
Explanation: Because both substitutions were performed in a single step, number_of_equivalent_configurations and full_configuration_degeneracy now contain the same data:
End of explanation
from bsym.interface.pymatgen import ( space_group_symbol_from_structure,
space_group_from_structure,
configuration_space_from_structure )
Explanation: Constructing SpaceGroup and ConfigurationSpace objects using pymatgen
The bsym.interface.pymatgen module contains functions for generating SpaceGroup and ConfigurationSpace objects directly from pymatgen Structure objects.
End of explanation
coords = np.array( [ [ 0.0, 0.0, 0.0 ],
[ 0.5, 0.5, 0.0 ],
[ 0.0, 0.5, 0.5 ],
[ 0.5, 0.0, 0.5 ] ] )
atom_list = [ 'Li' ] * len( coords )
lattice = Lattice.from_parameters( a=3.0, b=3.0, c=3.0, alpha=90, beta=90, gamma=90 )
structure = Structure( lattice, atom_list, coords )
space_group_symbol_from_structure( structure )
space_group_from_structure( structure )
configuration_space_from_structure( structure )
Explanation: Documentation:
space_group_symbol_from_structure
space_group_from_structure
configuration_space_from_structure
End of explanation
a = 3.798 # lattice parameter
coords = np.array( [ [ 0.0, 0.0, 0.0 ],
[ 0.5, 0.0, 0.0 ],
[ 0.0, 0.5, 0.0 ],
[ 0.0, 0.0, 0.5 ] ] )
atom_list = [ 'Ti', 'X', 'X', 'X' ]
lattice = Lattice.from_parameters( a=a, b=a, c=a, alpha=90, beta=90, gamma=90 )
unit_cell = Structure( lattice, atom_list, coords )
parent_structure = unit_cell * [ 2, 2, 2 ]
unique_structures = unique_structure_substitutions( parent_structure, 'X', { 'O':8, 'F':16 },
show_progress='notebook' )
%load_ext version_information
%version_information bsym, numpy, jupyter, pymatgen, tqdm
Explanation: Progress bars
bsym.ConfigurationSpace.unique_configurations() and bsym.interface.pymatgen.unique_structure_substitutions() both accept optional show_progress arguments, which can be used to display progress bars (using tqdm, https://tqdm.github.io).
Setting show_progress=True will give a simple progress bar. If you are running bsym in a Jupyter notebook, setting show_progress="notebook" will give you a progress bar as a notebook widget.
(note, the widget status is not saved with this notebook, and may not display correctly on GitHub or using nbviewer)
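For example, the plain-terminal variant described above would look like this (a minimal sketch reusing the four-site ConfigurationSpace c from earlier):
c.unique_configurations( {1:2, 0:2}, show_progress=True )  # prints a simple tqdm progress bar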
In the example below, we find all unique configurations for the pseudo-ReO<sub>3</sub> structured TiOF<sub>2</sub> in a 2×2×2 supercell.
End of explanation |
6,237 | Given the following text description, write Python code to implement the functionality described.
Description:
I think we all remember that feeling when the result of some long-awaited
event is finally known. The feelings and thoughts you have at that moment are
definitely worth noting down and comparing.
Your task is to determine if a person correctly guessed the results of a number of matches.
You are given two arrays of scores and guesses of equal length, where each index shows a match.
Return an array of the same length denoting how far off each guess was. If they have guessed correctly,
the value is 0, and if not, the value is the absolute difference between the guess and the score.
example:
compare([1,2,3,4,5,1],[1,2,3,4,2,-2]) -> [0,0,0,0,3,3]
compare([0,5,0,0,0,4],[4,1,1,0,0,-2]) -> [4,4,1,0,0,6]
| Python Code:
def compare(game,guess):
return [abs(x-y) for x,y in zip(game,guess)] |
6,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Representation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
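As one more hedged illustration (the same pattern applies to the remaining ENUM properties in this document, so it is not repeated after every cell), a possible completion is shown below; "Mie theory" is simply one of the valid choices listed above.
# Illustrative only -- the appropriate choice depends on the model being documented.
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
DOC.set_value("Mie theory")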
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
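Numeric (INTEGER) properties are set without quotes; the value below is a hypothetical placeholder, not a recommendation.
# Illustrative sketch with a hypothetical closure order -- use the order of the actual scheme.
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
DOC.set_value(1)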
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
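BOOLEAN properties take the Python literals True or False listed as valid choices above; which one applies depends on the model, so the value below is only an example.
# Illustrative sketch -- True or False depends on the scheme being documented.
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
DOC.set_value(False)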
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
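As a hedged example, a model with a time-invariant solar constant could complete this cell as below; note that the companion property 50.2 (Fixed Value) in the next cell would then also need to be filled in.
# Illustrative only -- "transient" is the other valid choice listed above.
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
DOC.set_value("fixed")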
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
6,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unsupervised learning
AutoEncoders
An autoencoder is an artificial neural network used for learning efficient codings.
The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.
<img src="../imgs/autoencoder.png" width="25%">
Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data.
Reference
Based on https://blog.keras.io/building-autoencoders-in-keras.html
Step1: Testing the Autoencoder
Step2: Sample generation with Autoencoder
Step3: Convolutional AutoEncoder
Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders.
In practical settings, autoencoders applied to images are always convolutional autoencoders -- they simply perform much better.
The encoder will consist in a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist in a stack of Conv2D and UpSampling2D layers.
Step4: We could also have a look at the 128-dimensional encoded middle representation
Step5: Pretraining encoders
One of the powerful tools of auto-encoders is using the encoder to generate meaningful representation from the feature vectors.
Step6: Application to Image Denoising
Let's put our convolutional autoencoder to work on an image denoising problem. It's simple
Step7: Here's what the noisy digits look like
Step8: Question
If you squint you can still recognize them, but barely.
Can our autoencoder learn to recover the original digits? Let's find out.
Compared to the previous convolutional autoencoder, in order to improve the quality of the reconstructed digits, we'll use a slightly different model with more filters per layer
Step9: Let's train the AutoEncoder for 100 epochs
Step10: Now Let's Take a look....
Step11: Variational AutoEncoder
(Reference https
Step12: We can use these parameters to sample new similar points from the latent space
Step13: Decoder Network
Finally, we can map these sampled latent points back to reconstructed inputs
Step14: What we've done so far allows us to instantiate 3 models
Step15: Let's Visualise the VAE Model
Step16: VAE on MNIST
We train the model using the end-to-end model, with a custom loss function
Step17: Training on MNIST Digits
Step18: Because our latent space is two-dimensional, there are a few cool visualizations that can be done at this point.
One is to look at the neighborhoods of different classes on the latent 2D plane
Step19: Each of these colored clusters is a type of digit. Close clusters are digits that are structurally similar (i.e. digits that share information in the latent space).
Because the VAE is a generative model, we can also use it to generate new digits! Here we will scan the latent plane, sampling latent points at regular intervals, and generating the corresponding digit for each of these points. This gives us a visualization of the latent manifold that "generates" the MNIST digits. | Python Code:
from keras.layers import Input, Dense
from keras.models import Model
from keras.datasets import mnist
import numpy as np
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
#note: x_train, x_train :)
autoencoder.fit(x_train, x_train,
epochs=50,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test))
Explanation: Unsupervised learning
AutoEncoders
An autoencoder is an artificial neural network used for learning efficient codings.
The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.
<img src="../imgs/autoencoder.png" width="25%">
Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data.
Reference
Based on https://blog.keras.io/building-autoencoders-in-keras.html
Introducing Keras Functional API
The Keras functional API is the way to go for defining complex models, such as multi-output models, directed acyclic graphs, or models with shared layers.
The whole Functional API relies on the fact that each keras.Layer object is a callable object!
See 8.2 Multi-Modal Networks for further details.
End of explanation
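As a quick illustration of that last point (a small sketch added here, not part of the original notebook): a layer instance is a plain callable that maps tensors to tensors, which is exactly what the `Dense(...)(input_img)` chaining above exploits.

```python
from keras.layers import Input, Dense

inputs = Input(shape=(784,))
dense_layer = Dense(64, activation='relu')  # create the layer object...
hidden = dense_layer(inputs)                # ...then call it on a tensor
outputs = Dense(10, activation='softmax')(hidden)  # or create-and-call in one line
```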
from matplotlib import pyplot as plt
%matplotlib inline
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Testing the Autoencoder
End of explanation
encoded_imgs = np.random.rand(10,32)
decoded_imgs = decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# generation
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Sample generation with Autoencoder
End of explanation
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
conv_autoencoder = Model(input_img, decoded)
conv_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from keras import backend as K
if K.image_data_format() == 'channels_last':
shape_ord = (28, 28, 1)
else:
shape_ord = (1, 28, 28)
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, ((x_train.shape[0],) + shape_ord))
x_test = np.reshape(x_test, ((x_test.shape[0],) + shape_ord))
x_train.shape
from keras.callbacks import TensorBoard
batch_size=128
steps_per_epoch = np.int(np.floor(x_train.shape[0] / batch_size))
conv_autoencoder.fit(x_train, x_train, epochs=50, batch_size=128,
shuffle=True, validation_data=(x_test, x_test),
callbacks=[TensorBoard(log_dir='./tf_autoencoder_logs')])
decoded_imgs = conv_autoencoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Convolutional AutoEncoder
Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders.
In practical settings, autoencoders applied to images are always convolutional autoencoders --they simply perform much better.
The encoder will consist of a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist of a stack of Conv2D and UpSampling2D layers.
End of explanation
conv_encoder = Model(input_img, encoded)
encoded_imgs = conv_encoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(encoded_imgs[i].reshape(4, 4 * 8).T)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: We could also have a look at the 128-dimensional encoded middle representation
End of explanation
# Use the encoder to pretrain a classifier
Explanation: Pretraining encoders
One of the powerful uses of autoencoders is using the trained encoder to generate meaningful representations from the feature vectors.
End of explanation
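One way the placeholder above could be fleshed out (a hedged sketch, not the notebook's official solution): reload the MNIST labels, use the already-trained `encoder` as a fixed feature extractor, and fit a small classifier on the 32-dimensional codes.

```python
from keras.datasets import mnist
from keras.layers import Input, Dense
from keras.models import Model

# Reload MNIST, keeping the labels this time (they were discarded earlier).
(x_tr, y_tr), (x_te, y_te) = mnist.load_data()
x_tr = (x_tr.astype('float32') / 255.).reshape((len(x_tr), 784))
x_te = (x_te.astype('float32') / 255.).reshape((len(x_te), 784))

# Encode the images with the trained encoder (32-dimensional codes).
feats_tr = encoder.predict(x_tr)
feats_te = encoder.predict(x_te)

# Train a small softmax classifier on top of the frozen representations.
clf_in = Input(shape=(32,))
clf_out = Dense(10, activation='softmax')(clf_in)
classifier = Model(clf_in, clf_out)
classifier.compile(optimizer='adam',
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
classifier.fit(feats_tr, y_tr, epochs=5, batch_size=256,
               validation_data=(feats_te, y_te))
```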
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
Explanation: Application to Image Denoising
Let's put our convolutional autoencoder to work on an image denoising problem. It's simple: we will train the autoencoder to map noisy digit images to clean digit images.
Here's how we will generate synthetic noisy digits: we just apply a gaussian noise matrix and clip the images between 0 and 1.
End of explanation
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i+1)
plt.imshow(x_test_noisy[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Here's what the noisy digits look like:
End of explanation
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras.callbacks import TensorBoard
input_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (7, 7, 32)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
Explanation: Question
If you squint you can still recognize them, but barely.
Can our autoencoder learn to recover the original digits? Let's find out.
Compared to the previous convolutional autoencoder, in order to improve the quality of the reconstructed images, we'll use a slightly different model with more filters per layer:
End of explanation
autoencoder.fit(x_train_noisy, x_train,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(x_test_noisy, x_test),
callbacks=[TensorBoard(log_dir='/tmp/autoencoder_denoise',
histogram_freq=0, write_graph=False)])
Explanation: Let's train the AutoEncoder for 100 epochs
End of explanation
decoded_imgs = autoencoder.predict(x_test_noisy)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Now Let's Take a look....
End of explanation
batch_size = 100
original_dim = 784
latent_dim = 2
intermediate_dim = 256
epochs = 50
epsilon_std = 1.0
x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_sigma = Dense(latent_dim)(h)
Explanation: Variational AutoEncoder
(Reference https://blog.keras.io/building-autoencoders-in-keras.html)
Variational autoencoders are a slightly more modern and interesting take on autoencoding.
What is a variational autoencoder?
It's a type of autoencoder with added constraints on the encoded representations being learned.
More precisely, it is an autoencoder that learns a latent variable model for its input data.
So instead of letting your neural network learn an arbitrary function, you are learning the parameters of a probability distribution modeling your data.
If you sample points from this distribution, you can generate new input data samples:
a VAE is a "generative model".
How does a variational autoencoder work?
First, an encoder network turns the input samples $x$ into two parameters in a latent space, which we will note $z_{\mu}$ and $z_{log_{\sigma}}$.
Then, we randomly sample similar points $z$ from the latent normal distribution that is assumed to generate the data, via $z = z_{\mu} + \exp(z_{log_{\sigma}}) * \epsilon$, where $\epsilon$ is a random normal tensor.
Finally, a decoder network maps these latent space points back to the original input data.
The parameters of the model are trained via two loss functions:
a reconstruction loss forcing the decoded samples to match the initial inputs (just like in our previous autoencoders);
and the KL divergence between the learned latent distribution and the prior distribution, acting as a regularization term.
You could actually get rid of this latter term entirely, although it does help in learning well-formed latent spaces and reducing overfitting to the training data.
Encoder Network
End of explanation
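For reference (added here, not in the original text), the regularization term implemented further down in `vae_loss` is the closed-form KL divergence of the approximate posterior against a standard normal prior:

$$ D_{KL} = -\tfrac{1}{2}\sum_{j}\left(1 + z_{\log\sigma,j} - z_{\mu,j}^{2} - e^{z_{\log\sigma,j}}\right) $$

which matches `-0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)` in the code, up to using a mean rather than a sum over the latent dimensions.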
from keras.layers.core import Lambda
from keras import backend as K
def sampling(args):
z_mean, z_log_sigma = args
epsilon = K.random_normal(shape=(batch_size, latent_dim),
mean=0., stddev=epsilon_std)
return z_mean + K.exp(z_log_sigma) * epsilon
# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_sigma])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_sigma])
Explanation: We can use these parameters to sample new similar points from the latent space:
End of explanation
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)
Explanation: Decoder Network
Finally, we can map these sampled latent points back to reconstructed inputs:
End of explanation
# end-to-end autoencoder
vae = Model(x, x_decoded_mean)
# encoder, from inputs to latent space
encoder = Model(x, z_mean)
# generator, from latent space to reconstructed inputs
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)
Explanation: What we've done so far allows us to instantiate 3 models:
an end-to-end autoencoder mapping inputs to reconstructions
an encoder mapping inputs to the latent space
a generator that can take points on the latent space and will output the corresponding reconstructed samples.
End of explanation
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(vae).create(prog='dot', format='svg'))
## Exercise: Let's Do the Same for `encoder` and `generator` Model(s)
Explanation: Let's Visualise the VAE Model
End of explanation
from keras.objectives import binary_crossentropy
def vae_loss(x, x_decoded_mean):
xent_loss = binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)
return xent_loss + kl_loss
vae.compile(optimizer='rmsprop', loss=vae_loss)
Explanation: VAE on MNIST
We train the model using the end-to-end model, with a custom loss function: the sum of a reconstruction term, and the KL divergence regularization term.
End of explanation
from keras.datasets import mnist
import numpy as np
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
vae.fit(x_train, x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, x_test))
Explanation: Training on MNIST Digits
End of explanation
x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)
plt.colorbar()
plt.show()
Explanation: Because our latent space is two-dimensional, there are a few cool visualizations that can be done at this point.
One is to look at the neighborhoods of different classes on the latent 2D plane:
End of explanation
# display a 2D manifold of the digits
n = 15 # figure with 15x15 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# we will sample n points within [-15, 15] standard deviations
grid_x = np.linspace(-15, 15, n)
grid_y = np.linspace(-15, 15, n)
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z_sample = np.array([[xi, yi]]) * epsilon_std
x_decoded = generator.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
plt.imshow(figure)
plt.show()
Explanation: Each of these colored clusters is a type of digit. Close clusters are digits that are structurally similar (i.e. digits that share information in the latent space).
Because the VAE is a generative model, we can also use it to generate new digits! Here we will scan the latent plane, sampling latent points at regular intervals, and generating the corresponding digit for each of these points. This gives us a visualization of the latent manifold that "generates" the MNIST digits.
End of explanation |
6,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>01 - Pandas
Step1: Introduction
Let's directly start with importing some data
Step2: Starting from reading such a tabular dataset, Pandas provides the functionalities to answer questions about this data in a few lines of code. Let's start with a few examples as illustration
Step3: <div class="alert alert-warning">
<ul>
<li>How does the survival rate of the passengers differ between sexes?</li>
</ul>
</div>
Step4: <div class="alert alert-warning">
<ul>
<li>Or how does the survival rate differ between the different classes of the Titanic?</li>
</ul>
</div>
Step5: <div class="alert alert-warning">
<ul>
<li>Are young people (e.g. < 25 years) likely to survive?</li>
</ul>
</div>
Step6: All the needed functionality for the above examples will be explained throughout the course, but as a start
Step7: Let's start with getting some data.
In practice, most of the time you will import the data from some data source (text file, excel, database, ..), and Pandas provides functions for many different formats.
But to start here, let's create a small dataset about a few countries manually from a dictionar of lists
Step8: The object created here is a DataFrame
Step9: A DataFrame is a 2-dimensional, tablular data structure comprised of rows and columns. It is similar to a spreadsheet, a database (SQL) table or the data.frame in R.
<img align="center" width=50% src="../img/pandas/01_table_dataframe1.svg">
A DataFrame can store data of different types (including characters, integers, floating point values, categorical data and more) in columns. In pandas, we can check the data types of the columns with the dtypes attribute
Step10: Each column in a DataFrame is a Series
When selecting a single column of a pandas DataFrame, the result is a pandas Series, a 1-dimensional data structure.
To select the column, use the column label in between square brackets [].
Step11: Pandas objects have attributes and methods
Pandas provides a lot of functionalities for the DataFrame and Series. The .dtypes shown above is an attribute of the DataFrame. Another example is the .columns attribute, returning the column names of the DataFrame
Step12: In addition, there are also functions that can be called on a DataFrame or Series, i.e. methods. As methods are functions, do not forget to use parentheses ().
A few examples that can help exploring the data
Step13: The describe method computes summary statistics for each column
Step14: Sorting your data by a specific column is another important first-check
Step15: The plot method can be used to quickly visualize the data in different ways
Step16: However, for this dataset, it does not say that much
Step17: <div class="alert alert-success">
**EXERCISE 1**
Step18: <div class="alert alert-info">
**Note
Step19: <div class="alert alert-success">
**EXERCISE 3**
Step20: <div class="alert alert-success">
**EXERCISE 4**
Step21: <div class="alert alert-success">
<b>EXERCISE 5</b>
Step22: <div class="alert alert-success">
<b>EXERCISE 6</b>
Step23: <div class="alert alert-success">
**EXERCISE 7** | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
Explanation: <p><font size="6"><b>01 - Pandas: Data Structures </b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: Introduction
Let's directly start with importing some data: the titanic dataset about the passengers of the Titanic and their survival:
End of explanation
df['Age'].hist()
Explanation: Starting from reading such a tabular dataset, Pandas provides the functionalities to answer questions about this data in a few lines of code. Let's start with a few examples as illustration:
<div class="alert alert-warning">
- What is the age distribution of the passengers?
</div>
End of explanation
df.groupby('Sex')[['Survived']].mean()
Explanation: <div class="alert alert-warning">
<ul>
<li>How does the survival rate of the passengers differ between sexes?</li>
</ul>
</div>
End of explanation
df.groupby('Pclass')['Survived'].mean().plot.bar()
Explanation: <div class="alert alert-warning">
<ul>
<li>Or how does the survival rate differ between the different classes of the Titanic?</li>
</ul>
</div>
End of explanation
df['Survived'].mean()
df25 = df[df['Age'] <= 25]
df25['Survived'].mean()
Explanation: <div class="alert alert-warning">
<ul>
<li>Are young people (e.g. < 25 years) likely to survive?</li>
</ul>
</div>
End of explanation
import pandas as pd
Explanation: All the needed functionality for the above examples will be explained throughout the course, but as a start: the data types to work with.
The pandas data structures: DataFrame and Series
To load the pandas package and start working with it, we first import the package. The community agreed alias for pandas is pd, which we will also use here:
End of explanation
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
Explanation: Let's start with getting some data.
In practice, most of the time you will import the data from some data source (text file, excel, database, ..), and Pandas provides functions for many different formats.
But to start here, let's create a small dataset about a few countries manually from a dictionary of lists:
End of explanation
type(countries)
Explanation: The object created here is a DataFrame:
End of explanation
countries.dtypes
Explanation: A DataFrame is a 2-dimensional, tabular data structure comprised of rows and columns. It is similar to a spreadsheet, a database (SQL) table or the data.frame in R.
<img align="center" width=50% src="../img/pandas/01_table_dataframe1.svg">
A DataFrame can store data of different types (including characters, integers, floating point values, categorical data and more) in columns. In pandas, we can check the data types of the columns with the dtypes attribute:
End of explanation
countries['population']
s = countries['population']
type(s)
Explanation: Each column in a DataFrame is a Series
When selecting a single column of a pandas DataFrame, the result is a pandas Series, a 1-dimensional data structure.
To select the column, use the column label in between square brackets [].
End of explanation
countries.columns
Explanation: Pandas objects have attributes and methods
Pandas provides a lot of functionalities for the DataFrame and Series. The .dtypes shown above is an attribute of the DataFrame. Another example is the .columns attribute, returning the column names of the DataFrame:
End of explanation
countries.head() # Top rows
countries.tail() # Bottom rows
Explanation: In addition, there are also functions that can be called on a DataFrame or Series, i.e. methods. As methods are functions, do not forget to use parentheses ().
A few examples that can help exploring the data:
End of explanation
countries['population'].describe()
Explanation: The describe method computes summary statistics for each column:
End of explanation
countries.sort_values(by='population')
Explanation: Sorting your data by a specific column is another important first-check:
End of explanation
countries.plot()
Explanation: The plot method can be used to quickly visualize the data in different ways:
End of explanation
countries['population'].plot.barh() # or .plot(kind='barh')
Explanation: However, for this dataset, it does not say that much:
End of explanation
# pd.read_
# countries.to_
Explanation: <div class="alert alert-success">
**EXERCISE 1**:
* You can play with the `kind` keyword or accessor of the `plot` method in the figure above: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin', 'box'
Note: doing `df.plot(kind="bar", ...)` or `df.plot.bar(...)` is exactly equivalent. You will see both ways in the wild.
</div>
<div style="border: 5px solid #3776ab; border-radius: 2px; padding: 2em;">
## Python recap
Python objects have **attributes** and **methods**:
* Attribute: `obj.attribute` (no parentheses!) -> property of the object (pandas examples: `dtypes`, `columns`, `shape`, ..)
* Method: `obj.method()` (function call with parentheses) -> action (pandas examples: `mean()`, `sort_values()`, ...)
</div>
Importing and exporting data
A wide range of input/output formats are natively supported by pandas:
CSV, text
SQL database
Excel
HDF5
json
html
pickle
sas, stata
Parquet
...
End of explanation
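A small, hedged illustration of that naming pattern (the Excel and Parquet file names are made up for the example, and those writers need their optional engines installed):

```python
import pandas as pd

# Readers are functions on the pandas namespace: pd.read_<format>(...)
df = pd.read_csv("data/titanic.csv")

# Writers are methods on the DataFrame: DataFrame.to_<format>(...)
df.to_csv("titanic_copy.csv", index=False)
df.to_excel("titanic_copy.xlsx", index=False)   # hypothetical file, needs openpyxl
df.to_parquet("titanic_copy.parquet")           # needs pyarrow or fastparquet
```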
# %load _solutions/pandas_01_data_structures1.py
Explanation: <div class="alert alert-info">
**Note: I/O interface**
* All readers are `pd.read_...`
* All writers are `DataFrame.to_...`
</div>
Application on a real dataset
Throughout the pandas notebooks, many of the exercises will use the titanic dataset. This dataset has records of all the passengers of the Titanic, with characteristics of the passengers (age, class, etc. See below), and an indication whether they survived the disaster.
The available metadata of the titanic data set provides the following information:
VARIABLE | DESCRIPTION
------ | --------
Survived | Survival (0 = No; 1 = Yes)
Pclass | Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
Name | Name
Sex | Sex
Age | Age
SibSp | Number of Siblings/Spouses Aboard
Parch | Number of Parents/Children Aboard
Ticket | Ticket Number
Fare | Passenger Fare
Cabin | Cabin
Embarked | Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
<div class="alert alert-success">
**EXERCISE 2**:
* Read the CSV file (available at `data/titanic.csv`) into a pandas DataFrame. Call the result `df`.
</div>
End of explanation
# %load _solutions/pandas_01_data_structures2.py
Explanation: <div class="alert alert-success">
**EXERCISE 3**:
* Quick exploration: show the first 5 rows of the DataFrame.
</div>
End of explanation
# %load _solutions/pandas_01_data_structures3.py
Explanation: <div class="alert alert-success">
**EXERCISE 4**:
* How many records (i.e. rows) does the titanic dataset have?
<details><summary>Hints</summary>
* The length of a DataFrame gives the number of rows (`len(..)`). Alternatively, you can check the "shape" (number of rows, number of columns) of the DataFrame using the `shape` attribute.
</details>
</div>
End of explanation
# %load _solutions/pandas_01_data_structures4.py
Explanation: <div class="alert alert-success">
<b>EXERCISE 5</b>:
* Select the 'Age' column (remember: we can use the [] indexing notation and the column label).
</div>
End of explanation
# %load _solutions/pandas_01_data_structures5.py
Explanation: <div class="alert alert-success">
<b>EXERCISE 6</b>:
* Make a box plot of the Fare column.
</div>
End of explanation
# %load _solutions/pandas_01_data_structures6.py
Explanation: <div class="alert alert-success">
**EXERCISE 7**:
* Sort the rows of the DataFrame by the 'Age' column, with the oldest passenger at the top. Check the help of the `sort_values` function and find out how to sort from the largest values to the lowest values.
</div>
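One possible solution sketch (try it yourself first; the keyword the hint is pointing at is `ascending`):

```python
# Assumes df was created in exercise 2 with pd.read_csv("data/titanic.csv").
df.sort_values(by='Age', ascending=False).head()
```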
End of explanation |
6,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: On-Device Training with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Note
Step3: The train function in the code above uses the GradientTape class to record operations for automatic differentiation. For more information on how to use this class, see the Introduction to gradients and automatic differentiation.
You could use the Model.train_step method of the keras model here instead of a from-scratch implementation. Just note that the loss (and metrics) returned by Model.train_step is the running average, and should be reset regularly (typically each epoch). See Customize Model.fit for details.
Note
Step4: Preprocess the dataset
Pixel values in this dataset are between 0 and 255, and must be normalized to a value between 0 and 1 for processing by the model. Divide the values by 255 to make this adjustment.
Step5: Convert the data labels to categorical values by performing one-hot encoding.
Step6: Note
Step7: Note
Step8: Setup the TensorFlow Lite signatures
The TensorFlow Lite model you saved in the previous step contains several function signatures. You can access them through the tf.lite.Interpreter class and invoke each restore, train, save, and infer signature separately.
Step9: Compare the output of the original model, and the converted lite model
Step10: Above, you can see that the behavior of the model is not changed by the conversion to TFLite.
Retrain the model on a device
After converting your model to TensorFlow Lite and deploying it with your app, you can retrain the model on a device using new data and the train signature method of your model. Each training run generates a new set of weights that you can save for re-use and further improvement of the model, as shown in the next section.
Note
Step11: Above you can see that the on-device training picks up exactly where the pretraining stopped.
Save the trained weights
When you complete a training run on a device, the model updates the set of weights it is using in memory. Using the save signature method you created in your TensorFlow Lite model, you can save these weights to a checkpoint file for later reuse and improve your model.
Step12: In your Android application, you can store the generated weights as a checkpoint file in the internal storage space allocated for your app.
```Java
try (Interpreter interpreter = new Interpreter(modelBuffer)) {
// Conduct the training jobs.
// Export the trained weights as a checkpoint file.
File outputFile = new File(getFilesDir(), "checkpoint.ckpt");
Map<String, Object> inputs = new HashMap<>();
inputs.put("checkpoint_path", outputFile.getAbsolutePath());
Map<String, Object> outputs = new HashMap<>();
interpreter.runSignature(inputs, outputs, "save");
}
```
Restore the trained weights
Any time you create an interpreter from a TFLite model, the interpreter will initially load the original model weights.
So after you've done some training and saved a checkpoint file, you'll need to run the restore signature method to load the checkpoint.
A good rule is "Anytime you create an Interpreter for a model, if the checkpoint exists, load it". If you need to reset the model to the baseline behavior, just delete the checkpoint and create a fresh interpreter.
Step13: The checkpoint was generated by training and saving with TFLite. Above you can see that applying the checkpoint updates the behavior of the model.
Note
Step14: Plot the predicted labels. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
Explanation: On-Device Training with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/examples/on_device_training/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/examples/on_device_training/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/examples/on_device_training/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/examples/on_device_training/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
When deploying TensorFlow Lite machine learning model to device or mobile app, you may want to enable the model to be improved or personalized based on input from the device or end user. Using on-device training techniques allows you to update a model without data leaving your users' devices, improving user privacy, and without requiring users to update the device software.
For example, you may have a model in your mobile app that recognizes fashion items, but you want users to get improved recognition performance over time based on their interests. Enabling on-device training allows users who are interested in shoes to get better at recognizing a particular style of shoe or shoe brand the more often they use your app.
This tutorial shows you how to construct a TensorFlow Lite model that can be incrementally trained and improved within an installed Android app.
Note: The on-device training technique can be added to existing TensorFlow Lite implementations, provided the devices you are targeting support local file storage.
Setup
This tutorial uses Python to train and convert a TensorFlow model before incorporating it into an Android app. Get started by installing and importing the following packages.
End of explanation
IMG_SIZE = 28
class Model(tf.Module):
def __init__(self):
self.model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE), name='flatten'),
tf.keras.layers.Dense(128, activation='relu', name='dense_1'),
tf.keras.layers.Dense(10, name='dense_2')
])
self.model.compile(
optimizer='sgd',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))
# The `train` function takes a batch of input images and labels.
@tf.function(input_signature=[
tf.TensorSpec([None, IMG_SIZE, IMG_SIZE], tf.float32),
tf.TensorSpec([None, 10], tf.float32),
])
def train(self, x, y):
with tf.GradientTape() as tape:
prediction = self.model(x)
loss = self.model.loss(y, prediction)
gradients = tape.gradient(loss, self.model.trainable_variables)
self.model.optimizer.apply_gradients(
zip(gradients, self.model.trainable_variables))
result = {"loss": loss}
return result
@tf.function(input_signature=[
tf.TensorSpec([None, IMG_SIZE, IMG_SIZE], tf.float32),
])
def infer(self, x):
logits = self.model(x)
probabilities = tf.nn.softmax(logits, axis=-1)
return {
"output": probabilities,
"logits": logits
}
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])
def save(self, checkpoint_path):
tensor_names = [weight.name for weight in self.model.weights]
tensors_to_save = [weight.read_value() for weight in self.model.weights]
tf.raw_ops.Save(
filename=checkpoint_path, tensor_names=tensor_names,
data=tensors_to_save, name='save')
return {
"checkpoint_path": checkpoint_path
}
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])
def restore(self, checkpoint_path):
restored_tensors = {}
for var in self.model.weights:
restored = tf.raw_ops.Restore(
file_pattern=checkpoint_path, tensor_name=var.name, dt=var.dtype,
name='restore')
var.assign(restored)
restored_tensors[var.name] = restored
return restored_tensors
Explanation: Note: The On-Device Training APIs are available in TensorFlow version 2.7 and higher.
Classify images of clothing
This example code uses the Fashion MNIST dataset to train a neural network model for classifying images of clothing. This dataset contains 60,000 small (28 x 28 pixel) grayscale images containing 10 different categories of fashion accessories, including dresses, shirts, and sandals.
<figure>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST images">
<figcaption><b>Figure 1</b>: <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).</figcaption>
</figure>
You can explore this dataset in more depth in the Keras classification tutorial.
Build a model for on-device training
TensorFlow Lite models typically have only a single exposed function method (or signature) that allows you to call the model to run an inference. For a model to be trained and used on a device, you must be able to perform several separate operations, including train, infer, save, and restore functions for the model. You can enable this functionality by first extending your TensorFlow model to have multiple functions, and then exposing those functions as signatures when you convert your model to the TensorFlow Lite model format.
The code example below shows you how to add the following functions to a TensorFlow model:
train function trains the model with training data.
infer function invokes the inference.
save function saves the trainable weights into the file system.
restore function loads the trainable weights from the file system.
End of explanation
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Explanation: The train function in the code above uses the GradientTape class to record operations for automatic differentiation. For more information on how to use this class, see the Introduction to gradients and automatic differentiation.
You could use the Model.train_step method of the keras model here instead of a from-scratch implementation. Just note that the loss (and metrics) returned by Model.train_step is the running average, and should be reset regularly (typically each epoch). See Customize Model.fit for details.
Note: The weights generated by this model are serialized into a TensorFlow 1 format checkpoint file.
Prepare the data
Get the Fashion MNIST dataset for training your model.
End of explanation
train_images = (train_images / 255.0).astype(np.float32)
test_images = (test_images / 255.0).astype(np.float32)
Explanation: Preprocess the dataset
Pixel values in this dataset are between 0 and 255, and must be normalized to a value between 0 and 1 for processing by the model. Divide the values by 255 to make this adjustment.
End of explanation
train_labels = tf.keras.utils.to_categorical(train_labels)
test_labels = tf.keras.utils.to_categorical(test_labels)
Explanation: Convert the data labels to categorical values by performing one-hot encoding.
End of explanation
NUM_EPOCHS = 100
BATCH_SIZE = 100
epochs = np.arange(1, NUM_EPOCHS + 1, 1)
losses = np.zeros([NUM_EPOCHS])
m = Model()
train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
train_ds = train_ds.batch(BATCH_SIZE)
for i in range(NUM_EPOCHS):
for x,y in train_ds:
result = m.train(x, y)
losses[i] = result['loss']
if (i + 1) % 10 == 0:
print(f"Finished {i+1} epochs")
print(f" loss: {losses[i]:.3f}")
# Save the trained weights to a checkpoint.
m.save('/tmp/model.ckpt')
plt.plot(epochs, losses, label='Pre-training')
plt.ylim([0, max(plt.ylim())])
plt.xlabel('Epoch')
plt.ylabel('Loss [Cross Entropy]')
plt.legend();
Explanation: Note: Make sure you preprocess your training and testing datasets in the same way, so that your testing accurately evaluates your model's performance.
Train the model
Before converting and setting up your TensorFlow Lite model, complete the initial training of your model using the preprocessed dataset and the train signature method. The following code runs model training for 100 epochs, processing batches of 100 images at a time, and displaying the loss value after every 10 epochs. Since this training run is processing quite a bit of data, it may take a few minutes to finish.
End of explanation
SAVED_MODEL_DIR = "saved_model"
tf.saved_model.save(
m,
SAVED_MODEL_DIR,
signatures={
'train':
m.train.get_concrete_function(),
'infer':
m.infer.get_concrete_function(),
'save':
m.save.get_concrete_function(),
'restore':
m.restore.get_concrete_function(),
})
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
converter.experimental_enable_resource_variables = True
tflite_model = converter.convert()
Explanation: Note: You should complete initial training of your model before converting it to TensorFlow Lite format, so that the model has an initial set of weights, and is able to perform reasonable inferences before you start collecting data and conducting training runs on the device.
Convert model to TensorFlow Lite format
After you have extended your TensorFlow model to enable additional functions for on-device training and completed initial training of the model, you can convert it to TensorFlow Lite format. The following code converts and saves your model to that format, including the set of signatures that you use with the TensorFlow Lite model on a device: train, infer, save, restore.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
infer = interpreter.get_signature_runner("infer")
Explanation: Setup the TensorFlow Lite signatures
The TensorFlow Lite model you saved in the previous step contains several function signatures. You can access them through the tf.lite.Interpreter class and invoke each restore, train, save, and infer signature separately.
End of explanation
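For completeness, the remaining signatures can be fetched the same way (a sketch; the notebook creates the `train`, `save` and `restore` runners later, at the point where each is first needed):

```python
# Each named signature in the converted model becomes a callable runner.
train = interpreter.get_signature_runner("train")
save = interpreter.get_signature_runner("save")
restore = interpreter.get_signature_runner("restore")
```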
logits_original = m.infer(x=train_images[:1])['logits'][0]
logits_lite = infer(x=train_images[:1])['logits'][0]
#@title
def compare_logits(logits):
width = 0.35
offset = width/2
assert len(logits)==2
keys = list(logits.keys())
plt.bar(x = np.arange(len(logits[keys[0]]))-offset,
height=logits[keys[0]], width=0.35, label=keys[0])
plt.bar(x = np.arange(len(logits[keys[1]]))+offset,
height=logits[keys[1]], width=0.35, label=keys[1])
plt.legend()
plt.grid(True)
plt.ylabel('Logit')
plt.xlabel('ClassID')
delta = np.sum(np.abs(logits[keys[0]] - logits[keys[1]]))
plt.title(f"Total difference: {delta:.3g}")
compare_logits({'Original': logits_original, 'Lite': logits_lite})
Explanation: Compare the output of the original model, and the converted lite model:
End of explanation
train = interpreter.get_signature_runner("train")
NUM_EPOCHS = 50
BATCH_SIZE = 100
more_epochs = np.arange(epochs[-1]+1, epochs[-1] + NUM_EPOCHS + 1, 1)
more_losses = np.zeros([NUM_EPOCHS])
for i in range(NUM_EPOCHS):
for x,y in train_ds:
result = train(x=x, y=y)
more_losses[i] = result['loss']
if (i + 1) % 10 == 0:
print(f"Finished {i+1} epochs")
print(f" loss: {more_losses[i]:.3f}")
plt.plot(epochs, losses, label='Pre-training')
plt.plot(more_epochs, more_losses, label='On device')
plt.ylim([0, max(plt.ylim())])
plt.xlabel('Epoch')
plt.ylabel('Loss [Cross Entropy]')
plt.legend();
Explanation: Above, you can see that the behavior of the model is not changed by the conversion to TFLite.
Retrain the model on a device
After converting your model to TensorFlow Lite and deploying it with your app, you can retrain the model on a device using new data and the train signature method of your model. Each training run generates a new set of weights that you can save for re-use and further improvement of the model, as shown in the next section.
Note: Since training tasks are resource intensive, you should consider performing them when users are not actively interacting with the device, and as a background process. Consider using the WorkManager API to schedule model retraining as an asynchronous task.
On Android, you can perform on-device training with TensorFlow Lite using either Java or C++ APIs. In Java, use the Interpreter class to load a model and drive model training tasks. The following example shows how to run the training procedure using the runSignature method:
```Java
try (Interpreter interpreter = new Interpreter(modelBuffer)) {
int NUM_EPOCHS = 100;
int BATCH_SIZE = 100;
int IMG_HEIGHT = 28;
int IMG_WIDTH = 28;
int NUM_TRAININGS = 60000;
int NUM_BATCHES = NUM_TRAININGS / BATCH_SIZE;
List<FloatBuffer> trainImageBatches = new ArrayList<>(NUM_BATCHES);
List<FloatBuffer> trainLabelBatches = new ArrayList<>(NUM_BATCHES);
// Prepare training batches.
for (int i = 0; i < NUM_BATCHES; ++i) {
FloatBuffer trainImages = FloatBuffer.allocateDirect(BATCH_SIZE * IMG_HEIGHT * IMG_WIDTH).order(ByteOrder.nativeOrder());
FloatBuffer trainLabels = FloatBuffer.allocateDirect(BATCH_SIZE * 10).order(ByteOrder.nativeOrder());
// Fill the data values...
trainImageBatches.add(trainImages.rewind());
trainImageLabels.add(trainLabels.rewind());
}
// Run training for a few steps.
float[] losses = new float[NUM_EPOCHS];
for (int epoch = 0; epoch < NUM_EPOCHS; ++epoch) {
for (int batchIdx = 0; batchIdx < NUM_BATCHES; ++batchIdx) {
Map<String, Object> inputs = new HashMap<>();
inputs.put("x", trainImageBatches.get(batchIdx));
inputs.put("y", trainLabelBatches.get(batchIdx));
Map<String, Object> outputs = new HashMap<>();
FloatBuffer loss = FloatBuffer.allocate(1);
outputs.put("loss", loss);
interpreter.runSignature(inputs, outputs, "train");
// Record the last loss.
if (batchIdx == NUM_BATCHES - 1) losses[epoch] = loss.get(0);
}
// Print the loss output for every 10 epochs.
if ((epoch + 1) % 10 == 0) {
System.out.println(
"Finished " + (epoch + 1) + " epochs, current loss: " + loss.get(0));
}
}
// ...
}
```
You can see a complete code example of model retraining inside an Android app in the model personalization demo app.
Run training for a few epochs to improve or personalize the model. In practice, you would run this additional training using data collected on the device. For simplicity, this example uses the same training data as the previous training step.
End of explanation
save = interpreter.get_signature_runner("save")
save(checkpoint_path=np.array("/tmp/model.ckpt", dtype=np.string_))
Explanation: Above you can see that the on-device training picks up exactly where the pretraining stopped.
Save the trained weights
When you complete a training run on a device, the model updates the set of weights it is using in memory. Using the save signature method you created in your TensorFlow Lite model, you can save these weights to a checkpoint file for later reuse and improve your model.
End of explanation
another_interpreter = tf.lite.Interpreter(model_content=tflite_model)
another_interpreter.allocate_tensors()
infer = another_interpreter.get_signature_runner("infer")
restore = another_interpreter.get_signature_runner("restore")
logits_before = infer(x=train_images[:1])['logits'][0]
# Restore the trained weights from /tmp/model.ckpt
restore(checkpoint_path=np.array("/tmp/model.ckpt", dtype=np.string_))
logits_after = infer(x=train_images[:1])['logits'][0]
compare_logits({'Before': logits_before, 'After': logits_after})
Explanation: In your Android application, you can store the generated weights as a checkpoint file in the internal storage space allocated for your app.
```Java
try (Interpreter interpreter = new Interpreter(modelBuffer)) {
// Conduct the training jobs.
// Export the trained weights as a checkpoint file.
File outputFile = new File(getFilesDir(), "checkpoint.ckpt");
Map<String, Object> inputs = new HashMap<>();
inputs.put("checkpoint_path", outputFile.getAbsolutePath());
Map<String, Object> outputs = new HashMap<>();
interpreter.runSignature(inputs, outputs, "save");
}
```
Restore the trained weights
Any time you create an interpreter from a TFLite model, the interpreter will initially load the original model weights.
So after you've done some training and saved a checkpoint file, you'll need to run the restore signature method to load the checkpoint.
A good rule is "Anytime you create an Interpreter for a model, if the checkpoint exists, load it". If you need to reset the model to the baseline behavior, just delete the checkpoint and create a fresh interpreter.
End of explanation
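That rule of thumb can be written down as a small sketch (illustrative only, reusing the `/tmp/model.ckpt` path from above):

```python
import os

ckpt_path = "/tmp/model.ckpt"
fresh = tf.lite.Interpreter(model_content=tflite_model)
fresh.allocate_tensors()
if os.path.exists(ckpt_path):
    # A checkpoint exists: load the trained weights over the baseline model.
    fresh.get_signature_runner("restore")(
        checkpoint_path=np.array(ckpt_path, dtype=np.string_))
# Otherwise the interpreter keeps the original weights baked into the .tflite file.
```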
infer = another_interpreter.get_signature_runner("infer")
result = infer(x=test_images)
predictions = np.argmax(result["output"], axis=1)
true_labels = np.argmax(test_labels, axis=1)
result['output'].shape
Explanation: The checkpoint was generated by training and saving with TFLite. Above you can see that applying the checkpoint updates the behavior of the model.
Note: Loading the saved weights from the checkpoint can take time, based on the number of variables in the model and the size of the checkpoint file.
In your Android app, you can restore the serialized, trained weights from the checkpoint file you stored earlier.
Java
try (Interpreter anotherInterpreter = new Interpreter(modelBuffer)) {
// Load the trained weights from the checkpoint file.
File outputFile = new File(getFilesDir(), "checkpoint.ckpt");
Map<String, Object> inputs = new HashMap<>();
inputs.put("checkpoint_path", outputFile.getAbsolutePath());
Map<String, Object> outputs = new HashMap<>();
anotherInterpreter.runSignature(inputs, outputs, "restore");
}
Note: When your application restarts, you should reload your trained weights prior to running new inferences.
Run Inference using trained weights
Once you have loaded previously saved weights from a checkpoint file, running the infer method uses those weights with your original model to improve predictions. After loading the saved weights, you can use the infer signature method as shown below.
Note: Loading the saved weights is not required to run an inference, but running in that configuration produces predictions using the originally trained model, without improvements.
End of explanation
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
def plot(images, predictions, true_labels):
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(images[i], cmap=plt.cm.binary)
color = 'b' if predictions[i] == true_labels[i] else 'r'
plt.xlabel(class_names[predictions[i]], color=color)
plt.show()
plot(test_images, predictions, true_labels)
predictions.shape
Explanation: Plot the predicted labels.
End of explanation |
6,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
glob contains function glob that finds files that match a pattern
* matches 0+ characters; ? matches any one char
Step1: results in a list of strings, we can loop oer
we want to create sets of plots
Step2: We can ask Python to take different actions, depending on a condition, with an if statement
Step3: second line of code above uses keyword if to denote choice
if the test after if is true, the body of the if are executed
if test false the body else is executed
conditional statements don't have to include else - if not present python does nothing
Step4: we can also chain several tests together using elif, short for else if
Step5: NOTE
Step6: while or is true if at least one part is true
Step7: Challenge - making choices
Step8: Challenge - making choices 2 | Python Code:
print(glob.glob('data/inflammation*.csv'))
Explanation: glob contains function glob that finds files that match a pattern
* matches 0+ characters; ? matches any one char
End of explanation
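A hedged example of the `?` wildcard (it assumes the data folder uses two-digit file names such as `inflammation-01.csv`, as in the Software Carpentry dataset):

```python
import glob

# ? matches exactly one character, so this only picks up names ending in 01-09.
print(glob.glob('data/inflammation-0?.csv'))
```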
# loop here
counter = 0
for filename in glob.glob('data/*.csv'):
#counter+=1
counter = counter + 1
print("number of files:", counter)
counter = 0
for filename in glob.glob('data/infl*.csv'):
#counter+=1
counter = counter + 1
print("number of files:", counter)
counter = 0
for filename in glob.glob('data/infl*.csv'):
#counter+=1
data = numpy.loadtxt(fname=filename, delimiter=',')
print(filename, "mean is: ", data.mean())
counter = counter + 1
print("number of files:", counter)
Explanation: results in a list of strings, which we can loop over
we want to create sets of plots
End of explanation
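A sketch of what those "sets of plots" could look like, one figure per file (assumes matplotlib is available alongside the glob and numpy imports already used in this notebook):

```python
import glob
import numpy
import matplotlib.pyplot as plt

for filename in sorted(glob.glob('data/inflammation*.csv')):
    data = numpy.loadtxt(fname=filename, delimiter=',')

    fig = plt.figure(figsize=(10.0, 3.0))
    fig.suptitle(filename)

    # One panel per per-day summary statistic.
    for position, (label, values) in enumerate(
            [('average', data.mean(axis=0)),
             ('max', data.max(axis=0)),
             ('min', data.min(axis=0))], start=1):
        axes = fig.add_subplot(1, 3, position)
        axes.set_ylabel(label)
        axes.plot(values)

    fig.tight_layout()
    plt.show()
```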
#We use an if statement to take different actions
#based on conditions
num = 37
if num > 100:
print('greater')
else:
print('not greater')
print('done')
Explanation: We can ask Python to take different actions, depending on a condition, with an if statement:
Making choices
last lesson we discovereed something suspicious in our inflammatin data by drawing plots
how can python recognized these different features and act on it
we will write code that runs on certain conditions
End of explanation
num = 53
print('before conditional...')
if num > 100:
print('53 is greater than 100')
print('...after conditional')
Explanation: second line of code above uses keyword if to denote choice
if the test after if is true, the body of the if are executed
if test false the body else is executed
conditional statements don't have to include else - if not present python does nothing
End of explanation
num = -3
if num > 0:
print(num, "is positive")
elif num == 0:
print(num, "is zero")
else:
print(num, "is negative")
Explanation: we can also chain several tests together using elif, short for else if
End of explanation
if (1 > 0) and (-1 > 0):
print('both parts are true')
else:
print('at least one part is false')
Explanation: NOTE: we use == to test for equality rather than single equal b/c the later is the assignment operator
we can also combine tests using and and or. and is only true if both parts are true
End of explanation
if (1 < 0) or (-1 < 0):
print('at least one test is true')
Explanation: while or is true if at least one part is true:
End of explanation
if 4 > 5:
print('A')
elif 4 == 5:
print('B')
elif 4 < 5:
print('C')
Explanation: Challenge - making choices:
Which of the following would be printed if you were to run this code? Why did you pick this answer?
A
B
C
B and C
python
if 4 > 5:
print('A')
elif 4 == 5:
print('B')
elif 4 < 5:
print('C')
End of explanation
if '':
print('empty string is true')
if 'word':
print('word is true')
if []:
print('empty list is true')
if [1, 2, 3]:
print('non-empty list is true')
if 0:
print('zero is true')
if 1:
print('one is true')
Explanation: Challenge - making choices 2:
True and False are special words in Python called booleans which represent true and false statements. However, they aren’t the only values in Python that are true and false. In fact, any value can be used in an if or elif. After reading and running the code below, explain what the rule is for which values are considered true and which are considered false.
python
if '':
print('empty string is true')
if 'word':
print('word is true')
if []:
print('empty list is true')
if [1, 2, 3]:
print('non-empty list is true')
if 0:
print('zero is true')
if 1:
print('one is true')
End of explanation |
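The rule the exercise is after can be checked directly with `bool()` (a quick sketch, not part of the original lesson): empty or zero-like values behave as false, everything else as true.

```python
for value in ['', 'word', [], [1, 2, 3], 0, 1]:
    print(repr(value), '->', bool(value))
```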
6,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of use of Package PVSystems
Author
Step1: Extraterrestrial radiation
Extraterrestrial radiation. Computation for a specific day
Define input variable (day of the year)
i.e.
2nd of february is n=31+2=33
Step2: Extraterrestrial radiation. Computation for a range of days
Step3: Equation of Time
Computation of $E$ for a complete year
Step4: Example of computation of solar time for Santander, Spain
3rd January, 11
Step5: Example of computation of solar time for Madison, WI
3rd February, 10
Step6: Angle of incidence
Calculate the angle of incidence of beam radiation on a surface located at Madison, WI at 10 | Python Code:
import sys
print(sys.version)
from pvsystems import PVSystems as PVS
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Example of use of Package PVSystems
Author: Mario Mañana. University of Cantabria
Email: [email protected]
Version: 1.0
End of explanation
S1=PVS()
n=33
Gon = S1.Gon( n)
print(str(np.round(Gon,2)) + " W/m^2")
Explanation: Extraterrestrial radiation
Extraterrestrial radiation. Computation for a specific day
Define input variable (day of the year)
i.e.
2nd of february is n=31+2=33
End of explanation
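For reference (my addition; the package presumably evaluates something equivalent internally), the usual textbook expression for the extraterrestrial radiation on day $n$ is

$$ G_{on} = G_{sc}\left(1 + 0.033\,\cos\frac{360\,n}{365}\right), \qquad G_{sc} \approx 1367\ \mathrm{W/m^2}. $$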
Nday = np.linspace(1, 365, 365)
Gsc = []
for d in Nday:
Gsc.append( S1.Gon(d))
plt.plot(Nday,Gsc)
plt.xlabel('day of the year')
plt.ylabel('Extraterrestrial Solar Radiation [W/m$^2$]')
plt.grid()
Explanation: Extraterrestrial radiation. Computation for a range of days
End of explanation
Ea = []
for d in Nday:
Ea.append( S1.ET(d))
plt.plot(Nday,Ea)
plt.xlabel('day of the year')
plt.ylabel('E [min]')
plt.grid()
Explanation: Equation of Time
Computation of $E$ for a complete year
End of explanation
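For reference (my addition; the library's `ET` method presumably implements an equivalent fit), a common closed form for the equation of time in minutes is

$$ E = 229.2\,\big(0.000075 + 0.001868\cos B - 0.032077\sin B - 0.014615\cos 2B - 0.04089\sin 2B\big), \qquad B = \frac{(n-1)\,360}{365}. $$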
STD_Lat=43.46
STD_Lon=3.80
day=3
month=1
hour_std=11
min_std=50
Santander=PVS(Location='Santander', Latitude=STD_Lat, Longitude=STD_Lon)
STD_Lat=43.46
STD_Lon=3.80
day=6
month=5
hour_std=18
min_std=50
Santander1=PVS(Location='Santander', Country='dd', Latitude=STD_Lat, Longitude=STD_Lon)
# input: day .- Day of the month [1,31]
# month .- Month of the year [1,12]
# hour .- Hour std time [0,23]
# minute .- Minute std time [0,59]
# Long .- Longitude in degrees [0,360] East
# return: Solar time [minutes]
time_solar=Santander1.StandardTimetoSolarTime( day, month, hour_std, min_std)
print('Solar Time: ' + str(time_solar) + ' min')
ET=Santander1.ET( Santander1.DayOfYear(6,5))
print('ET: ' + str(ET) + ' min')
gg=Santander.StandardTimetoHM( time_solar, day, month)
print('Solar time: ' + str(gg[0]) + ':' + str(gg[1]))
Explanation: Example of computation of solar time for Santander, Spain
3rd January, 11:50
End of explanation
Madison_Lon=89.40
Madison_Lat=43.07
day=3
month=2
hour_std=10
min_std=30
Madison=PVS(Location='Madison', Country='USA', Latitude=Madison_Lat, Longitude=Madison_Lon)
# input: day .- Day of the month [1,31]
# month .- Month of the year [1,12]
# hour .- Hour std time [0,23]
# minute .- Minute std time [0,59]
# Long .- Longitude in degrees [0,360] East
# return: Solar time [minutes]
print('Longitude std: ' + str( Madison.LongStd()))
time_solar=Madison.StandardTimetoSolarTime( day, month, hour_std, min_std)
print('Solar Time: ' + str(time_solar) + ' min')
ET=Madison.ET( Madison.DayOfYear( day, month))
print('ET: ' + str(ET) + ' min')
gg=Madison.StandardTimetoHM( time_solar, day, month)
print('Solar time: ' + str(gg[0]) + ':' + str(gg[1]))
Explanation: Example of computation of solar time for Madison, WI
3rd February, 10:30
End of explanation
Madison_Lon=89.40
Madison_Lat=43.07
day=13
month=2
hour_solar=10
min_solar=30
beta=45.0
gamma=15.0
Madison=PVS(Location='Madison', Country='USA', Latitude=Madison_Lat, Longitude=Madison_Lon)
theta=Madison.Theta( day, month, hour_solar, min_solar, gamma, beta)
print('Angle of incidence: ' + str(np.round(theta,2)) + ' degrees')
Explanation: Angle of incidence
Calculate the angle of incidence of beam radiation on a surface located at Madison, WI at 10:30 (solar time) on February 13, if the surface is tilted 45 degrees from the horizontal and pointed 15 degrees west of south.
End of explanation |
6,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting masks back to annotations
Overview
Step1: 1. Connect girder client and set parameters
Step2: Let's inspect the ground truth codes file
This contains the ground truth codes and information dataframe. This is a dataframe that is indexed by the annotation group name and has the following columns
Step3: Read and visualize mask
Step4: 2. Get contours from mask
This function get_contours_from_mask() generates contours from a mask image. There are many parameters that can be set but most have defaults set for the most common use cases. The only required parameters you must provide are MASK and GTCodes_df, but you may want to consider setting the following parameters based on your specific needs
Step5: Extract contours
Step6: Let's inspect the contours dataframe
The columns that really matter here are group, color, coords_x, and coords_y.
Step7: 3. Get annotation documents from contours
This method get_annotation_documents_from_contours() generates formatted annotation documents from contours that can be posted to the DSA server.
Step8: As mentioned in the docs, this function wraps get_single_annotation_document_from_contours()
Step9: Let's get a list of annotation documents (each is a dictionary). For the purpose of this tutorial,
we separate the documents by group (i.e. each document is composed of polygons from the same
style/group). You could decide to allow heterogeneous groups in the same annotation document by
setting separate_docs_by_group to False. We place 10 polygons in each document for this demo
for illustration purposes. Realistically you would want each document to contain several hundred depending on their complexity. Placing too many polygons in each document can lead to performance issues when rendering in HistomicsUI.
Get annotation documents
Step10: Let's examine one of the documents.
Limit display to the first two elements (polygons) and cap the vertices for clarity.
Step11: Post the annotation to the correct item/slide in DSA | Python Code:
import os
CWD = os.getcwd()
import girder_client
from pandas import read_csv
from imageio import imread
from histomicstk.annotations_and_masks.masks_to_annotations_handler import (
get_contours_from_mask,
get_single_annotation_document_from_contours,
get_annotation_documents_from_contours)
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 7, 7
Explanation: Converting masks back to annotations
Overview:
Most segmentation algorithms produce outputs in an image format. Visualizing these outputs in HistomicsUI requires conversion from mask images to an annotation document containing (x,y) coordinates in the whole-slide image coordinate frame. This notebook demonstrates this conversion process in two steps:
Converting a mask image into contours (coordinates in the mask frame)
Placing contours data into a format following the annotation document schema that can be pushed to DSA for visualization in HistomicsUI.
This notebook is based on work described in Amgad et al, 2019:
Mohamed Amgad, Habiba Elfandy, Hagar Hussein, ..., Jonathan Beezley, Deepak R Chittajallu, David Manthey, David A Gutman, Lee A D Cooper, Structured crowdsourcing enables convolutional segmentation of histology images, Bioinformatics, 2019, btz083
Where to look?
|_ histomicstk/
|_annotations_and_masks/
| |_masks_to_annotations_handler.py
|_tests/
|_test_masks_to_annotations_handler.py
End of explanation
# APIURL = 'http://demo.kitware.com/histomicstk/api/v1/'
# SAMPLE_SLIDE_ID = '5bbdee92e629140048d01b5d'
APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
SAMPLE_SLIDE_ID = '5d586d76bd4404c6b1f286ae'
# Connect to girder client
gc = girder_client.GirderClient(apiUrl=APIURL)
gc.authenticate(interactive=True)
# gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
Explanation: 1. Connect girder client and set parameters
End of explanation
# read GTCodes dataframe
GTCODE_PATH = os.path.join(
CWD, '..', '..', 'tests', 'test_files', 'sample_GTcodes.csv')
GTCodes_df = read_csv(GTCODE_PATH)
GTCodes_df.index = GTCodes_df.loc[:, 'group']
GTCodes_df.head()
Explanation: Let's inspect the ground truth codes file
This contains the ground truth codes and information dataframe. This is a dataframe that is indexed by the annotation group name and has the following columns:
group: group name of annotation (string), eg. "mostly_tumor"
GT_code: int, desired ground truth code (in the mask) Pixels of this value belong to corresponding group (class)
color: str, rgb format. eg. rgb(255,0,0).
NOTE:
Zero pixels have special meaning and do not encode specific ground truth class. Instead, they simply mean 'Outside ROI' and should be ignored during model training or evaluation.
End of explanation
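# If you do not have a CSV on disk, a GTCodes-style dataframe can also be built
# directly. The group names below appear elsewhere in this notebook; the specific
# GT_code values and two of the colors are made-up examples.
from pandas import DataFrame
GTCodes_example = DataFrame({
    'group': ['roi', 'mostly_tumor', 'mostly_stroma'],
    'GT_code': [254, 1, 2],
    'color': ['rgb(200,0,150)', 'rgb(255,0,0)', 'rgb(0,125,255)'],
})
GTCodes_example.index = GTCodes_example.loc[:, 'group']
GTCodes_example.head()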
# read mask
X_OFFSET = 59206
Y_OFFSET = 33505
MASKNAME = "TCGA-A2-A0YE-01Z-00-DX1.8A2E3094-5755-42BC-969D-7F0A2ECA0F39" + \
"_left-%d_top-%d_mag-BASE.png" % (X_OFFSET, Y_OFFSET)
MASKPATH = os.path.join(CWD, '..', '..', 'tests', 'test_files', 'annotations_and_masks', MASKNAME)
MASK = imread(MASKPATH)
plt.figure(figsize=(7,7))
plt.imshow(MASK)
plt.title(MASKNAME[:23])
plt.show()
Explanation: Read and visualize mask
End of explanation
print(get_contours_from_mask.__doc__)
Explanation: 2. Get contours from mask
This function get_contours_from_mask() generates contours from a mask image. There are many parameters that can be set but most have defaults set for the most common use cases. The only required parameters you must provide are MASK and GTCodes_df, but you may want to consider setting the following parameters based on your specific needs: get_roi_contour, roi_group, discard_nonenclosed_background, background_group, that control behaviour regarding region of interest (ROI) boundary and background pixel class (e.g. stroma).
End of explanation
# Let's extract all contours from a mask, including ROI boundary. We will
# be discarding any stromal contours that are not fully enclosed within a
# non-stromal contour since we already know that stroma is the background
# group. This is so things look uncluttered when posted to DSA.
groups_to_get = None
contours_df = get_contours_from_mask(
MASK=MASK, GTCodes_df=GTCodes_df, groups_to_get=groups_to_get,
get_roi_contour=True, roi_group='roi',
discard_nonenclosed_background=True,
background_group='mostly_stroma',
MIN_SIZE=30, MAX_SIZE=None, verbose=True,
monitorPrefix=MASKNAME[:12] + ": getting contours")
Explanation: Extract contours
End of explanation
contours_df.head()
Explanation: Let's inspect the contours dataframe
The columns that really matter here are group, color, coords_x, and coords_y.
End of explanation
print(get_annotation_documents_from_contours.__doc__)
Explanation: 3. Get annotation documents from contours
This method get_annotation_documents_from_contours() generates formatted annotation documents from contours that can be posted to the DSA server.
End of explanation
print(get_single_annotation_document_from_contours.__doc__)
Explanation: As mentioned in the docs, this function wraps get_single_annotation_document_from_contours()
End of explanation
# get list of annotation documents
annprops = {
'X_OFFSET': X_OFFSET,
'Y_OFFSET': Y_OFFSET,
'opacity': 0.2,
'lineWidth': 4.0,
}
annotation_docs = get_annotation_documents_from_contours(
contours_df.copy(), separate_docs_by_group=True, annots_per_doc=10,
docnamePrefix='demo', annprops=annprops,
verbose=True, monitorPrefix=MASKNAME[:12] + ": annotation docs")
Explanation: Let's get a list of annotation documents (each is a dictionary). For the purpose of this tutorial,
we separate the documents by group (i.e. each document is composed of polygons from the same
style/group). You could decide to allow heterogeneous groups in the same annotation document by
setting separate_docs_by_group to False. We place 10 polygons in each document for this demo
for illustration purposes. Realistically you would want each document to contain several hundred depending on their complexity. Placing too many polygons in each document can lead to performance issues when rendering in HistomicsUI.
Get annotation documents
End of explanation
ann_doc = annotation_docs[0].copy()
ann_doc['elements'] = ann_doc['elements'][:2]
for i in range(2):
ann_doc['elements'][i]['points'] = ann_doc['elements'][i]['points'][:5]
ann_doc
Explanation: Let's examine one of the documents.
Limit display to the first two elements (polygons) and cap the vertices for clarity.
End of explanation
# deleting existing annotations in target slide (if any)
existing_annotations = gc.get('/annotation/item/' + SAMPLE_SLIDE_ID)
for ann in existing_annotations:
gc.delete('/annotation/%s' % ann['_id'])
# post the annotation documents you created
for annotation_doc in annotation_docs:
resp = gc.post(
"/annotation?itemId=" + SAMPLE_SLIDE_ID, json=annotation_doc)
Explanation: Post the annotation to the correct item/slide in DSA
End of explanation |
6,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: CSE 6040, Fall 2015 [05, Part A]
Step6: Exercise
Step8: Putting it all together
Step9: Exercise | Python Code:
# === Sparse vector: definition ===
from collections import defaultdict
def sparse_vector ():
return defaultdict (int)
def print_sparse_vector (x):
for key, value in x.items ():
print ("%s: %d" % (key, value))
# === Sparse vector demo ===
def alpha_chars (text):
    """(Generator) Yields each of the alphabetic characters in a string."""
for letter in text:
if letter.isalpha ():
yield letter
text = """How much wood could a woodchuck chuck
if a woodchuck could chuck wood?"""
letter_freqs = sparse_vector ()
for letter in alpha_chars (text.lower ()):
letter_freqs[letter] += 1
# If you really wanted an list of the letters
letter_freqs2 = [a for a in alpha_chars (text.lower ())]
print_sparse_vector (letter_freqs)
assert letter_freqs['o'] == 11 and letter_freqs['h'] == 6
print ("\n(Passed partial test.)")
Explanation: CSE 6040, Fall 2015 [05, Part A]: A-priori algorithm wrap-up
The first part of today's class is the following notebook, which continues the last two exercises from Lab 4.
Sparse vectors and matrices
When we ended Lab 3 in class, we asked you to complete the following exercise, which is a test to see whether you understand how default dictionaries work.
First, recall the notion of a sparse (integer) vector.
End of explanation
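# Quick illustration of why defaultdict(int) is handy here: it hands back 0 for
# keys it has not seen yet, so the `+= 1` counting pattern needs no key checks.
d = sparse_vector ()
d['z'] += 1
print (d['z'], d['q'])  # -> 1 0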
# === COMPLETE THIS FUNCTION ===
# Hint: See the definition of `print_sparse_matrix()`
# to see the interface to which your sparse matrix object
# should conform.
def sparse_matrix ():
    """Returns an empty sparse matrix that can hold integer counts
    of pairs of elements.
    """
return defaultdict (sparse_vector)
def print_sparse_matrix (x):
for i, row_i in x.items ():
for j, value in row_i.items ():
print ("[%s, %s]: %d" % (i, j, value))
# === COMPLETE THIS FUNCTION ===
# Hint: Look at how this function is used, below.
import itertools
def alpha_chars_pairs (text):
    """(Generator) Yields every one of the 4-choose-2 pairs of
    'positionally distinct' alphabetic characters in a string.
    Assume 'text' is a single word.
    That is, each position of the string is regarded as distinct,
    but the pair of characters coming from positions (i, j),
    where i != j, are considered the "same" as the paired
    positions (j, i). Non-alphabetic characters should be
    ignored.
    For instance, `alpha_chars_pairs ("te3x_t")`: the string
    has just 4 positionally distinct characters, so this routine
    should return the 4 choose 2 == 6 pairs:
    ('t', 'e') <-- from positions (0, 1)
    ('t', 'x') <-- from positions (0, 3)
    ('t', 't') <-- from positions (0, 5)
    ('e', 'x') <-- from positions (1, 3)
    ('e', 't') <-- from positions (1, 5)
    ('x', 't') <-- from positions (3, 5)
    """
# Shang's neat solution!
return itertools.combinations (alpha_chars (text.lower ()), 2)
    # Rich's original solution, which is less neat. It is kept commented out:
    # an active `yield` below would turn this whole function into a generator
    # and silently break the `return` above.
    # alpha_text = list (alpha_chars (text.lower ()))
    # for i in range (0, len (alpha_text)):
    #     for j in range (i+1, len (alpha_text)):
    #         yield (alpha_text[i], alpha_text[j])
# === Testing code follows ===
# Compute frequency of pairs of positionally distinct,
# case-insensitive alphabetic characters in a word.
letter_pair_counts = sparse_matrix ()
words = text.split ()
for word in words:
for w_i, w_j in alpha_chars_pairs (word.lower ()):
# Enforce convention: w_i < w_j
w_i, w_j = min (w_i, w_j), max (w_i, w_j)
letter_pair_counts[w_i][w_j] += 1
print ("Text: '%s'" % text)
print ("\n==> Frequencies:")
print_sparse_matrix (letter_pair_counts)
assert letter_pair_counts['c']['c'] == 4
assert letter_pair_counts['h']['o'] == 5
print ("\n(Passed partial test.)")
Explanation: Exercise: Sparse matrices. Suppose that we instead want to compute how frequently pairs of letters occur within words.
Instead of a sparse vector, you might maintain a table, or sparse matrix, such that the $(i, j)$ entry counts the number of times the letters $i$ and $j$ co-occur within a word.
Complete the code below to implement a sparse matrix that counts the number of times that a pair of letters co-occurs in a word. In particular, fill in the code for sparse_matrix() and alpha_chars_pairs().
End of explanation
# This code box contains some helper routines needed to implement
# the A-Priori algorithm.
import re
import email.parser
import os
EMAIL_PATTERN = re.compile (r'[\w+.]+@[\w.]+')
def messages (maildir_root):
    """(Generator) Given a mailbox directory name, yields an
    email object for each message therein.
    """
for base, dirs, files in os.walk (maildir_root):
for filename in files:
filepath = os.path.join (base, filename)
email_file = open (filepath)
msg = email.parser.Parser ().parse (email_file)
email_file.close ()
if len (msg) > 0: # Patch for non-email files?
yield msg
Explanation: Putting it all together: The A-Priori algorithm
Using all of the preceding material, implement the A-Priori algorithm from the previous Lab 3 notebook to detect frequent email correspondents.
But first, here's a little bit of helper code from last time, which you'll find useful.
End of explanation
# Specify maildir location; you may need to update these paths.
MAILDIR = 'enron-maildir-subset/skilling-j' # Skilling's mail only
#MAILDIR = 'enron-maildir-subset' # Full subset
# Specify the minimum number of occurrences to be considered "frequent"
THRESHOLD = 65
# === FILL-IN YOUR IMPLEMENTATION AND TEST CODE BELOW ==
pass
Explanation: Exercise: The A-Priori algorithm applied to email. Your task is to implement the a-priori algorithm to generate a list of commonly co-occurring correspondents.
You may make the following simplifying assumptions, which may or may not be valid depending on what the true analysis end-goal is.
* You need only examine the 'From:' and 'To:' fields of an email message. Ignore all other fields.
* You should only "count" an email address if both the 'From:' and 'To:' fields are set. Otherwise, you cannot tell from whom the message was sent or who is the recipient, and may therefore ignore the interaction.
* Consider pairs that consist of a sender and a recipient. In other words, do not match multiple recipients of a single message as a "pair."
* Ignore the direction of the exchange. That is, regard [email protected] sending to [email protected] as the same pair as [email protected] sending to [email protected].
For Jeffrey Skilling's maildir and a threshold of 65 or more co-occurrences, our solution finds 10 frequently corresponding pairs. For the full data set, it finds 140 pairs.
End of explanation |
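# One possible sketch of a solution (untested here, since it needs the Enron
# maildir on disk). Pass 1 finds frequent individual addresses; pass 2 counts
# sender/recipient pairs among them, ignoring direction, per the A-Priori idea.
def find_frequent_pairs (maildir_root, threshold):
    # Pass 1: count individual addresses.
    addr_counts = sparse_vector ()
    for msg in messages (maildir_root):
        if msg['From'] and msg['To']:
            for addr in (EMAIL_PATTERN.findall (msg['From'].lower ())
                         + EMAIL_PATTERN.findall (msg['To'].lower ())):
                addr_counts[addr] += 1
    frequent_addrs = {a for a, c in addr_counts.items () if c >= threshold}
    # Pass 2: count sender/recipient pairs whose members are both frequent.
    pair_counts = sparse_matrix ()
    for msg in messages (maildir_root):
        if not (msg['From'] and msg['To']):
            continue
        senders = EMAIL_PATTERN.findall (msg['From'].lower ())
        if not senders:
            continue
        a = senders[0]
        for b in EMAIL_PATTERN.findall (msg['To'].lower ()):
            if a != b and a in frequent_addrs and b in frequent_addrs:
                i, j = min (a, b), max (a, b)   # ignore direction
                pair_counts[i][j] += 1
    return [(i, j, c) for i, row in pair_counts.items ()
            for j, c in row.items () if c >= threshold]

#frequent = find_frequent_pairs (MAILDIR, THRESHOLD)
#print (len (frequent), "frequently corresponding pairs found.")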
6,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TFP Release Notes notebook (0.13.0)
The intent of this notebook is to help TFP 0.13.0 "come to life" via some small snippets - little demos of things you can achieve with TFP.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Distributions [core math]
BetaQuotient
Ratio of two independent Beta-distributed random variables
Step3: DeterminantalPointProcess
Distribution over subsets (represented as one-hot) of a given set. Samples follow a repulsivity property (probabilities are proportional to the volume spanned by vectors corresponding to the selected subset of points), which tends toward sampling diverse subsets. [Compare against i.i.d. Bernoulli samples.]
Step4: SigmoidBeta
Log-odds of two gamma distributions. More numerically stable sample space than Beta.
Step5: Zipf
Added JAX support.
Step6: NormalInverseGaussian
Flexible parametric family that supports heavy tails, skewed, and vanilla Normal.
MatrixNormalLinearOperator
Matrix Normal distribution.
Step7: MatrixStudentTLinearOperator
Matrix T distribution.
Step8: Distributions [software / wrappers]
Sharded
Shards independent event portions of a distribution across multiple processors. Aggregates log_prob across devices, handles gradients in concert with tfp.experimental.distribute.JointDistribution*. Much more in the Distributed Inference notebook.
Step9: BatchBroadcast
Implicitly broadcast the batch dimensions of an underlying distribution with or to a given batch shape.
Step10: Masked
For single-program/multiple-data or sparse-as-masked-dense use-cases, a distribution that masks out the log_prob of invalid underlying distributions.
Step11: Bijectors
Bijectors
Add bijectors to mimic tf.nest.flatten (tfb.tree_flatten) and tf.nest.pack_sequence_as (tfb.pack_sequence_as).
Adds tfp.experimental.bijectors.Sharded
Remove deprecated tfb.ScaleTrilL. Use tfb.FillScaleTriL instead.
Adds cls.parameter_properties() annotations for Bijectors.
Extend range tfb.Power to all reals for odd integer powers.
Infer the log-deg-jacobian of scalar bijectors using autodiff, if not otherwise specified.
Restructuring bijectors
Step12: Sharded
SPMD reduction in log-determinant. See Sharded in Distributions, below.
Step13: VI
Adds build_split_flow_surrogate_posterior to tfp.experimental.vi to build structured VI surrogate posteriors from normalizing flows.
Adds build_affine_surrogate_posterior to tfp.experimental.vi for construction of ADVI surrogate posteriors from an event shape.
Adds build_affine_surrogate_posterior_from_base_distribution to tfp.experimental.vi to enable construction of ADVI surrogate posteriors with correlation structures induced by affine transformations.
VI/MAP/MLE
Added convenience method tfp.experimental.util.make_trainable(cls) to create trainable instances of distributions and bijectors.
Step14: MCMC
MCMC diagnostics support arbitrary structures of states, not just lists.
remc_thermodynamic_integrals added to tfp.experimental.mcmc
Adds tfp.experimental.mcmc.windowed_adaptive_hmc
Adds an experimental API for initializing a Markov chain from a near-zero uniform distribution in unconstrained space. tfp.experimental.mcmc.init_near_unconstrained_zero
Adds an experimental utility for retrying Markov Chain initialization until an acceptable point is found. tfp.experimental.mcmc.retry_init
Shuffling experimental streaming MCMC API to slot into tfp.mcmc with a minimum of disruption.
Adds ThinningKernel to experimental.mcmc.
Adds experimental.mcmc.run_kernel driver as a candidate streaming-based replacement to mcmc.sample_chain
init_near_unconstrained_zero, retry_init
Step15: Windowed adaptive HMC and NUTS samplers
Step16: Math, stats
Math/linalg
Add tfp.math.trapz for trapezoidal integration.
Add tfp.math.log_bessel_kve.
Add no_pivot_ldl to experimental.linalg.
Add marginal_fn argument to GaussianProcess (see no_pivot_ldl).
Added tfp.math.atan_difference(x, y)
Add tfp.math.erfcx, tfp.math.logerfc and tfp.math.logerfcx
Add tfp.math.dawsn for Dawson's Integral.
Add tfp.math.igammaincinv, tfp.math.igammacinv.
Add tfp.math.sqrt1pm1.
Add LogitNormal.stddev_approx and LogitNormal.variance_approx
Add tfp.math.owens_t for the Owen's T function.
Add bracket_root method to automatically initialize bounds for a root search.
Add Chandrupatla's method for finding roots of scalar functions.
Stats
tfp.stats.windowed_mean efficiently computes windowed means.
tfp.stats.windowed_variance efficiently and accurately computes windowed variances.
tfp.stats.cumulative_variance efficiently and accurately computes cumulative variances.
RunningCovariance and friends can now be initialized from an example Tensor, not just from explicit shape and dtype.
Cleaner API for RunningCentralMoments, RunningMean, RunningPotentialScaleReduction.
Owen's T, Erfcx, Logerfc, Logerfcx, Dawson functions
Step17: igammainv / igammacinv
Step18: log-kve
Step19: Other
STS
Speed up STS forecasting and decomposition using internal tf.function wrapping.
Add option to speed up filtering in LinearGaussianSSM when only the final step's results are required.
Variational Inference with joint distributions | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Installs & imports { vertical-output: true }
!pip3 install -qU tensorflow==2.5.0 tensorflow_probability==0.13.0 tensorflow-datasets inference_gym
import tensorflow as tf
import tensorflow_probability as tfp
assert '0.13' in tfp.__version__, tfp.__version__
assert '2.5' in tf.__version__, tf.__version__
physical_devices = tf.config.list_physical_devices('CPU')
tf.config.set_logical_device_configuration(
physical_devices[0],
[tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration()])
tfd = tfp.distributions
tfb = tfp.bijectors
tfpk = tfp.math.psd_kernels
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate
import IPython
import seaborn as sns
import logging
Explanation: TFP Release Notes notebook (0.13.0)
The intent of this notebook is to help TFP 0.13.0 "come to life" via some small snippets - little demos of things you can achieve with TFP.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/TFP_Release_Notebook_0_13_0"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TFP_Release_Notebook_0_13_0.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TFP_Release_Notebook_0_13_0.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/TFP_Release_Notebook_0_13_0.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
End of explanation
plt.hist(tfd.BetaQuotient(concentration1_numerator=5.,
concentration0_numerator=2.,
concentration1_denominator=3.,
concentration0_denominator=8.).sample(1_000, seed=(1, 23)),
bins='auto');
Explanation: Distributions [core math]
BetaQuotient
Ratio of two independent Beta-distributed random variables
End of explanation
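# Sanity check (not from the release notes): building the same ratio by hand
# from two independent Beta samples should give a very similar histogram.
x = tfd.Beta(concentration1=5., concentration0=2.).sample(1_000, seed=(2, 34))
y = tfd.Beta(concentration1=3., concentration0=8.).sample(1_000, seed=(3, 45))
plt.hist((x / y).numpy(), bins='auto', alpha=0.5);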
grid_size = 16
# Generate grid_size**2 pts on the unit square.
grid = np.arange(0, 1, 1./grid_size).astype(np.float32)
import itertools
points = np.array(list(itertools.product(grid, grid)))
# Create the kernel L that parameterizes the DPP.
kernel_amplitude = 2.
kernel_lengthscale = [.1, .15, .2, .25] # Increasing length scale indicates more points are "nearby", tending toward smaller subsets.
kernel = tfpk.ExponentiatedQuadratic(kernel_amplitude, kernel_lengthscale)
kernel_matrix = kernel.matrix(points, points)
eigenvalues, eigenvectors = tf.linalg.eigh(kernel_matrix)
dpp = tfd.DeterminantalPointProcess(eigenvalues, eigenvectors)
print(dpp)
# The inner-most dimension of the result of `dpp.sample` is a multi-hot
# encoding of a subset of {1, ..., ground_set_size}.
# We will compare against a bernoulli distribution.
samps_dpp = dpp.sample(seed=(1, 2)) # 4 x grid_size**2
logits = tf.broadcast_to([[-1.], [-1.5], [-2], [-2.5]], [4, grid_size**2])
samps_bern = tfd.Bernoulli(logits=logits).sample(seed=(2, 3))
plt.figure(figsize=(12, 6))
for i, (samp, samp_bern) in enumerate(zip(samps_dpp, samps_bern)):
plt.subplot(241 + i)
plt.scatter(*points[np.where(samp)].T)
plt.title(f'DPP, length scale={kernel_lengthscale[i]}')
plt.xticks([])
plt.yticks([])
plt.gca().set_aspect(1.)
plt.subplot(241 + i + 4)
plt.scatter(*points[np.where(samp_bern)].T)
plt.title(f'bernoulli, logit={logits[i,0]}')
plt.xticks([])
plt.yticks([])
plt.gca().set_aspect(1.)
plt.tight_layout()
plt.show()
Explanation: DeterminantalPointProcess
Distribution over subsets (represented as one-hot) of a given set. Samples follow a repulsivity property (probabilities are proportional to the volume spanned by vectors corresponding to the selected subset of points), which tends toward sampling diverse subsets. [Compare against i.i.d. Bernoulli samples.]
End of explanation
plt.hist(tfd.SigmoidBeta(concentration1=.01, concentration0=2.).sample(10_000, seed=(1, 23)),
bins='auto', density=True);
plt.show()
print('Old way, fractions non-finite:')
print(np.sum(~tf.math.is_finite(
tfb.Invert(tfb.Sigmoid())(tfd.Beta(concentration1=.01, concentration0=2.)).sample(10_000, seed=(1, 23)))) / 10_000)
print(np.sum(~tf.math.is_finite(
tfb.Invert(tfb.Sigmoid())(tfd.Beta(concentration1=2., concentration0=.01)).sample(10_000, seed=(2, 34)))) / 10_000)
Explanation: SigmoidBeta
Log-odds of two gamma distributions. More numerically stable sample space than Beta.
End of explanation
plt.hist(tfd.Zipf(3.).sample(1_000, seed=(12, 34)).numpy(), bins='auto', density=True, log=True);
Explanation: Zipf
Added JAX support.
End of explanation
# Initialize a single 2 x 3 Matrix Normal.
mu = [[1., 2, 3], [3., 4, 5]]
col_cov = [[ 0.36, 0.12, 0.06],
[ 0.12, 0.29, -0.13],
[ 0.06, -0.13, 0.26]]
scale_column = tf.linalg.LinearOperatorLowerTriangular(tf.linalg.cholesky(col_cov))
scale_row = tf.linalg.LinearOperatorDiag([0.9, 0.8])
mvn = tfd.MatrixNormalLinearOperator(loc=mu, scale_row=scale_row, scale_column=scale_column)
mvn.sample()
Explanation: NormalInverseGaussian
Flexible parametric family that supports heavy tails, skewed, and vanilla Normal.
MatrixNormalLinearOperator
Matrix Normal distribution.
End of explanation
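# NormalInverseGaussian has no snippet above; a minimal sketch (the parameter
# names loc/scale/tailweight/skewness are assumed from the TFP docs).
nig = tfd.NormalInverseGaussian(loc=0., scale=1., tailweight=1., skewness=0.5)
plt.hist(nig.sample(10_000, seed=(4, 56)).numpy(), bins='auto', density=True);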
mu = [[1., 2, 3], [3., 4, 5]]
col_cov = [[ 0.36, 0.12, 0.06],
[ 0.12, 0.29, -0.13],
[ 0.06, -0.13, 0.26]]
scale_column = tf.linalg.LinearOperatorLowerTriangular(tf.linalg.cholesky(col_cov))
scale_row = tf.linalg.LinearOperatorDiag([0.9, 0.8])
mvn = tfd.MatrixTLinearOperator(
df=2.,
loc=mu,
scale_row=scale_row,
scale_column=scale_column)
mvn.sample()
Explanation: MatrixStudentTLinearOperator
Matrix T distribution.
End of explanation
strategy = tf.distribute.MirroredStrategy()
@tf.function
def sample_and_lp(seed):
d = tfp.experimental.distribute.Sharded(tfd.Normal(0, 1))
s = d.sample(seed=seed)
return s, d.log_prob(s)
strategy.run(sample_and_lp, args=(tf.constant([12,34]),))
Explanation: Distributions [software / wrappers]
Sharded
Shards independent event portions of a distribution across multiple processors. Aggregates log_prob across devices, handles gradients in concert with tfp.experimental.distribute.JointDistribution*. Much more in the Distributed Inference notebook.
End of explanation
underlying = tfd.MultivariateNormalDiag(tf.zeros([7, 1, 5]), tf.ones([5]))
print('underlying:', underlying)
d = tfd.BatchBroadcast(underlying, [8, 1, 6])
print('broadcast [7, 1] *with* [8, 1, 6]:', d)
try:
tfd.BatchBroadcast(underlying, to_shape=[8, 1, 6])
except ValueError as e:
print('broadcast [7, 1] *to* [8, 1, 6] is invalid:', e)
d = tfd.BatchBroadcast(underlying, to_shape=[8, 7, 6])
print('broadcast [7, 1] *to* [8, 7, 6]:', d)
Explanation: BatchBroadcast
Implicitly broadcast the batch dimensions of an underlying distribution with or to a given batch shape.
End of explanation
d = tfd.Masked(tfd.Normal(tf.zeros([7]), 1),
validity_mask=tf.sequence_mask([3, 4], 7))
print(d.log_prob(d.sample(seed=(1, 1))))
d = tfd.Masked(tfd.Normal(0, 1),
validity_mask=[False, True, False],
safe_sample_fn=tfd.Distribution.mode)
print(d.log_prob(d.sample(seed=(2, 2))))
Explanation: Masked
For single-program/multiple-data or sparse-as-masked-dense use-cases, a distribution that masks out the log_prob of invalid underlying distributions.
End of explanation
ex = (tf.constant(1.), dict(b=tf.constant(2.), c=tf.constant(3.)))
b = tfb.tree_flatten(ex)
print(b.forward(ex))
print(b.inverse(list(tf.constant([1., 2, 3]))))
b = tfb.pack_sequence_as(ex)
print(b.forward(list(tf.constant([1., 2, 3]))))
print(b.inverse(ex))
Explanation: Bijectors
Bijectors
Add bijectors to mimic tf.nest.flatten (tfb.tree_flatten) and tf.nest.pack_sequence_as (tfb.pack_sequence_as).
Adds tfp.experimental.bijectors.Sharded
Remove deprecated tfb.ScaleTrilL. Use tfb.FillScaleTriL instead.
Adds cls.parameter_properties() annotations for Bijectors.
Extend range tfb.Power to all reals for odd integer powers.
Infer the log-deg-jacobian of scalar bijectors using autodiff, if not otherwise specified.
Restructuring bijectors
End of explanation
strategy = tf.distribute.MirroredStrategy()
def sample_lp_logdet(seed):
d = tfd.TransformedDistribution(tfp.experimental.distribute.Sharded(tfd.Normal(0, 1), shard_axis_name='i'),
tfp.experimental.bijectors.Sharded(tfb.Sigmoid(), shard_axis_name='i'))
s = d.sample(seed=seed)
return s, d.log_prob(s), d.bijector.inverse_log_det_jacobian(s)
strategy.run(sample_lp_logdet, (tf.constant([1, 2]),))
Explanation: Sharded
SPMD reduction in log-determinant. See Sharded in Distributions, below.
End of explanation
d = tfp.experimental.util.make_trainable(tfd.Gamma)
print(d.trainable_variables)
print(d)
Explanation: VI
Adds build_split_flow_surrogate_posterior to tfp.experimental.vi to build structured VI surrogate posteriors from normalizing flows.
Adds build_affine_surrogate_posterior to tfp.experimental.vi for construction of ADVI surrogate posteriors from an event shape.
Adds build_affine_surrogate_posterior_from_base_distribution to tfp.experimental.vi to enable construction of ADVI surrogate posteriors with correlation structures induced by affine transformations.
VI/MAP/MLE
Added convenience method tfp.experimental.util.make_trainable(cls) to create trainable instances of distributions and bijectors.
End of explanation
@tfd.JointDistributionCoroutine
def model():
Root = tfd.JointDistributionCoroutine.Root
c0 = yield Root(tfd.Gamma(2, 2, name='c0'))
c1 = yield Root(tfd.Gamma(2, 2, name='c1'))
counts = yield tfd.Sample(tfd.BetaBinomial(23, c1, c0), 10, name='counts')
jd = model.experimental_pin(counts=model.sample(seed=[20, 30]).counts)
init_dist = tfp.experimental.mcmc.init_near_unconstrained_zero(jd)
print(init_dist)
tfp.experimental.mcmc.retry_init(init_dist.sample, jd.unnormalized_log_prob)
Explanation: MCMC
MCMC diagnostics support arbitrary structures of states, not just lists.
remc_thermodynamic_integrals added to tfp.experimental.mcmc
Adds tfp.experimental.mcmc.windowed_adaptive_hmc
Adds an experimental API for initializing a Markov chain from a near-zero uniform distribution in unconstrained space. tfp.experimental.mcmc.init_near_unconstrained_zero
Adds an experimental utility for retrying Markov Chain initialization until an acceptable point is found. tfp.experimental.mcmc.retry_init
Shuffling experimental streaming MCMC API to slot into tfp.mcmc with a minimum of disruption.
Adds ThinningKernel to experimental.mcmc.
Adds experimental.mcmc.run_kernel driver as a candidate streaming-based replacement to mcmc.sample_chain
init_near_unconstrained_zero, retry_init
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
for i, n_evidence in enumerate((10, 250)):
ax[i].set_title(f'n evidence = {n_evidence}')
ax[i].set_xlim(0, 2.5); ax[i].set_ylim(0, 3.5)
@tfd.JointDistributionCoroutine
def model():
Root = tfd.JointDistributionCoroutine.Root
c0 = yield Root(tfd.Gamma(2, 2, name='c0'))
c1 = yield Root(tfd.Gamma(2, 2, name='c1'))
counts = yield tfd.Sample(tfd.BetaBinomial(23, c1, c0), n_evidence, name='counts')
s = model.sample(seed=[20, 30])
print(s)
jd = model.experimental_pin(counts=s.counts)
states, trace = tf.function(tfp.experimental.mcmc.windowed_adaptive_hmc)(
100, jd, num_leapfrog_steps=5, seed=[100, 200])
ax[i].scatter(states.c0.numpy().reshape(-1), states.c1.numpy().reshape(-1),
marker='+', alpha=.1)
ax[i].scatter(s.c0, s.c1, marker='+', color='r')
Explanation: Windowed adaptive HMC and NUTS samplers
End of explanation
# Owen's T gives the probability that X > h, 0 < Y < a * X. Let's check that
# with random sampling.
h = np.array([1., 2.]).astype(np.float32)
a = np.array([10., 11.5]).astype(np.float32)
probs = tfp.math.owens_t(h, a)
x = tfd.Normal(0., 1.).sample(int(1e5), seed=(6, 245)).numpy()
y = tfd.Normal(0., 1.).sample(int(1e5), seed=(7, 245)).numpy()
true_values = (
(x[..., np.newaxis] > h) &
(0. < y[..., np.newaxis]) &
(y[..., np.newaxis] < a * x[..., np.newaxis]))
print('Calculated values: {}'.format(
np.count_nonzero(true_values, axis=0) / 1e5))
print('Expected values: {}'.format(probs))
x = np.linspace(-3., 3., 100)
plt.plot(x, tfp.math.erfcx(x))
plt.ylabel('$erfcx(x)$')
plt.show()
plt.plot(x, tfp.math.logerfcx(x))
plt.ylabel('$logerfcx(x)$')
plt.show()
plt.plot(x, tfp.math.logerfc(x))
plt.ylabel('$logerfc(x)$')
plt.show()
plt.plot(x, tfp.math.dawsn(x))
plt.ylabel('$dawsn(x)$')
plt.show()
Explanation: Math, stats
Math/linalg
Add tfp.math.trapz for trapezoidal integration.
Add tfp.math.log_bessel_kve.
Add no_pivot_ldl to experimental.linalg.
Add marginal_fn argument to GaussianProcess (see no_pivot_ldl).
Added tfp.math.atan_difference(x, y)
Add tfp.math.erfcx, tfp.math.logerfc and tfp.math.logerfcx
Add tfp.math.dawsn for Dawson's Integral.
Add tfp.math.igammaincinv, tfp.math.igammacinv.
Add tfp.math.sqrt1pm1.
Add LogitNormal.stddev_approx and LogitNormal.variance_approx
Add tfp.math.owens_t for the Owen's T function.
Add bracket_root method to automatically initialize bounds for a root search.
Add Chandrupatla's method for finding roots of scalar functions.
Stats
tfp.stats.windowed_mean efficiently computes windowed means.
tfp.stats.windowed_variance efficiently and accurately computes windowed variances.
tfp.stats.cumulative_variance efficiently and accurately computes cumulative variances.
RunningCovariance and friends can now be initialized from an example Tensor, not just from explicit shape and dtype.
Cleaner API for RunningCentralMoments, RunningMean, RunningPotentialScaleReduction.
Owen's T, Erfcx, Logerfc, Logerfcx, Dawson functions
End of explanation
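# tfp.math.trapz has no snippet above; a minimal sketch, assuming the argument
# order (y, x). Integrating x**2 on [0, 1] should give roughly 1/3.
xs = tf.linspace(0., 1., 101)
print(tfp.math.trapz(xs**2, xs))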
# Igammainv and Igammacinv are inverses to Igamma and Igammac
x = np.linspace(1., 10., 10)
y = tf.math.igamma(0.3, x)
x_prime = tfp.math.igammainv(0.3, y)
print('x: {}'.format(x))
print('igammainv(igamma(a, x)):\n {}'.format(x_prime))
y = tf.math.igammac(0.3, x)
x_prime = tfp.math.igammacinv(0.3, y)
print('\n')
print('x: {}'.format(x))
print('igammacinv(igammac(a, x)):\n {}'.format(x_prime))
Explanation: igammainv / igammacinv
End of explanation
x = np.linspace(0., 5., 100)
for v in [0.5, 2., 3]:
plt.plot(x, tfp.math.log_bessel_kve(v, x).numpy())
plt.title('Log(BesselKve(v, x)')
Explanation: log-kve
End of explanation
plt.figure(figsize=(4, 4))
seed = tfp.random.sanitize_seed(123)
seed1, seed2 = tfp.random.split_seed(seed)
samps = tfp.random.spherical_uniform([30], dimension=2, seed=seed1)
plt.scatter(*samps.numpy().T, marker='+')
samps = tfp.random.spherical_uniform([30], dimension=2, seed=seed2)
plt.scatter(*samps.numpy().T, marker='+');
Explanation: Other
STS
Speed up STS forecasting and decomposition using internal tf.function wrapping.
Add option to speed up filtering in LinearGaussianSSM when only the final step's results are required.
Variational Inference with joint distributions: example notebook with the Radon model.
Add experimental support for transforming any distribution into a preconditioning bijector.
Adds tfp.random.sanitize_seed.
Adds tfp.random.spherical_uniform.
End of explanation |
6,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Uncertainty-aware Deep Language Learning with BERT-SNGP
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: This tutorial needs the GPU to run efficiently. Check if the GPU is available.
Step4: First implement a standard BERT classifier following the classify text with BERT tutorial. We will use the BERT-base encoder, and the built-in ClassificationHead as the classifier.
Step7: Build SNGP model
To implement a BERT-SNGP model, you only need to replace the ClassificationHead with the built-in GaussianProcessClassificationHead. Spectral normalization is already pre-packaged into this classification head. Like in the SNGP tutorial, add a covariance reset callback to the model, so the model automatically reset the covariance estimator at the begining of a new epoch to avoid counting the same data twice.
Step8: Note
Step9: Make the train and test data.
Step10: Create a OOD evaluation dataset. For this, combine the in-domain test data clinc_test and the out-of-domain data clinc_test_oos. We will also assign label 0 to the in-domain examples, and label 1 to the out-of-domain examples.
Step12: Train and evaluate
First set up the basic training configurations.
Step13: Evaluate OOD performance
Evaluate how well the model can detect the unfamiliar out-of-domain queries. For rigorous evaluation, use the OOD evaluation dataset ood_eval_dataset built earlier.
Step14: Computes the OOD probabilities as $1 - p(x)$, where $p(x)=softmax(logit(x))$ is the predictive probability.
Step15: Now evaluate how well the model's uncertainty score ood_probs predicts the out-of-domain label. First compute the Area under precision-recall curve (AUPRC) for OOD probability v.s. OOD detection accuracy.
Step16: This matches the SNGP performance reported at the CLINC OOS benchmark under the Uncertainty Baselines.
Next, examine the model's quality in uncertainty calibration, i.e., whether the model's predictive probability corresponds to its predictive accuracy. A well-calibrated model is considered trust-worthy, since, for example, its predictive probability $p(x)=0.8$ means that the model is correct 80% of the time. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!pip uninstall -y tensorflow tf-text
!pip install -U tensorflow-text-nightly
!pip install -U tf-nightly
!pip install -U tf-models-nightly
import matplotlib.pyplot as plt
import sklearn.metrics
import sklearn.calibration
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import numpy as np
import tensorflow as tf
import official.nlp.modeling.layers as layers
import official.nlp.optimization as optimization
Explanation: Uncertainty-aware Deep Language Learning with BERT-SNGP
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/uncertainty_quantification_with_sngp_bert"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/uncertainty_quantification_with_sngp_bert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/uncertainty_quantification_with_sngp_bert.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/uncertainty_quantification_with_sngp_bert.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
In the SNGP tutorial, you learned how to build an SNGP model on top of a deep residual network to improve its ability to quantify its uncertainty. In this tutorial, you will apply SNGP to a natural language understanding (NLU) task by building it on top of a deep BERT encoder to improve a deep NLU model's ability to detect out-of-scope queries.
Specifically, you will:
* Build BERT-SNGP, a SNGP-augmented BERT model.
* Load the CLINC Out-of-scope (OOS) intent detection dataset.
* Train the BERT-SNGP model.
* Evaluate the BERT-SNGP model's performance in uncertainty calibration and out-of-domain detection.
Beyond CLINC OOS, the SNGP model has been applied to large-scale datasets such as Jigsaw toxicity detection, and to the image datasets such as CIFAR-100 and ImageNet.
For benchmark results of SNGP and other uncertainty methods, as well as high-quality implementation with end-to-end training / evaluation scripts, you can check out the Uncertainty Baselines benchmark.
Setup
End of explanation
tf.__version__
gpus = tf.config.list_physical_devices('GPU')
gpus
assert gpus, """
No GPU(s) found! This tutorial will take many hours to run without a GPU.
You may hit this error if the installed tensorflow package is not
compatible with the CUDA and CUDNN versions.
"""
Explanation: This tutorial needs the GPU to run efficiently. Check if the GPU is available.
End of explanation
#@title Standard BERT model
PREPROCESS_HANDLE = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
MODEL_HANDLE = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3'
class BertClassifier(tf.keras.Model):
def __init__(self,
num_classes=150, inner_dim=768, dropout_rate=0.1,
**classifier_kwargs):
super().__init__()
self.classifier_kwargs = classifier_kwargs
# Initiate the BERT encoder components.
self.bert_preprocessor = hub.KerasLayer(PREPROCESS_HANDLE, name='preprocessing')
self.bert_hidden_layer = hub.KerasLayer(MODEL_HANDLE, trainable=True, name='bert_encoder')
# Defines the encoder and classification layers.
self.bert_encoder = self.make_bert_encoder()
self.classifier = self.make_classification_head(num_classes, inner_dim, dropout_rate)
def make_bert_encoder(self):
text_inputs = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
encoder_inputs = self.bert_preprocessor(text_inputs)
encoder_outputs = self.bert_hidden_layer(encoder_inputs)
return tf.keras.Model(text_inputs, encoder_outputs)
def make_classification_head(self, num_classes, inner_dim, dropout_rate):
return layers.ClassificationHead(
num_classes=num_classes,
inner_dim=inner_dim,
dropout_rate=dropout_rate,
**self.classifier_kwargs)
def call(self, inputs, **kwargs):
encoder_outputs = self.bert_encoder(inputs)
classifier_inputs = encoder_outputs['sequence_output']
return self.classifier(classifier_inputs, **kwargs)
Explanation: First implement a standard BERT classifier following the classify text with BERT tutorial. We will use the BERT-base encoder, and the built-in ClassificationHead as the classifier.
End of explanation
class ResetCovarianceCallback(tf.keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs=None):
    """Resets covariance matrix at the beginning of the epoch."""
if epoch > 0:
self.model.classifier.reset_covariance_matrix()
class SNGPBertClassifier(BertClassifier):
def make_classification_head(self, num_classes, inner_dim, dropout_rate):
return layers.GaussianProcessClassificationHead(
num_classes=num_classes,
inner_dim=inner_dim,
dropout_rate=dropout_rate,
gp_cov_momentum=-1,
temperature=30.,
**self.classifier_kwargs)
def fit(self, *args, **kwargs):
    """Adds ResetCovarianceCallback to model callbacks."""
kwargs['callbacks'] = list(kwargs.get('callbacks', []))
kwargs['callbacks'].append(ResetCovarianceCallback())
return super().fit(*args, **kwargs)
Explanation: Build SNGP model
To implement a BERT-SNGP model, you only need to replace the ClassificationHead with the built-in GaussianProcessClassificationHead. Spectral normalization is already pre-packaged into this classification head. Like in the SNGP tutorial, add a covariance reset callback to the model, so the model automatically reset the covariance estimator at the begining of a new epoch to avoid counting the same data twice.
End of explanation
(clinc_train, clinc_test, clinc_test_oos), ds_info = tfds.load(
'clinc_oos', split=['train', 'test', 'test_oos'], with_info=True, batch_size=-1)
Explanation: Note: The GaussianProcessClassificationHead takes a new argument temperature. It corresponds to the $\lambda$ parameter in the mean-field approximation introduced in the SNGP tutorial. In practice, this value is usually treated as a hyperparameter, and is finetuned to optimize the model's calibration performance.
Load CLINC OOS dataset
Now load the CLINC OOS intent detection dataset. This dataset contains 15000 users' spoken queries collected over 150 intent classes; it also contains 1000 out-of-domain (OOD) sentences that are not covered by any of the known classes.
End of explanation
train_examples = clinc_train['text']
train_labels = clinc_train['intent']
# Makes the in-domain (IND) evaluation data.
ind_eval_data = (clinc_test['text'], clinc_test['intent'])
Explanation: Make the train and test data.
End of explanation
test_data_size = ds_info.splits['test'].num_examples
oos_data_size = ds_info.splits['test_oos'].num_examples
# Combines the in-domain and out-of-domain test examples.
oos_texts = tf.concat([clinc_test['text'], clinc_test_oos['text']], axis=0)
oos_labels = tf.constant([0] * test_data_size + [1] * oos_data_size)
# Converts into a TF dataset.
ood_eval_dataset = tf.data.Dataset.from_tensor_slices(
{"text": oos_texts, "label": oos_labels})
Explanation: Create a OOD evaluation dataset. For this, combine the in-domain test data clinc_test and the out-of-domain data clinc_test_oos. We will also assign label 0 to the in-domain examples, and label 1 to the out-of-domain examples.
End of explanation
TRAIN_EPOCHS = 3
TRAIN_BATCH_SIZE = 32
EVAL_BATCH_SIZE = 256
#@title
def bert_optimizer(learning_rate,
batch_size=TRAIN_BATCH_SIZE, epochs=TRAIN_EPOCHS,
warmup_rate=0.1):
  """Creates an AdamWeightDecay optimizer with learning rate schedule."""
train_data_size = ds_info.splits['train'].num_examples
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(warmup_rate * num_train_steps)
# Creates learning schedule.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=learning_rate,
decay_steps=num_train_steps,
end_learning_rate=0.0)
return optimization.AdamWeightDecay(
learning_rate=lr_schedule,
weight_decay_rate=0.01,
epsilon=1e-6,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
optimizer = bert_optimizer(learning_rate=1e-4)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = tf.metrics.SparseCategoricalAccuracy()
fit_configs = dict(batch_size=TRAIN_BATCH_SIZE,
epochs=TRAIN_EPOCHS,
validation_batch_size=EVAL_BATCH_SIZE,
validation_data=ind_eval_data)
sngp_model = SNGPBertClassifier()
sngp_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
sngp_model.fit(train_examples, train_labels, **fit_configs)
Explanation: Train and evaluate
First set up the basic training configurations.
End of explanation
#@title
def oos_predict(model, ood_eval_dataset, **model_kwargs):
oos_labels = []
oos_probs = []
ood_eval_dataset = ood_eval_dataset.batch(EVAL_BATCH_SIZE)
for oos_batch in ood_eval_dataset:
oos_text_batch = oos_batch["text"]
oos_label_batch = oos_batch["label"]
pred_logits = model(oos_text_batch, **model_kwargs)
pred_probs_all = tf.nn.softmax(pred_logits, axis=-1)
pred_probs = tf.reduce_max(pred_probs_all, axis=-1)
oos_labels.append(oos_label_batch)
oos_probs.append(pred_probs)
oos_probs = tf.concat(oos_probs, axis=0)
oos_labels = tf.concat(oos_labels, axis=0)
return oos_probs, oos_labels
Explanation: Evaluate OOD performance
Evaluate how well the model can detect the unfamiliar out-of-domain queries. For rigorous evaluation, use the OOD evaluation dataset ood_eval_dataset built earlier.
End of explanation
sngp_probs, ood_labels = oos_predict(sngp_model, ood_eval_dataset)
ood_probs = 1 - sngp_probs
Explanation: Computes the OOD probabilities as $1 - p(x)$, where $p(x)=softmax(logit(x))$ is the predictive probability.
End of explanation
precision, recall, _ = sklearn.metrics.precision_recall_curve(ood_labels, ood_probs)
auprc = sklearn.metrics.auc(recall, precision)
print(f'SNGP AUPRC: {auprc:.4f}')
Explanation: Now evaluate how well the model's uncertainty score ood_probs predicts the out-of-domain label. First compute the Area under precision-recall curve (AUPRC) for OOD probability v.s. OOD detection accuracy.
End of explanation
prob_true, prob_pred = sklearn.calibration.calibration_curve(
ood_labels, ood_probs, n_bins=10, strategy='quantile')
plt.plot(prob_pred, prob_true)
plt.plot([0., 1.], [0., 1.], c='k', linestyle="--")
plt.xlabel('Predictive Probability')
plt.ylabel('Predictive Accuracy')
plt.title('Calibration Plots, SNGP')
plt.show()
Explanation: This matches the SNGP performance reported at the CLINC OOS benchmark under the Uncertainty Baselines.
Next, examine the model's quality in uncertainty calibration, i.e., whether the model's predictive probability corresponds to its predictive accuracy. A well-calibrated model is considered trust-worthy, since, for example, its predictive probability $p(x)=0.8$ means that the model is correct 80% of the time.
End of explanation |
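# A rough single-number summary of the curve above (a simple unweighted
# average gap between accuracy and confidence across the quantile bins).
print('Mean |accuracy - confidence| across bins: '
      f'{np.mean(np.abs(prob_true - prob_pred)):.4f}')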
6,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class Coding Lab
Step1: Types Matter
Python's built in functions and operators work differently depending on the type of the variable.
Step2: Switching Types
there are built-in Python functions for switching types. For example
Step3: Inputs type str
When you use the input() function the result is of type str
Step4: We can use a built in Python function to convert the type from str to our desired type
Step5: We typically combine the first two lines into one expression like this
Step6: 1.1 You Code
Step7: Format Codes
Python has some string format codes which allow us to control the output of our variables.
%s = format variable as str
%d = format variable as int
%f = format variable as float
You can also include the number of spaces to use for example %5.2f prints a float with 5 spaces 2 to the right of the decimal point.
Step8: Formatting with F-Strings
The other method of formatting data in Python is F-strings. As we saw in the last lab, F-strings use interpolation to specify the variables we would like to print in-line with the print string.
You can format an f-string
{var
Step9: 1.2 You Code
Re-write the program from (1.1 You Code) so that the print statement uses format codes. Remember
Step10: 1.3 You Code
Use F-strings or format codes to Print the PI variable out 3 times.
Once as a string,
Once as an int, and
Once as a float to 4 decimal places.
Step11: Putting it all together
Step12: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==-- | Python Code:
a = "4"
type(a) # should be str
a = 4
type(a) # should be int
Explanation: Class Coding Lab: Variables And Types
The goals of this lab are to help you to understand:
Python data types
Getting input as different types
Formatting output as different types
Basic arithmetic operators
How to create a program from an idea.
Variable Types
Every Python variable has a type. The Type determines how the data is stored in the computer's memory:
End of explanation
a = 4
b = 5
a + b # the plus + in this case means add, so 9
a = "4"
b = "5"
a + b # the plus + in this case means concatenation, so '45'
Explanation: Types Matter
Python's built-in functions and operators work differently depending on the type of the variable:
End of explanation
x = "45" # x is a str
y = int(x) # y is now an int
z = float(x) # z is a float
print(x,y,z)
Explanation: Switching Types
there are built-in Python functions for switching types. For example:
End of explanation
age = input("Enter your age: ")
type(age)
Explanation: Inputs type str
When you use the input() function the result is of type str:
End of explanation
age = input("Enter your age: ")
age = int(age)
type(age)
Explanation: We can use a built in Python function to convert the type from str to our desired type:
End of explanation
age = int(input("Enter your age: "))
type(age)
Explanation: We typically combine the first two lines into one expression like this:
End of explanation
# TODO: Debug this code
age = input("Enter your age: ")
nextage = age + 1
print("Today you are age next year you will be {nextage}")
Explanation: 1.1 You Code: Debugging
The following program has errors in it. Your task is to fix the errors so that:
your age can be input and converted to an integer.
the program outputs your age and your age next year.
For example:
Enter your age: 45
Today you are 45 next year you will be 46
End of explanation
name = "Mike"
age = 45
gpa = 3.4
print("%s is %d years old. His gpa is %.3f" % (name, age,gpa))
Explanation: Format Codes
Python has some string format codes which allow us to control the output of our variables.
%s = format variable as str
%d = format variable as int
%f = format variable as float
You can also include the number of spaces to use for example %5.2f prints a float with 5 spaces 2 to the right of the decimal point.
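For instance, a small illustrative sketch of the width-and-precision form (the variable name here is just an example):
```
price = 3.14159
print("Price: %5.2f" % price)   # prints 'Price:  3.14' -- 5 characters wide, 2 after the decimal point
```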
End of explanation
name ="Mike"
wage = 15
print(f"{name} makes ${wage:.2f} per hour")
Explanation: Formatting with F-Strings
The other method of formatting data in Python is F-strings. As we saw in the last lab, F-strings use interpolation to specify the variables we would like to print in-line with the print string.
You can format an f-string
{var:d} formats var as integer
{var:f} formats var as float
{var:.3f} formats var as float to 3 decimal places.
Example:
End of explanation
# TODO: Write code here
Explanation: 1.2 You Code
Re-write the program from (1.1 You Code) so that the print statement uses format codes. Remember: do not copy code, as practice, re-write it.
End of explanation
#TODO: Write Code Here
Explanation: 1.3 You Code
Use F-strings or format codes to Print the PI variable out 3 times.
Once as a string,
Once as an int, and
Once as a float to 4 decimal places.
End of explanation
# TODO: Write your code here
Explanation: Putting it all together: Fred's Fence Estimator
Fred's Fence has hired you to write a program to estimate the cost of their fencing projects. For a given length and width you will calculate the number of 6 foot fence sections, and the total cost of the project. Each fence section costs $23.95. Assume the posts and labor are free.
Program Inputs:
Length of yard in feet
Width of yard in feet
Program Outputs:
Perimeter of yard ( 2 x (Length + Width))
Number of fence sections required (Perimeter divided by 6)
Total cost for fence ( fence sections multiplied by $23.95 )
NOTE: All outputs should be formatted to 2 decimal places: e.g. 123.05
```
TODO:
1. Input length of yard as float, assign to a variable
2. Input Width of yard as float, assign to a variable
3. Calculate perimeter of yard, assign to a variable
4. calculate number of fence sections, assign to a variable
5. calculate total cost, assign to variable
6. print perimeter of yard
7. print number of fence sections
8. print total cost for fence.
```
1.4 You Code
Based on the provided TODO, write the program in python in the cell below. Your solution should have 8 lines of code, one for each TODO.
HINT: Don't try to write the program in one sitting. Instead write a line of code, run it, verify it works and fix any issues with it before writing the next line of code.
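One possible solution sketch (the variable names are only suggestions, and your own version may differ):
```
length = float(input("Enter the length of the yard in feet: "))   # TODO 1
width = float(input("Enter the width of the yard in feet: "))     # TODO 2
perimeter = 2 * (length + width)                                  # TODO 3
sections = perimeter / 6                                          # TODO 4
cost = sections * 23.95                                           # TODO 5
print(f"Perimeter of yard: {perimeter:.2f} feet")                 # TODO 6
print(f"Number of fence sections: {sections:.2f}")                # TODO 7
print(f"Total cost for fence: ${cost:.2f}")                       # TODO 8
```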
End of explanation
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
Explanation: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==--
End of explanation |
6,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting histograms from the "Z path" analysis
Initial set-up
These commands give us access to some tools for plotting histograms and other graphs. We only need to run these once at the beginning of the notebook.
Step1: We will use the location of the data file in many places, but we can use a variable to store the location so we only have to change one line to use a different file
Step2: Plotting the invariant masses as a histogram
This command opens the file. You may have to edit it to reflect the actual name of the file containing your results
Step3: Then we can read the data from the file and store it in a list in the memory of the computer. If we want to rerun this cell we will also have to rerun the command above to open the file again.
Step4: Print the list of masses, just to make sure it looks sensible
Step5: Then we can create a histogram of the masses and display it.
Step6: To present our data better, there are various tricks we can try. See Basic Data Plotting with Matplotlib Part 3
Step7: We can plot all these histograms on the same axes, using different colours.
Step8: To present our data better, there are various tricks we can try. See Basic Data Plotting with Matplotlib Part 3
Step9: So all the data is now in the dictionary called channels. What does this look like?
Step10: For each channel in the input data file, we can see the corresponding channel name (e.g. e) followed by a colon (
Step11: Let's use a loop over all the channels, creating a separate histogram for each one | Python Code:
import pylab
import matplotlib.pyplot as plt
%matplotlib inline
pylab.rcParams['figure.figsize'] = 8,6
Explanation: Plotting histograms from the "Z path" analysis
Initial set-up
These commands give us access to some tools for plotting histograms and other graphs. We only need to run these once at the beginning of the notebook.
End of explanation
filename = '../data/Invariant_Masses.txt'
Explanation: We will use the location of the data file in many places, but we can use a variable to store the location so we only have to change one line to use a different file:
End of explanation
data_file = open(filename)
Explanation: Plotting the invariant masses as a histogram
This command opens the file. You may have to edit it to reflect the actual name of the file containing your results:
End of explanation
masses = [] # Create an empty list for masses from e+e- channel
for line in data_file: # Loop over each line in the file
mass, channel = line.split() # Each line contains a mass (in GeV) and a "channel" (m for mu+mu-, etc.)
m = float(mass) # The mass is read in as a string, so we convert it to a (floating point) number ...
masses.append(m) # ... before adding it to the list.
Explanation: Then we can read the data from the file and store it in a list in the memory of the computer. If we want to rerun this cell we will also have to rerun the command above to open the file again.
End of explanation
print(masses)
Explanation: Print the list of masses, just to make sure it looks sensible:
End of explanation
plt.hist(masses, bins=50, range=(0,200))
plt.xlabel('Mass [GeV]')
Explanation: Then we can create a histogram of the masses and display it.
End of explanation
data_file = open(filename) # We need to open the file again
masses_e = [] # Create an empty list for masses from e+e- channel
masses_m = [] # Create an empty list for masses from mu+mu- channel
masses_misc = [] # Create an empty list for masses from all other channels
for line in data_file: # Loop over each line in the file
mass, channel = line.split() # Each line contains a mass (in GeV) and a "channel" (m for mu+mu-, etc.)
m = float(mass)
if channel=='e':
masses_e.append(m)
elif channel=='m':
masses_m.append(m)
else:
masses_misc.append(m)
Explanation: To present our data better, there are various tricks we can try. See Basic Data Plotting with Matplotlib Part 3: Histograms for some possibilities, or the Pyplot tutorial for many more possibilities and examples.
Plotting the channels separately
This histogram above includes all the invariant masses in the data file, without showing which come from $e^+e^-$ events, which from $\mu^+\mu^-$ events, and so on. We might want to display the channels separately, which means keeping a list of data for each channel. Initially let's just plot the $e^+e^-$ and $\mu^+\mu^-$ events, and then lump everything else together as "miscellaneous".
End of explanation
plt.hist(masses_e, bins=50, range=(0,200), color='b')
plt.hist(masses_m, bins=50, range=(0,200), color='r', alpha=0.5)
plt.hist(masses_misc, bins=50, range=(0,200), color='g', alpha=0.2)
plt.xlabel('Mass [GeV]')
Explanation: We can plot all these histograms on the same axes, using different colours.
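An alternative sketch, assuming the same three lists: matplotlib's hist can also take a list of datasets and stack them in a single call, which avoids tuning the alpha values by hand:
```
plt.hist([masses_e, masses_m, masses_misc], bins=50, range=(0, 200),
         stacked=True, label=['e+e-', 'mu+mu-', 'other'])
plt.legend()
plt.xlabel('Mass [GeV]')
```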
End of explanation
data_file = open(filename) # We need to open the file again
channels = {} # Create an empty "dictionary"
for line in data_file: # Reading in the data from each line ...
mass, channel = line.split() #
m = float(mass) # ... works the same as before.
if channel in channels: # If this channel is already in our dictionary ...
channel_data = channels[channel] # ... we find the existing list of data for this channel.
else: # If this channel is *not* already in the dictionary ...
channel_data = [] # ... we create a new, empty list for this channel ...
channels[channel] = channel_data # ... and add it to the dictionary.
channel_data.append(m) # Either way, we add the invariant mass to the list for this channel.
Explanation: To present our data better, there are various tricks we can try. See Basic Data Plotting with Matplotlib Part 3: Histograms for some possibilities.
Reading the channels from the input file
If there are many different channels in the data, it can be tedious and error-prone to write a separate "elif" clause for each one. In other cases we might not even know what channels are in the data file. So we may want to take this information from the data file itself.
We can do this using a Python container object called a dictionary. In this dictionary we will store a list of data for each channel, such that we can easily find the list of data corresponding to a given channel.
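As an aside, the same bookkeeping can be written more compactly with the dictionary's setdefault method — a sketch of just the inner loop:
```
for line in data_file:
    mass, channel = line.split()
    channels.setdefault(channel, []).append(float(mass))
```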
End of explanation
print(channels)
Explanation: So all the data is now in the dictionary called channels. What does this look like?
End of explanation
print(channels['g'])
Explanation: For each channel in the input data file, we can see the corresponding channel name (e.g. e) followed by a colon (:) and a list containing the invariant masses for that channel. If we want to look at just one of these lists, we can extract it from the dictionary using the channel name as a key:
End of explanation
pylab.rcParams['figure.figsize'] = 12,2
plot_number = 0
for channel in channels:
plot_number += 1
plt.figure(plot_number)
masses = channels[channel]
plt.hist(masses, bins=50, range=(0,200), label=channel)
plt.legend()
plt.xlabel('invariant mass / GeV')
Explanation: Let's use a loop over all the channels, creating a separate histogram for each one:
End of explanation |
6,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Airbnb User Data Exploration
Step1: I wanted to take a look at the user data we have for this competition, so I made this little notebook to share my findings and discuss them. At the moment I've started with the basic user data; I'll take a look at sessions and the other csv files later this month.
Please feel free to comment with anything you think can be improved or fixed. I am not a professional in this field and there will be mistakes or things that can be improved. This is the flow I took, and some of the plots are not really interesting, but I kept them in case someone sees something interesting.
Let's see the data!
Data Exploration
Generally, when I start with a Data Science project I'm looking to answer the following questions
Step2: It's usually a good practice to know the size of the data with which you are working
Step3: Let's get those together so we can work with all the data.
Step4: The data seems to be in a usable format, so the next important thing is to take a look at the missing data.
Missing Data
Usually the missing data comes in the form of NaN, but if we take a look at the DataFrame printed above we can see that the gender column has some values of -unknown-. We will need to transform those values into NaN first
Step5: Now let's see how much data we are missing. For this purpose let's compute the NaN percentage of each feature.
Step6: We have quite a lot of NaN in the age and gender columns, which will degrade the performance of the classifiers we will build. The feature date_first_booking is 67% NaN because it is not present for the test users, and therefore we won't need it in the modeling part.
Step7: The other feature with a high rate of NaN was age. Let's see
Step8: There is some inconsistency in the age of some users, as we can see above. It could be because the age input field was not sanitized, or because there were some mistakes handling the data.
Step9: So far, do we have 830 users with the longest confirmed human lifespan record and 188 little gangsters breaking the Airbnb Eligibility Terms?
Step10: It seems that the weird values are caused by the appearance of 2014. I haven't figured out why, but I suppose it might be related to a wrong input being added with the new users.
Step11: The young users seem to be within an acceptable range, with 50% of those users above 16 years old.
We will need to handle the outliers. The simplest thing that came to my mind is to set an acceptance range and set the values outside it to NaN.
Step12: Data Types
Let's treat each feature as what it is. This means we need to transform into categorical those features that we treat as categories, and do the same with the dates
Step13: Visualizing the Data
Usually, looking at tables, percentiles, means, and several other measures at this stage is rarely useful unless you know your data very well.
For me, it's usually better to visualize the data in some way. Visualization makes me see the outliers and errors immediately!
Gender
Step14: As we've seen before, in this plot we can see the amount of missing data in perspective. Also, notice that there is a slight difference between user genders.
Next, it might be interesting to see whether there are any gender preferences when travelling
Step15: There are no big differences between the two main genders, so this plot is not really useful except to show the relative destination frequency of the countries. Let's see that more clearly here
Step16: The first thing we can see is that if there is a reservation, it's likely to be inside the US. But 45% of the users never made a reservation.
Age
Now that I know there is no difference between male and female reservations at first sight, I'll dig into the age.
Step17: As expected, the common age to travel is between 25 and 40. Let's see if, for example, older people travel in a different way. Let's pick an arbitrary age to split into two groups. Maybe 45?
Step18: We can see that the young people tend to stay in the US, while the older people choose to travel outside the country. Of course, the differences between them are not big, and we must remember that we are missing 42% of the ages.
The first thing I thought when reading the problem was the importance of the native language when choosing the destination country. So let's see how many users use English as their main language
Step19: With 96% of users using English as their language, it is understandable that a lot of people stay in the US. Someone may be thinking: if the language is important, why not travel to GB? We need to remember that there are also a lot of factors we are not accounting for, so making assumptions or predictions like that might be dangerous.
Dates
To see the dates of our users and the timespan they cover, let's plot the number of accounts created over time
Step20: It's remarkable how fast Airbnb has grown over the last 3 years. Does this correlate with the date when the user was active for the first time? It should be very similar, so doing this is a way to check the data!
Step21: We can see that it's almost the same as date_account_created, and also notice the small peaks. We can either smooth the graph or dig into those peaks. Let's dig in
Step22: At first sight we can see a small pattern: there are peaks at regular intervals. Looking more closely
Step23: The local minima are on Sundays (when people use the Internet less), and the maximum usually falls on Tuesdays!
The last date-related plot I want to see is the next one
Step24: It's a clean comparison of usual destinations then and now, where we can see how the newer users register more and book less, and when they do book, they stay in the US.
Affiliate Information | Python Code:
# Draw inline
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
# Set figure aesthetics
sns.set_style("white", {'ytick.major.size': 10.0})
sns.set_context("poster", font_scale=1.1)
Explanation: Airbnb User Data Exploration
End of explanation
# Load the data into DataFrames
path = '../data/'
train_users = pd.read_csv(path + 'train_users.csv')
test_users = pd.read_csv(path + 'test_users.csv')
Explanation: I wanted to take a look at the user data we have for this competition, so I made this little notebook to share my findings and discuss them. At the moment I've started with the basic user data; I'll take a look at sessions and the other csv files later this month.
Please feel free to comment with anything you think can be improved or fixed. I am not a professional in this field and there will be mistakes or things that can be improved. This is the flow I took, and some of the plots are not really interesting, but I kept them in case someone sees something interesting.
Let's see the data!
Data Exploration
Generally, when I start with a Data Science project I'm looking to answer the following questions:
- Are there any mistakes in the data?
- Does the data have peculiar behavior?
- Do I need to fix or remove any of the data to be more realistic?
End of explanation
print("We have", train_users.shape[0], "users in the training set and",
test_users.shape[0], "in the test set.")
print("In total we have", train_users.shape[0] + test_users.shape[0], "users.")
Explanation: It's usually a good practice to know the size of the data with which you are working:
End of explanation
# Merge train and test users
users = pd.concat((train_users, test_users), axis=0, ignore_index=True)
# Remove ID's since now we are not interested in making predictions
users.drop('id', axis=1, inplace=True)
users.head()
Explanation: Let's get those together so we can work with all the data.
End of explanation
users.gender.replace('-unknown-', np.nan, inplace=True)
users.first_browser.replace('-unknown-', np.nan, inplace=True)
Explanation: The data seems to be in a usable format, so the next important thing is to take a look at the missing data.
Missing Data
Usually the missing data comes in the form of NaN, but if we take a look at the DataFrame printed above we can see that the gender column has some values of -unknown-. We will need to transform those values into NaN first:
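If several columns share the same placeholder, the whole DataFrame can also be cleaned in one call — a sketch; note that this touches every column, not just the two above:
```
users.replace('-unknown-', np.nan, inplace=True)
```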
End of explanation
users_nan = (users.isnull().sum() / users.shape[0]) * 100
users_nan[users_nan > 0].drop('country_destination')
Explanation: Now let's see how much data we are missing. For this purpose let's compute the NaN percentage of each feature.
End of explanation
print("Just for the sake of curiosity; we have",
int((train_users.date_first_booking.isnull().sum() / train_users.shape[0]) * 100),
"% of missing values at date_first_booking in the training data")
Explanation: We have quite a lot of NaN in the age and gender columns, which will degrade the performance of the classifiers we will build. The feature date_first_booking is 67% NaN because it is not present for the test users, and therefore we won't need it in the modeling part.
End of explanation
users.age.describe()
Explanation: The other feature with a high rate of NaN was age. Let's see:
End of explanation
print(sum(users.age > 122))
print(sum(users.age < 18))
Explanation: There is some inconsistency in the age of some users, as we can see above. It could be because the age input field was not sanitized, or because there were some mistakes handling the data.
End of explanation
users[users.age > 122]['age'].describe()
Explanation: So far, do we have 830 users with the longest confirmed human lifespan record and 188 little gangsters breaking the Airbnb Eligibility Terms?
End of explanation
users[users.age < 18]['age'].describe()
Explanation: It seems that the weird values are caused by the appearance of 2014. I haven't figured out why, but I suppose it might be related to a wrong input being added with the new users.
End of explanation
users.loc[users.age > 95, 'age'] = np.nan
users.loc[users.age < 13, 'age'] = np.nan
Explanation: The young users seem to be within an acceptable range, with 50% of those users above 16 years old.
We will need to handle the outliers. The simplest thing that came to my mind is to set an acceptance range and set the values outside it to NaN.
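An equivalent way to express the same filter, as a sketch, is a single boolean mask built with pandas' between (ages that are already missing stay missing either way):
```
users.loc[~users.age.between(13, 95), 'age'] = np.nan
```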
End of explanation
categorical_features = [
'affiliate_channel',
'affiliate_provider',
'country_destination',
'first_affiliate_tracked',
'first_browser',
'first_device_type',
'gender',
'language',
'signup_app',
'signup_method'
]
for categorical_feature in categorical_features:
users[categorical_feature] = users[categorical_feature].astype('category')
users['date_account_created'] = pd.to_datetime(users['date_account_created'])
users['date_first_booking'] = pd.to_datetime(users['date_first_booking'])
users['date_first_active'] = pd.to_datetime(users['timestamp_first_active'], format='%Y%m%d%H%M%S')
Explanation: Data Types
Let's treat each feature as what it is. This means we need to transform into categorical those features that we treat as categories, and do the same with the dates:
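A quick sanity check on the conversions — just a sketch — is to look at the resulting dtypes:
```
# categorical columns should now report 'category', the date columns 'datetime64[ns]'
users.dtypes
```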
End of explanation
users.gender.value_counts(dropna=False).plot(kind='bar', color='#FD5C64', rot=0)
plt.xlabel('Gender')
sns.despine()
Explanation: Visualizing the Data
Usually, looking at tables, percentiles, means, and several other measures at this stage is rarely useful unless you know your data very well.
For me, it's usually better to visualize the data in some way. Visualization makes me see the outliers and errors immediately!
Gender
End of explanation
women = sum(users['gender'] == 'FEMALE')
men = sum(users['gender'] == 'MALE')
female_destinations = users.loc[users['gender'] == 'FEMALE', 'country_destination'].value_counts() / women * 100
male_destinations = users.loc[users['gender'] == 'MALE', 'country_destination'].value_counts() / men * 100
# Bar width
width = 0.4
male_destinations.plot(kind='bar', width=width, color='#4DD3C9', position=0, label='Male', rot=0)
female_destinations.plot(kind='bar', width=width, color='#FFA35D', position=1, label='Female', rot=0)
plt.legend()
plt.xlabel('Destination Country')
plt.ylabel('Percentage')
sns.despine()
plt.show()
Explanation: As we've seen before, in this plot we can see the amount of missing data in perspective. Also, notice that there is a slight difference between user genders.
Next, it might be interesting to see whether there are any gender preferences when travelling:
End of explanation
sns.countplot(x="country_destination", data=users, order=list(users.country_destination.value_counts().keys()))
plt.xlabel('Destination Country')
plt.ylabel('Percentage')
sns.despine()
Explanation: There are no big differences between the two main genders, so this plot is not really useful except to show the relative destination frequency of the countries. Let's see that more clearly here:
End of explanation
sns.distplot(users.age.dropna(), color='#FD5C64')
plt.xlabel('Age')
sns.despine()
Explanation: The first thing we can see is that if there is a reservation, it's likely to be inside the US. But 45% of the users never made a reservation.
Age
Now that I know there is no difference between male and female reservations at first sight, I'll dig into the age.
End of explanation
age = 45
younger = sum(users.loc[users['age'] < age, 'country_destination'].value_counts())
older = sum(users.loc[users['age'] > age, 'country_destination'].value_counts())
younger_destinations = users.loc[users['age'] < age, 'country_destination'].value_counts() / younger * 100
older_destinations = users.loc[users['age'] > age, 'country_destination'].value_counts() / older * 100
younger_destinations.plot(kind='bar', width=width, color='#63EA55', position=0, label='Youngers', rot=0)
older_destinations.plot(kind='bar', width=width, color='#4DD3C9', position=1, label='Olders', rot=0)
plt.legend()
plt.xlabel('Destination Country')
plt.ylabel('Percentage')
sns.despine()
plt.show()
Explanation: As expected, the common age to travel is between 25 and 40. Let's see if, for example, older people travel in a different way. Let's pick an arbitrary age to split into two groups. Maybe 45?
End of explanation
print((sum(users.language == 'en') / users.shape[0])*100)
Explanation: We can see that the young people tend to stay in the US, while the older people choose to travel outside the country. Of course, the differences between them are not big, and we must remember that we are missing 42% of the ages.
The first thing I thought when reading the problem was the importance of the native language when choosing the destination country. So let's see how many users use English as their main language:
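For a slightly fuller picture than the single English-language percentage, a small sketch of the overall language distribution:
```
(users.language.value_counts(normalize=True) * 100).head(10)
```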
End of explanation
sns.set_style("whitegrid", {'axes.edgecolor': '0'})
sns.set_context("poster", font_scale=1.1)
users.date_account_created.value_counts().plot(kind='line', linewidth=1.2, color='#FD5C64')
Explanation: With 96% of users using English as their language, it is understandable that a lot of people stay in the US. Someone may be thinking: if the language is important, why not travel to GB? We need to remember that there are also a lot of factors we are not accounting for, so making assumptions or predictions like that might be dangerous.
Dates
To see the dates of our users and the timespan they cover, let's plot the number of accounts created over time:
End of explanation
date_first_active = users.date_first_active.apply(lambda x: datetime.datetime(x.year, x.month, x.day))
date_first_active.value_counts().plot(kind='line', linewidth=1.2, color='#FD5C64')
Explanation: It's remarkable how fast Airbnb has grown over the last 3 years. Does this correlate with the date when the user was active for the first time? It should be very similar, so doing this is a way to check the data!
End of explanation
users_2013 = users[users['date_first_active'] > pd.to_datetime(20130101, format='%Y%m%d')]
users_2013 = users_2013[users_2013['date_first_active'] < pd.to_datetime(20140101, format='%Y%m%d')]
date_first_active = users_2013.date_first_active.apply(lambda x: datetime.datetime(x.year, x.month, x.day))
date_first_active.value_counts().plot(kind='line', linewidth=2, color='#FD5C64')
plt.show()
Explanation: We can see that it's almost the same as date_account_created, and also notice the small peaks. We can either smooth the graph or dig into those peaks. Let's dig in:
End of explanation
weekdays = []
for date in users.date_account_created:
weekdays.append(date.weekday())
weekdays = pd.Series(weekdays)
sns.barplot(x = weekdays.value_counts().index, y=weekdays.value_counts().values, order=range(0,7))
plt.xlabel('Week Day')
sns.despine()
Explanation: At first sight we can see a small pattern: there are peaks at regular intervals. Looking more closely:
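Since date_account_created is already a datetime column, the same weekday counts can also be obtained without an explicit loop — a sketch:
```
users.date_account_created.dt.weekday.value_counts().sort_index()
```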
End of explanation
date = pd.to_datetime(20140101, format='%Y%m%d')
before = sum(users.loc[users['date_first_active'] < date, 'country_destination'].value_counts())
after = sum(users.loc[users['date_first_active'] > date, 'country_destination'].value_counts())
before_destinations = users.loc[users['date_first_active'] < date,
'country_destination'].value_counts() / before * 100
after_destinations = users.loc[users['date_first_active'] > date,
'country_destination'].value_counts() / after * 100
before_destinations.plot(kind='bar', width=width, color='#63EA55', position=0, label='Before 2014', rot=0)
after_destinations.plot(kind='bar', width=width, color='#4DD3C9', position=1, label='After 2014', rot=0)
plt.legend()
plt.xlabel('Destination Country')
plt.ylabel('Percentage')
sns.despine()
plt.show()
Explanation: The local minima are on Sundays (when people use the Internet less), and the maximum usually falls on Tuesdays!
The last date-related plot I want to see is the next one:
End of explanation
users.affiliate_channel.value_counts()
users.affiliate_provider.value_counts()
users.first_affiliate_tracked.value_counts()
Explanation: It's a clean comparison of usual destinations then and now, where we can see how the newer users register more and book less, and when they do book, they stay in the US.
Affiliate Information
End of explanation |
6,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tiny offset from zero here, but overall it looks pretty good.
Step1: With default analogRead settings, 12-bit resolution, we see 4 µA measured current noise on 10 mA full scale, with no effort to reduce the bandwidth of any of the components. | Python Code:
np.std(df.y_scaled[np.logical_and(df.x_scaled < 1, df.x_scaled > 0.5)], ddof=1)*1000
Explanation: Tiny offset from zero here, but overall it looks pretty good.
End of explanation
(4./10000)**-1
Explanation: With default analogRead settings, 12-bit resolution, we see 4 µA measured current noise on 10 mA full scale, with no effort to reduce the bandwidth of any of the components.
End of explanation |
6,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
In this notebook the full dataset is broken into 26 subsets for each of the unique values of the kinase key. A random forest is fit to each of the subsets and evaluated using $k=5$-fold cross validation. The purpose in doing this is to determine which of the kinases are more difficult to classify and to verify reports that the fgfr1 kinase may have "bad" data.
Step1: Load the data
The data is distributed amongst 282 50 MB .csv files, so the glob module is used to read these files, via a generic template path, into a list; the list elements are then concatenated along the row axis (axis=0) to create the full dataset. This takes about 10 minutes.
Step2: After loading the data, extract the receptor names so that it is possible to form the separate data subsets.
Step3: Now iterate through the list of receptors and extract the corresponding subset of training data from the full dataframe. Keep in mind that the number of examples in each set is not the same, therefore the average f1 of each subset is stored in a list for later visualization. A random forest is fit to each of the subsets using $k=5$-fold cross validation with the scoring metric set to the f1-score in order to capture the presence of type I (precision) and type II (recall) errors. Accuracy is not used due to the fact that the data is imbalanced and a good accuracy score may be misleading regarding the performance of the classifier at correctly classifying positive training/testing examples.
\begin{equation} Precision = \frac{TP}{TP + FP} \end{equation}
\begin{equation} Recall = \frac{TP}{TP + FN} \end{equation}
\begin{equation} F1 = 2 \, \frac{Precision \cdot Recall}{Precision + Recall} \end{equation}
Comparison of Classification with Random Forest Optimized using Randomized Grid Search
Step4: Visualize the Results
To get an idea of how the random forest tends to perform across the subsets of data, a violin plot is used to communicate the median and inter-quartile range of the data, as well as to visualize the estimated density of the samples at each point. As one can see, the distribution is multimodal, which implies some abnormality if we expect the f1-scores for any particular subset to lie within a single-peaked Gaussian.
import pandas as pd
import time
import glob
import numpy as np
from scipy.stats import randint as sp_randint
from prettytable import PrettyTable
from sklearn.preprocessing import Imputer
from sklearn.model_selection import train_test_split, cross_val_score, RandomizedSearchCV
from sklearn.metrics import f1_score, accuracy_score, make_scorer
from sklearn.ensemble import RandomForestClassifier
Explanation: Overview
In this notebook the full dataset is broken into 26 subsets for each of the unique values of the kinase key. A random forest is fit to each of the subsets and evaluated using $k=5$-fold cross validation. The purpose in doing this is to determine which of the kinases are more difficult to classify and to verify reports that the fgfr1 kinase may have "bad" data.
End of explanation
load_data_t0 = time.clock()
df = pd.concat([pd.read_csv(filename, index_col=[1,0], na_values=['na'], engine='c', header=0) for filename in glob.glob("data/parser_output/csv/*.csv")],axis=0)
load_data_t1 = time.clock()
print ("data loaded in ~", ((load_data_t1 - load_data_t0)/60), "minutes.")
Explanation: Load the data
The data is distributed amongst 282 50 MB .csv files, so the glob module is used to read these files, via a generic template path, into a list; the list elements are then concatenated along the row axis (axis=0) to create the full dataset. This takes about 10 minutes.
End of explanation
receptor_names = list(df.index.get_level_values(0).unique())
Explanation: After loading the data, extract the receptor names so that it is possible to form the separate data subsets.
End of explanation
rforest_params = {"n_estimators": sp_randint(pow(2,5),pow(2,7))}
cv_score_list = []
outputTable = PrettyTable()
outputTable.field_names = ["receptor","N","%positive","Mean F1","Min F1","Max F1"]
for receptor in receptor_names:
receptor_df = df.iloc[df.index.get_level_values(0) == receptor]
X = Imputer().fit_transform(receptor_df.drop('label', axis=1).as_matrix())
y = pd.to_numeric(receptor_df['label']).as_matrix()
#rforest = RandomizedSearchCV(RandomForestClassifier(oob_score=True, class_weight='balanced'), rforest_params, cv = 3, scoring = make_scorer(f1_score),n_jobs=3)
rforest = RandomForestClassifier(oob_score=True, class_weight='balanced',n_estimators=100)
cv_score = cross_val_score(rforest,X,y,scoring='f1',cv=5)
cv_score_list.append(np.mean(cv_score))
outputTable.add_row([receptor,receptor_df.shape[0],(100*(y[y==1].shape[0]/y.shape[0])),np.mean(cv_score),np.min(cv_score),np.max(cv_score)])
del rforest
del X
del y
print(outputTable)
Explanation: Now iterate through the list of receptors and extract the corresponding subset of training data from the full dataframe. Keep in mind that the number of examples in each set is not the same, therefore the average f1 of each subset is stored in a list for later visualization. A random forest is fit to each of the subsets using $k=5$-fold cross validation with the scoring metric set to the f1-score in order to capture the presence of type I (precision) and type II (recall) errors. Accuracy is not used due to the fact that the data is imbalanced and a good accuracy score may be misleading regarding the performance of the classifier at correctly classifying positive training/testing examples.
\begin{equation} Precision = \frac{TP}{TP + FP} \end{equation}
\begin{equation} Recall = \frac{TP}{TP + FN} \end{equation}
\begin{equation} F1 = 2 \, \frac{Precision \cdot Recall}{Precision + Recall} \end{equation}
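For example, a classifier with precision 0.8 and recall 0.5 would score
\begin{equation} F1 = 2 \, \frac{0.8 \cdot 0.5}{0.8 + 0.5} \approx 0.62, \end{equation}
illustrating how the harmonic mean penalizes an imbalance between the two error types.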
Comparison of Classification with Random Forest Optimized using Randomized Grid Search
End of explanation
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=[15,6])
plt.xlabel("mean_f1")
sns.violinplot(x=cv_score_list, cut=0)
print ("Mean F1:",np.mean(cv_score_list),"\tMin F1:",np.min(cv_score_list),"\tMax F1:",np.max(cv_score_list))
Explanation: Visualize the Results
To get an idea of how the random forest tends to perform across the subsets of data, a violin plot is used to communicate the median and inter-quartile range of the data, as well as to visualize the estimated density of the samples at each point. As one can see, the distribution is multimodal, which implies some abnormality if we expect the f1-scores for any particular subset to lie within a single-peaked Gaussian.
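To see which kinases sit in the lower mode, a small follow-up sketch (cv_score_list is built in the same order as receptor_names; the 0.5 cut-off here is only illustrative):
```
low_scorers = [(r, round(s, 3)) for r, s in zip(receptor_names, cv_score_list) if s < 0.5]
print(sorted(low_scorers, key=lambda pair: pair[1]))
```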
End of explanation |
6,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Motivation
We can view pretty much all of machine learning (ML) (and this is one of many possible views) as an optimization exercise. Our challenge in supervized learning is to find a function that maps the inputs of a certain system to its outputs. Since we don't have direct access to that function, we have to estimate it. We aim to find the best possible estimate. Whenever we use the word "best" in mathematics, we imply some kind of optimization. Thus we either maximize some performance function, which increases for better estimates, or minimize some loss function, which decreases for better estimates. In general, we refer to the function that we optimize as the objective function.
There are elements of both science and art in the choice of performance/loss functions. For now let us focus on optimization itself.
Univariate functions
From school many of us remember how to optimize functions of a single scalar variable — univariate functions, such as, for example,
$$f(x) = -2x^2 + 6x + 9.$$
In Python we would define this function as
Step1: So we can pass values of $x$ to it as arguments and obtain the corresponding values $f(x)$ as the function's return value
Step2: Whenever we are dealing with functions, it is always a good idea to visually examine their graphs
Step3: Unsurprisingly (if we remember high school mathematics), the graph of our univariate quadratic (because the highest power of $x$ in it comes as $x^2$) function is a parabola. We are lucky
Step4: Now consider the function
$$f(x) = \frac{1}{x} \sin(x).$$
It has a single global maximum, two global minima, and infinitely many local maxima and minima.
Step5: High school optimization
Many of us remember from school this method of optimising functions. For our function, say
$$f(x) = -2x^2 + 6x + 9,$$
find the function's derivative. If we forgot how to differentiate functions, we can look up the rules of differentiation, say, on Wikipedia. In our example, differentiation is straightforward, and yields
$$\frac{d}{dx}f(x) = -4x + 6.$$
However, if we have completely forgotten the rules of differentiation, one particular Python library — the one for doing symbolic maths — comes in useful
Step6: Our next step is to find such $x$ (we'll call it $x_{\text{max}}$), at which this derivative becomes zero. This notation is somewhat misleading, because it is $f(x_{\text{max}})$ that is maximum, not $x_{\text{max}}$ itself; $x_{\text{max}}$ is the location of the function's maximum
Step7: In order to check that the value is indeed a local maximum and not a local minimum (and not a saddle point, look them up), we look at the second derivative of the function,
$$\frac{d^2}{dx^2}f(x_{\text{max}}) = -4.$$
Since this second derivative is negative at $x_{\text{max}}$, we are indeed looking at an (at least local) maximum. In this case we are lucky
Step8: Let us label this maximum on the function's graph
Step9: Multivariate functions
So far we have considered the optimization of real-valued functions of a single real variable, i.e. $f
Step10: Let's plot its graph. First, we need to compute the values of the function on a two-dimensional mesh grid
Step11: Then we can use the following code to produce a 3D plot
Step12: It may be more convenient to implement multivariate functions as functions of a single vector (more precisely, rank-1 NumPy array) in Python
Step13: Optimising multivariate functions analytically
The analytical method of finding the optimum of a multivariate function is similar to that for univariate functions. As the function has multiple arguments, we need to find its so-called partial derivative with respect to each argument. They are computed similarly to normal derivatives, while pretending that all the other arguments are constants
Step14: The Jacobian
Notice that, for multivalued (not just multivariate) functions, $\mathbb{R}^n \rightarrow \mathbb{R}^m$, the gradient vector of partial derivatives generalizes to the Jacobian matrix
Step15: We have already found that its derivative is given by
$$\frac{df}{dx}(x) = -4x + 6.$$
Step16: The Newton-Raphson method starts with some initial guess, $x_0$, and then proceeds iteratively
Step17: Now let's apply it to our function
Step18: We see that the method converges quite quickly to (one of the) roots. Notice that, which of the two roots we converge to depends on the initial guess
Step19: Newton-Raphson is a root finding, not an optimization, algorithm. However, recall that optimization is equivalent to finding the root of the derivative function. Thus we can apply this algorithm to the derivative function (we also need to provide the second derivative function) to find a local optimum of the function
Step20: The result is consistent with our analytical solution.
Newton's method for multivariate functions
Newton's method can be generalized to multivariate functions. For multivalued multivariate functions $f
Step21: and the Jacobian as
Step22: Let's see how we can convert NumPy stuff to rank-2 arrays. For rank-1 arrays
Step23: if we want a column (rather than row) vector, which is probably a sensible default. If we wanted a row vector, we could do
Step24: Existing rank-2 arrays remain unchanged by this
Step25: For scalars, np.shape(a)[0] won't work, as their shape is (), so we need to do something special. Based on this information, let us implement the auxiliary function to_rank_2
Step26: And test it
Step27: Now let's generalize our implementation of the Newton-Raphson method
Step28: NB! TODO
Step30: Grid search
What we have considered so far isn't the most straightforward optimization procedure. A natural first thing to do is often the grid search.
In grid search, we pick a subset of the parameter space, usually a rectangular grid, evaluate the value at each grid point and pick the point where the function is largest (smallest) as the approximate location of the maximum (minimum).
As a by-product of the grid search we get a heat-map — an excellent way of visualising the magnitude of the function on the parameter space.
If we have more than two parameters, we can produce heatmaps for each parameter pair. (E.g., for a three-dimensional function, $(x_1, x_2)$, $(x_1, x_3)$, $(x_2, x_3)$.)
Grid search is often useful for tuning machine learning hyperparameters and finding optimal values for trading (and other) strategies, in which case a single evaluation of the objective function may correspond to a single backtest run over all available data.
Let us use the following auxiliary function from https | Python Code:
def func(x): return -2. * x**2 + 6. * x + 9.
Explanation: Motivation
We can view pretty much all of machine learning (ML) (and this is one of many possible views) as an optimization exercise. Our challenge in supervized learning is to find a function that maps the inputs of a certain system to its outputs. Since we don't have direct access to that function, we have to estimate it. We aim to find the best possible estimate. Whenever we use the word "best" in mathematics, we imply some kind of optimization. Thus we either maximize some performance function, which increases for better estimates, or minimize some loss function, which decreases for better estimates. In general, we refer to the function that we optimize as the objective function.
There are elements of both science and art in the choice of performance/loss functions. For now let us focus on optimization itself.
Univariate functions
From school many of us remember how to optimize functions of a single scalar variable — univariate functions, such as, for example,
$$f(x) = -2x^2 + 6x + 9.$$
In Python we would define this function as
End of explanation
func(0.)
Explanation: So we can pass values of $x$ to it as arguments and obtain the corresponding values $f(x)$ as the function's return value:
End of explanation
xs = np.linspace(-10., 10., 100)
fs = [func(x) for x in xs]
plt.plot(xs, fs, 'o');
Explanation: Whenever we are dealing with functions, it is always a good idea to visually examine their graphs:
End of explanation
xs = np.linspace(-100., 100., 1000)
fs = xs * np.cos(xs)
plt.plot(xs, fs);
Explanation: Unsurprisingly (if we remember high school mathematics), the graph of our univariate quadratic (because the highest power of $x$ in it comes as $x^2$) function is a parabola. We are lucky: this function is concave — if we join any two points on its graph, the straight line joining them will always lie below the graph. For such functions we can usually find the global optimum (minimum or maximum, in this case the function has a single global maximum).
Global versus local optima
We say global optimum, because a function may have multiple optima. All of them are called local optima, but only the largest maxima (the smallest minima) are referred to as global.
Consider the function
$$f(x) = x \cos(x).$$
It has numerous local minima and local maxima over $x \in \mathbb{R}$, but no global minimum/maximum:
End of explanation
xs = np.linspace(-100., 100., 1000)
fs = (1./xs) * np.sin(xs)
plt.plot(xs, fs);
Explanation: Now consider the function
$$f(x) = \frac{1}{x} \sin(x).$$
It has a single global maximum, two global minima, and infinitely many local maxima and minima.
End of explanation
import sympy
x = sympy.symbols('x')
func_diff = sympy.diff(-2. * x**2 + 6. * x + 9, x)
func_diff
Explanation: High school optimization
Many of us remember from school this method of optimising functions. For our function, say
$$f(x) = -2x^2 + 6x + 9,$$
find the function's derivative. If we forgot how to differentiate functions, we can look up the rules of differentiation, say, on Wikipedia. In our example, differentiation is straightforward, and yields
$$\frac{d}{dx}f(x) = -4x + 6.$$
However, if we have completely forgotten the rules of differentiation, one particular Python library — the one for doing symbolic maths — comes in useful:
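SymPy can also hand us the second derivative directly, which we will want shortly when classifying the stationary point — a small sketch:
```
sympy.diff(-2. * x**2 + 6. * x + 9, x, 2)   # evaluates to -4
```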
End of explanation
roots = sympy.solve(func_diff, x)
roots
x_max = roots[0]
Explanation: Our next step is to find such $x$ (we'll call it $x_{\text{max}}$), at which this derivative becomes zero. This notation is somewhat misleading, because it is $f(x_{\text{max}})$ that is maximum, not $x_{\text{max}}$ itself; $x_{\text{max}}$ is the location of the function's maximum:
$$\frac{d}{dx}f(x_{\text{max}}) = 0,$$
i.e.
$$-4x_{\text{max}} + 6 = 0.$$
Hence the solution is
$$x_{\text{max}} = -6 / (-4) = 3/2 = 1.5$$
We could also use SymPy to solve the above equation:
End of explanation
f_max = func(x_max)
f_max
Explanation: In order to check that the value is indeed a local maximum and not a local minimum (and not a saddle point, look them up), we look at the second derivative of the function,
$$\frac{d^2}{dx^2}f(x_{\text{max}}) = -4.$$
Since this second derivative is negative at $x_{\text{max}}$, we are indeed looking at an (at least local) maximum. In this case we are lucky: this is also a global maximum. However, in general, it isn't easy to check mathematically whether an optimum is global or not. This is one of the major challenges in optimization.
Let us now find the value of the function at the maximum by plugging in $x_{\text{max}}$ into $f$:
$$f_{\text{max}} = f(x_{\text{max}}) = -2 x_{\text{max}}^2 + 6 x_{\text{max}} + 9 = -2 \cdot 1.5^2 + 6 \cdot 1.5 + 9 = 13.5.$$
End of explanation
xs = np.linspace(-10., 10., 100)
fs = [func(x) for x in xs]
plt.plot(xs, fs, 'o')
plt.plot(x_max, f_max, 'o', color='red')
plt.axvline(x_max, color='red')
plt.axhline(f_max, color='red');
Explanation: Let us label this maximum on the function's graph:
End of explanation
def func(x1, x2): return -x1**2 - x2**2 + 6.*x1 + 3.*x2 + 9.
Explanation: Multivariate functions
So far we have considered the optimization of real-valued functions of a single real variable, i.e. $f: \mathbb{R} \rightarrow \mathbb{R}$.
However, most functions that we encounter in data science and machine learning are multivariate, i.e. $f: \mathbb{R}^n \rightarrow \mathbb{R}$. Moreover, some are also multivalued, i.e. $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$.
(Note: univariate/multivariate refers to the function's argument, whereas single-valued/multi-valued to the function's output.)
Consider, for example, the following single-valued, multivariate function:
$$f(x_1, x_2) = -x_1^2 - x_2^2 + 6x_1 + 3x_2 + 9.$$
We could define it in Python as
End of explanation
x1s, x2s = np.meshgrid(np.linspace(-100., 100., 100), np.linspace(-100., 100., 100))
fs = func(x1s, x2s)
np.shape(fs)
Explanation: Let's plot its graph. First, we need to compute the values of the function on a two-dimensional mesh grid:
End of explanation
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.contour3D(x1s, x2s, fs, 50);
Explanation: Then we can use the following code to produce a 3D plot:
End of explanation
def func(x): return -x[0]**2 - x[1]**2 + 6.*x[0] + 3.*x[1] + 9.
Explanation: It may be more convenient to implement multivariate functions as functions of a single vector (more precisely, rank-1 NumPy array) in Python:
End of explanation
func([3, 1.5])
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.contour3D(x1s, x2s, fs, 50)
ax.plot([3], [1.5], [20.25], 'o', color='red', markersize=20);
Explanation: Optimising multivariate functions analytically
The analytical method of finding the optimum of a multivariate function is similar to that for univariate functions. As the function has multiple arguments, we need to find its so-called partial derivative with respect to each argument. They are computed similarly to normal derivatives, while pretending that all the other arguments are constants:
$$\frac{\partial}{\partial x_1} f(x_1, x_2) = -2x_1 + 6,$$
$$\frac{\partial}{\partial x_2} f(x_1, x_2) = -2x_2 + 3.$$
We call the vector of the function's partial derivatives its gradient vector, or grad:
$$\nabla f(x_1, x_2) = \begin{pmatrix} \frac{\partial}{\partial x_1} f(x_1, x_2) \\ \frac{\partial}{\partial x_2} f(x_1, x_2) \end{pmatrix}.$$
When the function is continuous and differentiable, all the partial derivatives will be 0 at a local maximum or minimum point. Saying that all the partial derivatives are zero at a point, $(x_1^*, x_2^*)$, is the same as saying the gradient at that point is the zero vector:
$$\nabla f(x_1^*, x_2^*) = \begin{pmatrix} \frac{\partial}{\partial x_1} f(x_1^*, x_2^*) \\ \frac{\partial}{\partial x_2} f(x_1^*, x_2^*) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \mathbf{0}.$$
In our example, we can easily establish that the gradient vector is zero at $x_1^* = 3$, $x_2^* = 1.5$. And the maximum value that is achieved is
End of explanation
def func(x): return -2. * x**2 + 6. * x + 9.
Explanation: The Jacobian
Notice that, for multivalued (not just multivariate) functions, $\mathbb{R}^n \rightarrow \mathbb{R}^m$, the gradient vector of partial derivatives generalizes to the Jacobian matrix:
$$\mathbf{J} = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix}.$$
Newton-Raphson's method
Newton-Raphson's method is a numerical procedure for finding zeros (roots) of functions.
For example, consider again the function
$$f(x) = -2x^2 + 6x + 9.$$
End of explanation
def func_diff(x): return -4. * x + 6.
Explanation: We have already found that its derivative is given by
$$\frac{df}{dx}(x) = -4x + 6.$$
End of explanation
def newton_raphson_method(f, fdiff, x0, iter_count=10):
x = x0
print('x_0', x0)
for i in range(iter_count):
x = x - f(x) / fdiff(x)
print('x_%d' % (i+1), x)
return x
Explanation: The Newton-Raphson method starts with some initial guess, $x_0$, and then proceeds iteratively:
$$x_{n+1} = x_n - \frac{f(x_n)}{\frac{d}{dx}f(x_n)}$$
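One way to see where this update comes from: linearizing $f$ around $x_n$ gives $f(x) \approx f(x_n) + \frac{d}{dx}f(x_n)\,(x - x_n)$, and solving the linearized equation for the point where it crosses zero yields exactly the step above.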
Let's code it up:
End of explanation
newton_raphson_method(func, func_diff, -5.)
Explanation: Now let's apply it to our function:
End of explanation
newton_raphson_method(func, func_diff, x0=5.)
Explanation: We see that the method converges quite quickly to (one of the) roots. Notice that, which of the two roots we converge to depends on the initial guess:
End of explanation
def func_diff2(x): return -4.
newton_raphson_method(func_diff, func_diff2, -5.)
Explanation: Newton-Raphson is a root finding, not an optimization, algorithm. However, recall that optimization is equivalent to finding the root of the derivative function. Thus we can apply this algorithm to the derivative function (we also need to provide the second derivative function) to find a local optimum of the function:
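For our quadratic this converges in a single step, because the derivative is linear: $x_{n+1} = x_n - \frac{-4x_n + 6}{-4} = 1.5$ for any starting point $x_n$.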
End of explanation
def func(x): return -x[0]**2 - x[1]**2 + 6.*x[0] + 3.*x[1] + 9.
Explanation: The result is consistent with our analytical solution.
Newton's method for multivariate functions
Newton's method can be generalized to multivariate functions. For multivalued multivariate functions $f: \mathbb{R}^k \rightarrow \mathbb{R}^k$, the method becomes
$$x_{n+1} = x_n - \mathbf{J}(x_n)^{-1} f(x_n),$$
where $\mathbf{J}$ is the Jacobian.
Since inverses are only defined for square matrices, for functions $f: \mathbb{R}^k \rightarrow \mathbb{R}^m$, we use the Moore-Penrose pseudoinverse $\mathbf{J}^+ = (\mathbf{J}^T \mathbf{J})^{-1} \mathbf{J}^T$ instead of $\mathbf{J}^{-1}$. Let's code this up.
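As an aside, NumPy provides this pseudoinverse directly as np.linalg.pinv; for a full-column-rank Jacobian the two expressions coincide, so in the sketch below we could equally write np.linalg.pinv(fdiff_x) instead of forming $(\mathbf{J}^T \mathbf{J})^{-1} \mathbf{J}^T$ by hand.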
Inside our generalized implementation of Newton-Raphson, we'll be working with vectors. It's probably a good idea to assume that the function and the Jacobian return rank-2 NumPy arrays.
However, one may have coded up the function as
End of explanation
def func_diff(x): return np.array([-2.*x[0] + 6., -2.*x[1] + 3.])
Explanation: and the Jacobian as
End of explanation
a = np.array([3., 5., 7.])
np.reshape(a, (np.shape(a)[0], -1))
Explanation: Let's see how we can convert NumPy stuff to rank-2 arrays. For rank-1 arrays:
End of explanation
np.reshape(a, (-1, np.shape(a)[0]))
Explanation: if we want a column (rather than row) vector, which is probably a sensible default. If we wanted a row vector, we could do
End of explanation
a = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
np.reshape(a, (np.shape(a)[0], -1))
np.reshape(a, (-1, np.shape(a)[0]))
Explanation: Existing rank-2 arrays remain unchanged by this:
End of explanation
def to_rank_2(arg, row_vector=False):
shape = np.shape(arg)
size = 1 if len(shape) == 0 else shape[0]
new_shape = (-1, size) if row_vector else (size, -1)
return np.reshape(arg, new_shape)
Explanation: For scalars, np.shape(a)[0] won't work, as their shape is (), so we need to do something special. Based on this information, let us implement the auxiliary function to_rank_2:
End of explanation
to_rank_2(5.)
to_rank_2([1., 2., 3.])
to_rank_2([[1.], [2.], [3.]])
to_rank_2([[1., 2., 3.]])
to_rank_2([[1., 2., 3], [4., 5., 6.]])
Explanation: And test it:
End of explanation
def newton_raphson_method(f, fdiff, x0, iter_count=10):
x = to_rank_2(x0)
for i in range(iter_count):
f_x = to_rank_2(f(x))
fdiff_x = to_rank_2(fdiff(x), row_vector=True)
non_square_jacobian_inv = np.dot(np.linalg.inv(np.dot(fdiff_x.T, fdiff_x)), fdiff_x.T)
x = x - np.dot(non_square_jacobian_inv, f_x)
print('x_%d' % (i+1), x)
return x
newton_raphson_method(func, func_diff, np.array([-10., -10.]), iter_count=5)
func_diff([-80.25, 25.125])
Explanation: Now let's generalize our implementation of the Newton-Raphson method:
End of explanation
import scipy.optimize
scipy.optimize.minimize(lambda x: -func(x), np.array([-80., 25.]), method='BFGS')
Explanation: NB! TODO: The above doesn't seem to work at the moment. The returned optimum is wrong. Can you spot a problem with the above implementation?
Quasi-Newton method
In practice, we may not always have access to the Jacobian of a function. There are numerical methods, known as quasi-Newton methods, which approximate the Jacobian numerically.
One such method is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. It is generally a bad idea to implement these algorithms by hand, since their implementations are often nuanced and nontrivial.
Fortunately, Python libraries provide excellent implementations of optimization algorithms.
Let us use SciPy to optimize our function.
Remember that to maximize a function we simply minimize its negative, which is what we achieve with the Python lambda below:
End of explanation
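The object returned by scipy.optimize.minimize bundles the solution and some diagnostics; a minimal sketch of inspecting it (this simply repeats the call above, this time capturing the result):
res = scipy.optimize.minimize(lambda x: -func(x), np.array([-80., 25.]), method='BFGS')
print(res.x)                 # estimated argmax; should be close to the analytical optimum (3, 1.5)
print(-res.fun)              # maximum value of the original (un-negated) function
print(res.success, res.nit)  # convergence flag and number of iterations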
def heatmap(data, row_labels, col_labels, ax=None,
cbar_kw={}, cbarlabel="", **kwargs):
    """
    Create a heatmap from a numpy array and two lists of labels.
Arguments:
data : A 2D numpy array of shape (N,M)
row_labels : A list or array of length N with the labels
for the rows
col_labels : A list or array of length M with the labels
for the columns
Optional arguments:
ax : A matplotlib.axes.Axes instance to which the heatmap
is plotted. If not provided, use current axes or
create a new one.
cbar_kw : A dictionary with arguments to
:meth:`matplotlib.Figure.colorbar`.
cbarlabel : The label for the colorbar
    All other arguments are directly passed on to the imshow call.
    """
if not ax:
ax = plt.gca()
# Plot the heatmap
im = ax.imshow(data, **kwargs)
# Create colorbar
cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw)
cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")
# We want to show all ticks...
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
# ... and label them with the respective list entries.
ax.set_xticklabels(col_labels)
ax.set_yticklabels(row_labels)
# Let the horizontal axes labeling appear on top.
ax.tick_params(top=True, bottom=False,
labeltop=True, labelbottom=False)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=-30, ha="right",
rotation_mode="anchor")
# Turn spines off and create white grid.
for edge, spine in ax.spines.items():
spine.set_visible(False)
ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)
ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
ax.grid(which="minor", color="w", linestyle='-', linewidth=3)
ax.tick_params(which="minor", bottom=False, left=False)
return im, cbar
def func(x1, x2): return -x1**2 - x2**2 + 6.*x1 + 3.*x2 + 9.
x1s_ = np.linspace(-100., 100., 10)
x2s_ = np.linspace(-100., 100., 10)
x1s, x2s = np.meshgrid(x1s_, x2s_)
fs = func(x1s, x2s)
np.shape(fs)
heatmap(fs, x1s_, x2s_)[0];
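The evaluated grid can also be used directly to locate the approximate maximizer discussed in the explanation below; a minimal sketch, assuming fs, x1s_ and x2s_ as computed above:
# Index of the largest grid value, converted back to parameter values.
# Note the meshgrid 'xy' indexing: rows of fs correspond to x2s_, columns to x1s_.
i_max, j_max = np.unravel_index(np.argmax(fs), fs.shape)
print(x1s_[j_max], x2s_[i_max], fs[i_max, j_max])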
Explanation: Grid search
What we have considered so far isn't the most straightforward optimization procedure. A natural first thing to do is often the grid search.
In grid search, we pick a subset of the parameter space, usually a rectangular grid, evaluate the function at each grid point, and pick the point where the value is largest (smallest) as the approximate location of the maximum (minimum).
As a by-product of the grid search we get a heat-map — an excellent way of visualising the magnitude of the function on the parameter space.
If we have more than two parameters, we can produce heatmaps for each parameter pair. (E.g., for a three-dimensional function, $(x_1, x_2)$, $(x_1, x_3)$, $(x_2, x_3)$.)
Grid search is often useful for tuning machine learning hyperparameters and finding optimal values for trading (and other) strategies, in which case a single evaluation of the objective function may correspond to a single backtest run over all available data.
Let us use the following auxiliary function from https://matplotlib.org/gallery/images_contours_and_fields/image_annotated_heatmap.html
End of explanation |
6,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benign or not?
Predicting the incidence of breast cancer diagnosis using multiple cytological characteristics
Kyle Willett (12 Jul 2016)
This project focuses on analyzing results of a study measuring several different physical characteristics of a tumor, and then building a classifier to predict whether the tumor is ultimately benign (good for the patient) or malignant (bad). The data were originally collected from a study at the University of Wisconsin and retrieved from the Machine Learning Repository at UC Irvine.
The data schema are described in detail on the UCI site. The dataset is a single file containing 699 rows, each corresponding to properties of a tumor measured in a particular patient. For each row (instance), there is a (presumably) random ID, nine attributes that measure properties of the tumor, and a class label that indicates whether the tumor was benign or malignant. Each of the nine attributes is scaled to be an integer between [1,10]. The goal is to use the values of these attributes to predict the class.
Wolberg & Mangasarian (1990) achieved an accuracy of 93.5% using two pairs of parallel hyperplanes to separate the data. Our minimum goal is to improve upon that benchmark.
Initial inspection and plotting
Step1: The original data set had 16 rows with at least one missing attribute (designated as "?") in the data. These instances included 14 benign and 2 malignant tumors, which is a significantly different ratio than the roughly 65%-35% distribution of benign-to-malignant classifiers over the entire dataset. This could be an indicator that missing experimental values are correlated with the class label.
Since this is a very small fraction of the total dataset (16/699 = 2.2%), the dataset used in this notebook simply eliminates any row where "?" appears.
Step2: As an initial look, let's plot the data using a version of the Seaborn plotting package. Since there are nine attributes, this plots each against each other to visualize the relative correlations.
Plots in the lower left of the grid are 2-D kernel-density estimates (KDE), which show a smoothed version of the relationship between the attributes. Plots in the upper right show the same data as a scatter plot; this is more difficult to interpret since each value can only be an integer from 1 to 10 and most of the points overlap. The plots along the diagonal show both a histogram and a 1-D KDE of each distribution.
Step4: The plot above shows the relationships between all attributes (with the exception of the mitoses vector, which had undetermined mathematical issues measuring a unique KDE). Looking along the diagonals, most of the attributes are relatively unbalanced, having the majority of their values $\leq 2$. The exceptions are clump_thickness and bland_chromatin, which have higher fractions of values $>5$.
Creating a classifier
Selection of an estimator is done in part by assessing the traits of the dataset and the desired predictions. The ultimate goal is to predict categorical data (benign or malignant), so regression models aren't appropriate. The data are pre-labeled with their class, so clustering isn't necessary. The total amount of data is $\lesssim1000$ samples, so a simple implementation of a support vector machine classifier (SVC) will be able to handle it.
SVCs should be useful because they can classify high-dimensional data ($N=9$ here) and have a number of different parameters that can be optimized for the kernel function. The probability estimates for each class can be estimated later using $k$-fold cross-validation.
Step5: Simple training/test split at 40%
To avoid overfitting the model, the data are split into a training sample on which the model is fit and a test sample on which it is evaluated. This begins with a 60%-40% split for the test and training data.
Step6: So there are 409 samples in the training data and 274 in the test data. Now let's fit the model to the training data and assess the accuracy on the test.
Step7: This is excellent; without any fine-tuning, our 97% accuracy exceeds the initial benchmark of 93% from the published paper.
Cross-validate
To do a better job of assessing the model accuracy than just using a single training-test split (which could be biased), cross-validation can be used to run the same comparison many times using different splits. The implementation below runs 5-fold validation; the variance between the results gives an estimate of the uncertainty in the accuracy.
Step9: This is a very close result to the single test above, indicating that the initial split was fairly unbiased.
Parameter estimation through grid search
This is initially a good result, but can be fine-tuned. The SVM has several parameters that control the features of the separating hyperplanes, including the nature of the kernel, the kernel coefficient, and the penalty parameter of the error term. These can be optimized to provide a better fit.
To prevent overfitting through the optimization, the dataset is now split into three sets
Step10: Tuning the parameters in the grid can attempt to optimize different predictive value; the grid above examines both precision and recall. For this particular dataset, the best parameters are the same no matter which one is optimized, which is useful. The overall accuracy is still the same as the initial run.
Step11: Results | Python Code:
# Load some basic plotting and data analysis packages from Python
%matplotlib inline
from matplotlib import pyplot as plt
import pandas as pd
import seaborn as sns;
Explanation: Benign or not?
Predicting the incidence of breast cancer diagnosis using multiple cytological characteristics
Kyle Willett (12 Jul 2016)
This project focuses on analyzing results of a study measuring several different physical characteristics of a tumor, and then building a classifier to predict whether the tumor is ultimately benign (good for the patient) or malignant (bad). The data were originally collected from a study at the University of Wisconsin and retrieved from the Machine Learning Repository at UC Irvine.
The data schema are described in detail on the UCI site. The dataset is a single file containing 699 rows, each corresponding to properties of a tumor measured in a particular patient. For each row (instance), there is a (presumably) random ID, nine attributes that measure properties of the tumor, and a class label that indicates whether the tumor was benign or malignant. Each of the nine attributes is scaled to be an integer between [1,10]. The goal is to use the values of these attributes to predict the class.
Wolberg & Mangasarian (1990) achieved an accuracy of 93.5% using two pairs of parallel hyperplanes to separate the data. Our minimum goal is to improve upon that benchmark.
Initial inspection and plotting
End of explanation
# Read in the cleaned dataset as a pandas dataframe.
names = ["sample", "clump_thickness", "uniformity_size", "uniformity_shape",
"adhesion", "single_epithelial", "bare_nuclei", "bland_chromatin",
"normal_nucleoli", "mitoses", "class"]
data = pd.read_csv("../dc/breast-cancer-wisconsin-cleaned.data",names=names)
Explanation: The original data set had 16 rows with at least one missing attribute (designated as "?") in the data. These instances included 14 benign and 2 malignant tumors, which is a significantly different ratio than the roughly 65%-35% distribution of benign-to-malignant classifiers over the entire dataset. This could be an indicator that missing experimental values are correlated with the class label.
Since this is a very small fraction of the total dataset (16/699 = 2.2%), the dataset used in this notebook simply eliminates any row where "?" appears.
End of explanation
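For reference, one way such a cleaned file could be produced from the raw UCI download is to treat "?" as missing and drop incomplete rows; this is only an illustration (the raw file name is an assumption), not necessarily the exact preprocessing used:
raw = pd.read_csv("breast-cancer-wisconsin.data", names=names, na_values="?")  # hypothetical raw file name
cleaned = raw.dropna()  # drops the 16 rows containing "?" values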
# Plot the relationships in the full dataset on a large grid
with sns.plotting_context("notebook", font_scale=2):
g = sns.PairGrid(data[names[1:-1]], diag_sharey=False)
try:
g.map_lower(sns.kdeplot, cmap="Blues_d",dropna=True)
except ValueError:
pass
try:
g.map_upper(plt.scatter)
except ValueError:
pass
try:
g.map_diag(sns.distplot)
except ValueError:
pass
Explanation: As an initial look, let's plot the data using a version of the Seaborn plotting package. Since there are nine attributes, this plots each against each other to visualize the relative correlations.
Plots in the lower left of the grid are 2-D kernel-density estimates (KDE), which show a smoothed version of the relationship between the attributes. Plots in the upper right show the same data as a scatter plot; this is more difficult to interpret since each value can only be an integer from 1 to 10 and most of the points overlap. The plots along the diagonal show both a histogram and a 1-D KDE of each distribution.
End of explanation
# Import the modules for the SVM from scikit-learn
import numpy as np
from sklearn.svm import LinearSVC
from sklearn import cross_validation
# Load the data.
# SVM expects the attributes to be an array in the shape (N,M)
# and the labels as an array with shape (N,)
# where N = number of rows (samples)
# and M = number of attributes
X = np.array(data[names[1:-1]])
y = np.array(data[['class']]).ravel()
Explanation: The plot above shows the relationships between all attributes (with the exception of the mitoses vector, which had undetermined mathematical issues measuring a unique KDE). Looking along the diagonals, most of the attributes are relatively unbalanced, having the majority of their values $\leq 2$. The exceptions are clump_thickness and bland_chromatin, which have higher fractions of values $>5$.
Creating a classifier
Selection of an estimator is done in part by assessing the traits of the dataset and the desired predictions. The ultimate goal is to predict categorical data (benign or malignant), so regression models aren't appropriate. The data are pre-labeled with their class, so clustering isn't necessary. The total amount of data is $\lesssim1000$ samples, so a simple implementation of a support vector machine classifier (SVC) will be able to handle it.
SVCs should be useful because they can classify high-dimensional data ($N=9$ here) and have a number of different parameters that can be optimized for the kernel function. The probability estimates for each class can be estimated later using $k$-fold cross-validation.
End of explanation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
X, y, test_size=0.4, random_state=1)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
Explanation: Simple training/test split at 40%
To avoid overfitting the model, the data are split into a training sample on which the model is fit and a test sample on which it is evaluated. This begins with a 60%-40% split for the test and training data.
End of explanation
clf = LinearSVC(C=1).fit(X_train, y_train)
print("Accuracy: %0.2f" % clf.score(X_test, y_test))
Explanation: So there are 409 samples in the training data and 274 in the test data. Now let's fit the model to the training data and assess the accuracy on the test.
End of explanation
clf = LinearSVC(C=1)
scores = cross_validation.cross_val_score(clf, X, y, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
Explanation: This is excellent; without any fine-tuning, our 97% accuracy exceeds the initial benchmark of 93% from the published paper.
Cross-validate
To do a better job of assessing the model accuracy than just using a single training-test split (which could be biased), cross-validation can be used to run the same comparison many times using different splits. The implementation below runs 5-fold validation; the variance between the results gives an estimate of the uncertainty in the accuracy.
End of explanation
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
# Limit to a linear kernel; this will allow relative ranking of
# the features
tuned_parameters = [{'C': np.logspace(-2,0,25)}]
scores = ['precision', 'recall']
for score in scores:
print("\nTuning hyper-parameters for %s" % score)
clf = GridSearchCV(LinearSVC(), tuned_parameters, cv=5,
scoring='%s_weighted' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print(clf.best_params_)
best_params = clf.best_params_
Explanation: This is a very close result to the single test above, indicating that the initial split was fairly unbiased.
Parameter estimation through grid search
This is initially a good result, but can be fine-tuned. The SVM has several parameters that control the features of the separating hyperplanes, including the nature of the kernel, the kernel coefficient, and the penalty parameter of the error term. These can be optimized to provide a better fit.
To prevent overfitting through the optimization, the dataset is now split into three sets: training, test, and validation. The SVM parameters will be optimized on the training and test sets and the accuracy evaluated on the validation set.
End of explanation
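The validation split mentioned above can be produced with a second call to train_test_split; a minimal sketch (the 50/50 proportion of the second split is an arbitrary assumption):
# Hold out half of the previous test split as a final validation set (illustrative only).
X_dev, X_val, y_dev, y_val = cross_validation.train_test_split(
    X_test, y_test, test_size=0.5, random_state=1)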
# Test again with the new parameters and cross-validation
clf = LinearSVC(C=best_params['C']).fit(X_train,y_train)
scores = cross_validation.cross_val_score(clf, X, y, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
Explanation: Tuning the parameters in the grid can attempt to optimize different predictive value; the grid above examines both precision and recall. For this particular dataset, the best parameters are the same no matter which one is optimized, which is useful. The overall accuracy is still the same as the initial run.
End of explanation
benign_test = X_test[y_test == 2]
malignant_test = X_test[y_test == 4]
n = len(X_test)
predicted_benign = clf.predict(benign_test)
predicted_malignant = clf.predict(malignant_test)
print "True positive rate: {}/{}".format(sum(predicted_benign == 2),len(benign_test))
print "True negative rate: {}/{}".format(sum(predicted_malignant == 4),len(malignant_test))
print "\nFalse positive rate: {}/{} ({:.1f}% of all cases)".format(
sum(predicted_benign == 4),len(benign_test),sum(predicted_benign == 4)/float(n)*100.)
print "False negative rate: {}/{} ({:.1f}% of all cases)".format(
sum(predicted_malignant == 2),len(malignant_test),sum(predicted_malignant == 2)/float(n)*100.)
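The same counts can be obtained in one call with scikit-learn's confusion_matrix; a minimal sketch, assuming clf, X_test and y_test from above:
from sklearn.metrics import confusion_matrix
# Rows are true classes (2 = benign, 4 = malignant), columns are predicted classes.
print(confusion_matrix(y_test, clf.predict(X_test), labels=[2, 4]))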
Explanation: Results
End of explanation |
6,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Microbiome experiment step-by-step analysis
This is a jupyter notebook example of how to load, process and plot data from a microbiome experiment using Calour.
Setup
Import the calour module
Step1: (optional) Set the level of feedback messages from calour
can use
Step2: Also enable interactive plots inside the jupyter notebook
Step3: Loading the data
For an amplicon experiment we use ca.read_amplicon()
First parameter is the location+name of the biom table file (can be hdf5/json/txt biom table - see here for details)
Second (optional) parameter is the sample mapping file location+name. The first column should be the sample id (identical to the sample ids in the biom table). The rest of the columns are information fields about each sample.
normalize=XXX
Step4: Process the data
Get rid of the features (bacteria) with small amount of reads
We throw away all features with total reads (over all samples) < 10 (after each sample was normalized to 10k reads/sample). So a bacteria present (with 1 read) in 10 samples will be kept, as well as a bacteria present in only one sample, with 10 reads in this sample.
Note alternatively we could filter based on mean reads/sample or fraction of samples where the feature is present. Each method filters away slightly different bacteria. See filtering notebook for details on the filtering functions.
Step5: Cluster (reorder) the features so similarly behaving bacteria are close to each other
Features are clustered (hierarchical clustering) based on euclidean distance between features (over all samples), after normalizing each feature to mean 0 and std 1. For more details and examples, see the sorting notebook or the cluster_features documentation
Note that if we have a lot of features, clustering is slow, so it is recommended to first filter away the non-interesting features.
Step6: Sort the samples according to physical functioning and Disease state
Note that order within each group of similar value is maintained. We first sort by physical functioning, then sort by the disease state. So within each disease state, samples will still be sorted by physical functioning.
Step7: Plotting the data
Columns (x-axis) are the samples, rows (y-axis) are the features. We will show on the x-axis the host-individual field of each sample.
we will use the jupyter notebook GUI so we will see the interactive plot in the notebook. Alternatively we could use the qt5 GUI to see the plot in a separate standalone window.
A few cool things we can do with the interactive plot
Step8: Adding a field to the top bar
Now let's add the values of the "Sex" field into the xbar on top
First we'll also sort by sex, so values will be continuous (note we then sort by the disease state to get the two groups separated).
Step9: Differential abundance testing
Let's look for bacteria separating sick from healthy
We ask it to find all bacteria significantly different between samples with 'Control' and 'Patient' in the 'Subject' field.
By default calour uses the mean of the ranks of each feature (over all samples), with dsFDR multiple hypothesis correction.
For more information, see notebook and function doc
Step10: Plotting the differentially abundant features
Let's plot to see the behavior of these bacteria.
The output of diff_abundance is an Experiment with only the significant bacteria, which are sorted by the effect size. On the bottom is the bacteria with the largest effect size (higher in Control compared to Patient).
Step11: dbBact term enrichment
We can ask what is special about the bacteria significantly higher in the Control vs. the Patient group and vice versa.
We supply the parameter ignore_exp=[12] to ignore annotations regarding this experiment (expid=12) since it is already in the dbBact database.
Note since we need to get the per-feature annotations from dbBact, we need a live internet connection to run this command.
Step12: The enriched terms are in a calour experiment class (terms are features, bacteria are samples), so we can see the
list of enriched terms with the p-value (pval) and effect size (odif) | Python Code:
import calour as ca
Explanation: Microbiome experiment step-by-step analysis
This is a jupyter notebook example of how to load, process and plot data from a microbiome experiment using Calour.
Setup
Import the calour module
End of explanation
ca.set_log_level(11)
Explanation: (optional) Set the level of feedback messages from calour
can use:
1 for debug (lots of feedback on each command)
11 for info (useful information from some commands)
21 for warning (just warning messages)
The Calour default is warning (21)
End of explanation
%matplotlib notebook
Explanation: Also enable interactive plots inside the jupyter notebook
End of explanation
dat=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',
'data/chronic-fatigue-syndrome.sample.txt',
normalize=10000,min_reads=1000)
print(dat)
Explanation: Loading the data
For an amplicon experiment we use ca.read_amplicon()
First parameter is the location+name of the biom table file (can be hdf5/json/txt biom table - see here for details)
Second (optional) parameter is the sample mapping file location+name. The first column should be the sample id (identical to the sample ids in the biom table). The rest of the columns are information fields about each sample.
normalize=XXX : tells calour to rescale each sample to XXX reads (by dividing each feature frequency by the total number of reads in the sample and multiplying by XXX). Alternatively, can use normalize=None to skip normalization (i.e. in the case the biom table is already rarified)
min_reads=XXX : throw away samples with less than min_reads total (before normalization). Useful to get rid of samples with small number of reads. Can use min_reads=None to keep all samples.
We will use the data from:
Giloteaux, L., Goodrich, J.K., Walters, W.A., Levine, S.M., Ley, R.E. and Hanson, M.R., 2016.
Reduced diversity and altered composition of the gut microbiome in individuals with myalgic encephalomyelitis/chronic fatigue syndrome.
Microbiome, 4(1), p.30.
End of explanation
dat=dat.filter_abundance(10)
Explanation: Process the data
Get rid of the features (bacteria) with small amount of reads
We throw away all features with total reads (over all samples) < 10 (after each sample was normalized to 10k reads/sample). So a bacteria present (with 1 read) in 10 samples will be kept, as well as a bacteria present in only one sample, with 10 reads in this sample.
Note alternatively we could filter based on mean reads/sample or fraction of samples where the feature is present. Each method filters away slightly different bacteria. See filtering notebook for details on the filtering functions.
End of explanation
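To see how many features survive the filter, the experiment summary can be printed again, just as was done after loading:
print(dat)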
datc=dat.cluster_features()
Explanation: Cluster (reorder) the features so similarly behaving bacteria are close to each other
Features are clustered (hierarchical clustering) based on euclidean distance between features (over all samples), after normalizing each feature to mean 0 and std 1. For more details and examples, see the sorting notebook or the cluster_features documentation
Note that if we have a lot of features, clustering is slow, so it is recommended to first filter away the non-interesting features.
End of explanation
datc=datc.sort_samples('Physical_functioning')
datc=datc.sort_samples('Subject')
Explanation: Sort the samples according to physical functioning and Disease state
Note that order within each group of similar value is maintained. We first sort by physical functioning, then sort by the disease state. So within each disease state, samples will still be sorted by physical functioning.
End of explanation
datc.plot(sample_field='Subject', gui='jupyter')
Explanation: Plotting the data
Columns (x-axis) are the samples, rows (y-axis) are the features. We will show on the x-axis the host-individual field of each sample.
we will use the jupyter notebook GUI so we will see the interactive plot in the notebook. Alternatively we could use the qt5 GUI to see the plot in a separate standalone window.
A few cool things we can do with the interactive plot:
Click with the mouse on the heatmap to see details about the feature/sample selected (including information from dbBact).
use SHIFT+UP or SHIFT+DOWN to zoom in/out on the features
use UP/DOWN to scroll up/down on the features
use SHIFT+RIGHT or SHIFT+LEFT to zoom in/out on the samples
use RIGHT/LEFT to scroll left/right on the samples
See here for more details
End of explanation
datc=datc.sort_samples('Sex')
datc=datc.sort_samples('Subject')
datc.plot(sample_field='Subject', gui='jupyter',barx_fields=['Sex'])
Explanation: Adding a field to the top bar
Now let's add the values of the "Sex" field into the xbar on top
First we'll also sort by sex, so values will be continuous (note we then sort by the disease state to get the two groups separated).
End of explanation
dd=datc.diff_abundance(field='Subject',val1='Control',val2='Patient', random_seed=2018)
Explanation: Differential abundance testing
Let's look for bacteria separating sick from healthy
We ask it to find all bacteria significantly different between samples with 'Control' and 'Patient' in the 'Subject' field.
By default calour uses the mean of the ranks of each feature (over all samples), with dsFDR multiple hypothesis correction.
For more information, see notebook and function doc
End of explanation
dd.plot(sample_field='Subject', gui='jupyter')
Explanation: Plotting the differentially abundant features
Let's plot to see the behavior of these bacteria.
The output of diff_abundance is an Experiment with only the significant bacteria, which are sorted by the effect size. On the bottom is the bacteria with the largest effect size (higher in Control compared to Patient).
End of explanation
ax, enriched=dd.plot_diff_abundance_enrichment(term_type='combined',ignore_exp=[12])
Explanation: dbBact term enrichment
We can ask what is special about the bacteria significantly higher in the Control vs. the Patient group and vice versa.
We supply the parameter ignore_exp=[12] to ignore annotations regarding this experiment (expid=12) since it is already in the dbBact database.
Note since we need to get the per-feature annotations from dbBact, we need a live internet connection to run this command.
End of explanation
enriched.feature_metadata
Explanation: The enriched terms are in a calour experiment class (terms are features, bacteria are samples), so we can see the
list of enriched terms with the p-value (pval) and effect size (odif)
End of explanation |
6,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
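For illustration only, filling in this ENUM would look something like the commented line below; the chosen value is a hypothetical example taken from the list of valid choices above, not a statement about this model:
# Example (hypothetical): DOC.set_value("Specific timestepping (operator splitting)")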
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three-dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two-dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
6,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: If the conflict is active print a statement
Step2: If the conflict is active print a statement, if not, print a different statement
Step3: If the conflict is active print a statement, if not, print a different statement, if unknown, state a third statement. | Python Code:
conflict_active = 1
Explanation: Title: if and if else
Slug: if_and_if_else_statements
Summary: if and if else
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Create a variable with the status of the conflict.
1 if the conflict is active
0 if the conflict is not active
unknown if the status of the conflict is unknwon
End of explanation
if conflict_active == 1:
print('The conflict is active.')
Explanation: If the conflict is active print a statement
End of explanation
if conflict_active == 1:
print('The conflict is active.')
else:
print('The conflict is not active.')
Explanation: If the conflict is active print a statement, if not, print a different statement
End of explanation
if conflict_active == 1:
print('The conflict is active.')
elif conflict_active == 'unknown':
print('The status of the conflict is unknown')
else:
print('The conflict is not active.')
Explanation: If the conflict is active print a statement, if not, print a different statement, if unknown, state a third statement.
End of explanation |
6,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 3
Imports
Step1: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
Step7: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 3
Imports
End of explanation
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
Explanation: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t)
$$
In this equation:
$a$ governs the strength of the damping.
$b$ governs the strength of the driving force.
$\omega_0$ is the angular frequency of the driving force.
When $a=0$ and $b=0$, the energy/mass is conserved:
$$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$
Basic setup
Here are the basic parameters we are going to use for this exercise:
End of explanation
def derivs(y, t, a, b, omega0):
Compute the derivatives of the damped, driven pendulum.
Parameters
----------
y : ndarray
The solution vector at the current time t[i]: [theta[i],omega[i]].
t : float
The current time t[i].
a, b, omega0: float
The parameters in the differential equation.
Returns
-------
dy : ndarray
The vector of derivatives at t[i]: [dtheta[i],domega[i]].
theta=y[0]
omega=y[1]
dtheta=omega
domega=-g/l*np.sin(theta)-a*omega-b*np.sin(omega0*t)
dy=dtheta,domega
return dy
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
Compute the energy for the state array y.
The state array y can have two forms:
1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
2. It could be an ndim=2 array where each row is the [theta,omega] at a single
time.
Parameters
----------
y : ndarray, list, tuple
A solution vector
Returns
-------
E/m : float (ndim=1) or ndarray (ndim=2)
The energy per mass.
if np.ndim(y)==1:
theta=y[0]
omega=y[1]
if np.ndim(y)==2:
theta=y[:,0]
omega=y[:,1]
energy=g*l*(1-np.cos(theta))+.5*l**2*omega**2
return(energy)
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
End of explanation
def SHM(a,b,omega0,ic):
ysolns=odeint(derivs,ic,t,args=(a,b,omega0),atol=1e-3,rtol=1e-2)
return ysolns
y=SHM(0,0,0,[0,0])
e=energy(y)
plt.plot(t,e)
theta=y[:,0]
omega=y[:,1]
plt.figure(figsize=(11,8))
plt.plot(t,theta,label='theta(t)',color='r')
plt.plot(t,omega,label='omega(t)',color='g')
plt.title("Omega and Theta vs T")
plt.xlabel('t')
plt.ylabel('y')
plt.legend()
plt.grid(False)
assert True # leave this to grade the two plots and their tuning of atol, rtol.
Explanation: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
End of explanation
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
Integrate the damped, driven pendulum and make a phase plot of the solution.
ic=[(-np.pi+0.1),0]
ysolns=odeint(derivs,ic,t,args=(a,b,omega0),atol=1e-5,rtol=1e-4)
plt.figure(figsize=(11,8))
plt.plot(ysolns[:,0],ysolns[:,1],color="r")
plt.xlim(-2*np.pi,2*np.pi)
plt.ylim(-10,10)
plt.title('Theta(t) vs Omega(t)')
plt.ylabel('Omega(t)')
plt.xlabel('Theta(t)')
plt.grid(False)
Explanation: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
End of explanation
plot_pendulum(0.5, 0.0, 0.0)
Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
End of explanation
interact(plot_pendulum,a=(0.0,1.0,0.1),b=(0.0,10.0,0.1),omega0=(0.0,10.0,0.1))
Explanation: Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
End of explanation |
6,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Leverage
Make sure to watch the video and slides for this lecture for the full explanation!
$ Leverage Ratio = \frac{Debt + Capital Base}{Capital Base}$
Leverage from Algorithm
Make sure to watch the video for this! Basically run this and grab your own backtestid as shown in the video. More info
Step1: Backtest Info
Step2: High Leverage Example
You can actually specify to borrow on margin (NOT RECOMMENDED)
Step3: Set Hard Limit on Leverage
http | Python Code:
def initialize(context):
context.amzn = sid(16841)
context.ibm = sid(3766)
schedule_function(rebalance,
date_rules.every_day(),
time_rules.market_open())
schedule_function(record_vars,
date_rules.every_day(),
time_rules.market_close())
def rebalance(context,data):
order_target_percent(context.amzn, 0.5)
order_target_percent(context.ibm, -0.5)
def record_vars(context,data):
record(amzn_close=data.current(context.amzn, 'close'))
record(ibm_close=data.current(context.ibm, 'close'))
record(Leverage = context.account.leverage)
record(Exposure = context.account.net_leverage)
Explanation: Leverage
Make sure to watch the video and slides for this lecture for the full explanation!
$ Leverage Ratio = \frac{Debt + Capital Base}{Capital Base}$
Leverage from Algorithm
Make sure to watch the video for this! Basically run this and grab your own backtestid as shown in the video. More info:
The get_backtest function provides programmatic access to the results of backtests run on the Quantopian platform. It takes a single parameter, the ID of a backtest for which results are desired.
You can find the ID of a backtest in the URL of its full results page, which will be of the form:
https://www.quantopian.com/algorithms/<algorithm_id>/<backtest_id>.
You are only entitled to view the backtests that either:
1) you have created
2) you are a collaborator on
End of explanation
bt = get_backtest('5986b969dbab994fa4264696')
bt.algo_id
bt.recorded_vars
bt.recorded_vars['Leverage'].plot()
bt.recorded_vars['Exposure'].plot()
Explanation: Backtest Info
End of explanation
def initialize(context):
context.amzn = sid(16841)
context.ibm = sid(3766)
schedule_function(rebalance,
date_rules.every_day(),
time_rules.market_open())
schedule_function(record_vars,
date_rules.every_day(),
time_rules.market_close())
def rebalance(context,data):
order_target_percent(context.ibm, -2.0)
order_target_percent(context.amzn, 2.0)
def record_vars(context,data):
record(amzn_close=data.current(context.amzn, 'close'))
record(ibm_close=data.current(context.ibm, 'close'))
record(Leverage = context.account.leverage)
record(Exposure = context.account.net_leverage)
bt = get_backtest('5986bd68ceda5554428a005b')
bt.recorded_vars['Leverage'].plot()
Explanation: High Leverage Example
You can actually specify to borrow on margin (NOT RECOMMENDED)
End of explanation
def initialize(context):
context.amzn = sid(16841)
context.ibm = sid(3766)
set_max_leverage(1.03)
schedule_function(rebalance,
date_rules.every_day(),
time_rules.market_open())
schedule_function(record_vars,
date_rules.every_day(),
time_rules.market_close())
def rebalance(context,data):
order_target_percent(context.ibm, -0.5)
order_target_percent(context.amzn, 0.5)
def record_vars(context,data):
record(amzn_close=data.current(context.amzn,'close'))
record(ibm_close=data.current(context.ibm,'close'))
record(Leverage = context.account.leverage)
record(Exposure = context.account.net_leverage)
Explanation: Set Hard Limit on Leverage
http://www.zipline.io/appendix.html?highlight=leverage#zipline.api.set_max_leverage
End of explanation |
6,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DV360 Bulk Targeting Editor
Allows bulk targeting DV360 through Sheets and BigQuery.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter DV360 Bulk Targeting Editor Recipe Parameters
Select Load, click Save + Run, a sheet called DV Targeter will be created.
In the Partners sheet tab, fill in Filter column then select Load, click Save + Run.
In the Advertisers sheet tab, fill in Filter column. then select Load, click Save + Run.
Check the First And Third Party option to load audiences, which may be slow. If not loaded, user will enter audience ids into the sheet manually.
On the Line Items sheet tab, the Filter is used only to limit drop down choices in the rest of the tool.
Optionally edit or filter the Targeting Options or Inventory Sources sheets to limit choices.
Make targeting updates, fill in changes on all tabs with colored fields (RED FIELDS ARE NOT IMPLEMENTED, IGNORE).
Select Preview, click Save + Run then check the Preview tabs.
Select Update, click Save + Run then check the Success and Error tabs.
Load and Update can be run multiple times.
If an update fails, all parts of the update failed, break it up into multiple updates.
To refresh the Partner, Advertiser, or Line Item list, remove the filters and run load.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute DV360 Bulk Targeting Editor
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: DV360 Bulk Targeting Editor
Allows bulk targeting DV360 through Sheets and BigQuery.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_dv':'user', # Credentials used for dv.
'auth_sheet':'user', # Credentials used for sheet.
'auth_bigquery':'service', # Credentials used for bigquery.
'recipe_name':'', # Name of Google Sheet to create.
'recipe_slug':'', # Name of Google BigQuery dataset to create.
'command':'Load', # Action to take.
'first_and_third':False, # Load first and third party data (may be slow). If not selected, enter audience identifiers into sheet manually.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter DV360 Bulk Targeting Editor Recipe Parameters
Select Load, click Save + Run, a sheet called DV Targeter will be created.
In the Partners sheet tab, fill in Filter column then select Load, click Save + Run.
In the Advertisers sheet tab, fill in Filter column. then select Load, click Save + Run.
Check the First And Third Party option to load audiences, which may be slow. If not loaded, user will enter audience ids into the sheet manually.
On the Line Items sheet tab, the Filter is used only to limit drop down choices in the rest of the tool.
Optionally edit or filter the Targeting Options or Inventory Sources sheets to limit choices.
Make targeting updates, fill in changes on all tabs with colored fields (RED FIELDS ARE NOT IMPLEMENTED, IGNORE).
Select Preview, click Save + Run then check the Preview tabs.
Select Update, click Save + Run then check the Success and Error tabs.
Load and Update can be run multiple times.
If an update fails, all parts of the update failed, break it up into multiple updates.
To refresh the Partner, Advertiser, or Line Item list, remove the filters and run load.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'__comment__':'Ensure dataset exists.',
'auth':{'field':{'name':'auth_bigquery','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':2,'default':'','description':'Name of Google BigQuery dataset to create.'}}
}
},
{
'drive':{
'__comment__':'Copy the default template to sheet with the recipe name',
'auth':{'field':{'name':'auth_sheet','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'copy':{
'source':'https://docs.google.com/spreadsheets/d/1ARkIvh0D-gltZeiwniUonMNrm0Mi1s2meZ9FUjutXOE/',
'destination':{'field':{'name':'recipe_name','suffix':' DV Targeter','kind':'string','order':3,'default':'','description':'Name of Google Sheet to create.'}}
}
}
},
{
'dv_targeter':{
'__comment':'Depending on users choice, execute a different part of the solution.',
'auth_dv':{'field':{'name':'auth_dv','kind':'authentication','order':1,'default':'user','description':'Credentials used for dv.'}},
'auth_sheets':{'field':{'name':'auth_sheet','kind':'authentication','order':2,'default':'user','description':'Credentials used for sheet.'}},
'auth_bigquery':{'field':{'name':'auth_bigquery','kind':'authentication','order':3,'default':'service','description':'Credentials used for bigquery.'}},
'sheet':{'field':{'name':'recipe_name','suffix':' DV Targeter','kind':'string','order':4,'default':'','description':'Name of Google Sheet to create.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':5,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'command':{'field':{'name':'command','kind':'choice','choices':['Clear','Load','Preview','Update'],'order':6,'default':'Load','description':'Action to take.'}},
'first_and_third':{'field':{'name':'first_and_third','kind':'boolean','order':6,'default':False,'description':'Load first and third party data (may be slow). If not selected, enter audience identifiers into sheet manually.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute DV360 Bulk Targeting Editor
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
6,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross-validation for parameter tuning, model selection, and feature selection
From the video series
Step1: Question
Step2: Dataset contains 25 observations (numbered 0 through 24)
5-fold cross-validation, thus it runs for 5 iterations
For each iteration, every observation is either in the training set or the testing set, but not both
Every observation is in the testing set exactly once
Comparing cross-validation to train/test split
Advantages of cross-validation
Step3: Cross-validation example
Step4: Cross-validation example
Step5: Improvements to cross-validation
Repeated cross-validation
Repeat cross-validation multiple times (with different random splits of the data) and average the results
More reliable estimate of out-of-sample performance by reducing the variance associated with a single trial of cross-validation
Creating a hold-out set
"Hold out" a portion of the data before beginning the model building process
Locate the best model using cross-validation on the remaining data, and test it using the hold-out set
More reliable estimate of out-of-sample performance since hold-out set is truly out-of-sample
Feature engineering and selection within cross-validation iterations
Normally, feature engineering and selection occurs before cross-validation
Instead, perform all feature engineering and selection within each cross-validation iteration
More reliable estimate of out-of-sample performance since it better mimics the application of the model to out-of-sample data
Resources
scikit-learn documentation | Python Code:
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# read in the iris data
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
# use train/test split with different random_state values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
# check classification accuracy of KNN with K=5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
Explanation: Cross-validation for parameter tuning, model selection, and feature selection
From the video series: Introduction to machine learning with scikit-learn
Agenda
What is the drawback of using the train/test split procedure for model evaluation?
How does K-fold cross-validation overcome this limitation?
How can cross-validation be used for selecting tuning parameters, choosing between models, and selecting features?
What are some possible improvements to cross-validation?
Review of model evaluation procedures
Motivation: Need a way to choose between machine learning models
Goal is to estimate likely performance of a model on out-of-sample data
Initial idea: Train and test on the same data
But, maximizing training accuracy rewards overly complex models which overfit the training data
Alternative idea: Train/test split
Split the dataset into two pieces, so that the model can be trained and tested on different data
Testing accuracy is a better estimate than training accuracy of out-of-sample performance
But, it provides a high variance estimate since changing which observations happen to be in the testing set can significantly change testing accuracy
End of explanation
# simulate splitting a dataset of 25 observations into 5 folds
from sklearn.cross_validation import KFold
kf = KFold(25, n_folds=5, shuffle=False)
# print the contents of each training and testing set
print('{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations'))
for iteration, data in enumerate(kf, start=1):
print('{:^9} {} {:^25}'.format(iteration, data[0], data[1]))
Explanation: Question: What if we created a bunch of train/test splits, calculated the testing accuracy for each, and averaged the results together?
Answer: That's the essence of cross-validation!
Steps for K-fold cross-validation
Split the dataset into K equal partitions (or "folds").
Use fold 1 as the testing set and the union of the other folds as the training set.
Calculate testing accuracy.
Repeat steps 2 and 3 K times, using a different fold as the testing set each time.
Use the average testing accuracy as the estimate of out-of-sample accuracy (a rough manual sketch of these steps follows below).
Diagram of 5-fold cross-validation:
End of explanation
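For intuition, the five steps above can also be carried out by hand. The following is only a rough sketch (it reuses X, y, knn, and metrics from the earlier cells; helper names such as kf_manual are illustrative), and the cross_val_score function used in the next cell automates this kind of loop.
# a rough manual version of the five K-fold steps described above
kf_manual = KFold(len(X), n_folds=5, shuffle=True, random_state=1)
fold_accuracies = []
for train_index, test_index in kf_manual:
    # train on K-1 folds, test on the held-out fold
    knn.fit(X[train_index], y[train_index])
    fold_pred = knn.predict(X[test_index])
    fold_accuracies.append(metrics.accuracy_score(y[test_index], fold_pred))
# average testing accuracy as the estimate of out-of-sample accuracy
print(sum(fold_accuracies) / len(fold_accuracies))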
from sklearn.cross_validation import cross_val_score
# 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter)
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores)
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
# search for an optimal value of K for KNN
k_range = list(range(1, 31))
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
print(k_scores)
import matplotlib.pyplot as plt
%matplotlib inline
# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
Explanation: Dataset contains 25 observations (numbered 0 through 24)
5-fold cross-validation, thus it runs for 5 iterations
For each iteration, every observation is either in the training set or the testing set, but not both
Every observation is in the testing set exactly once
Comparing cross-validation to train/test split
Advantages of cross-validation:
More accurate estimate of out-of-sample accuracy
More "efficient" use of data (every observation is used for both training and testing)
Advantages of train/test split:
Runs K times faster than K-fold cross-validation
Simpler to examine the detailed results of the testing process
Cross-validation recommendations
K can be any number, but K=10 is generally recommended
For classification problems, stratified sampling is recommended for creating the folds (see the sketch after this list)
Each response class should be represented with equal proportions in each of the K folds
scikit-learn's cross_val_score function does this by default
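As a hedged illustration of the stratified folds mentioned above (using the same older cross_validation API as the KFold example; skf is just an illustrative name), the folds can also be built explicitly:
# stratified folds preserve the class proportions of y in each fold
import numpy as np
from sklearn.cross_validation import StratifiedKFold
skf = StratifiedKFold(y, n_folds=5)
for train_index, test_index in skf:
    # each testing fold should contain the three iris classes in equal proportions
    print(np.bincount(y[test_index]))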
Cross-validation example: parameter tuning
Goal: Select the best tuning parameters (aka "hyperparameters") for KNN on the iris dataset
End of explanation
# 10-fold cross-validation with the best KNN model
knn = KNeighborsClassifier(n_neighbors=20)
print(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())
# 10-fold cross-validation with logistic regression
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
print(cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())
Explanation: Cross-validation example: model selection
Goal: Compare the best KNN model with logistic regression on the iris dataset
End of explanation
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
# read in the advertising dataset
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# create a Python list of three feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the DataFrame (X)
X = data[feature_cols]
# select the Sales column as the response (y)
y = data.Sales
# 10-fold cross-validation with all three features
lm = LinearRegression()
scores = cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')
print(scores)
# fix the sign of MSE scores
mse_scores = -scores
print(mse_scores)
# convert from MSE to RMSE
rmse_scores = np.sqrt(mse_scores)
print(rmse_scores)
# calculate the average RMSE
print(rmse_scores.mean())
# 10-fold cross-validation with two features (excluding Newspaper)
feature_cols = ['TV', 'Radio']
X = data[feature_cols]
print(np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='mean_squared_error')).mean())
Explanation: Cross-validation example: feature selection
Goal: Select whether the Newspaper feature should be included in the linear regression model on the advertising dataset
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Improvements to cross-validation
Repeated cross-validation
Repeat cross-validation multiple times (with different random splits of the data) and average the results (a rough sketch follows this list)
More reliable estimate of out-of-sample performance by reducing the variance associated with a single trial of cross-validation
Creating a hold-out set
"Hold out" a portion of the data before beginning the model building process
Locate the best model using cross-validation on the remaining data, and test it using the hold-out set
More reliable estimate of out-of-sample performance since hold-out set is truly out-of-sample
Feature engineering and selection within cross-validation iterations
Normally, feature engineering and selection occurs before cross-validation
Instead, perform all feature engineering and selection within each cross-validation iteration
More reliable estimate of out-of-sample performance since it better mimics the application of the model to out-of-sample data
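A rough sketch of repeated cross-validation, reusing lm, X, and y from the advertising example above (the names shuffled_folds and repeated_rmse are illustrative): rerun cross_val_score with differently shuffled folds and average the results.
# repeated 10-fold cross-validation: average RMSE over several shuffled fold assignments
repeated_rmse = []
for seed in range(10):
    shuffled_folds = KFold(len(X), n_folds=10, shuffle=True, random_state=seed)
    mse_scores = -cross_val_score(lm, X, y, cv=shuffled_folds, scoring='mean_squared_error')
    repeated_rmse.append(np.sqrt(mse_scores).mean())
print(np.mean(repeated_rmse))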
Resources
scikit-learn documentation: Cross-validation, Model evaluation
scikit-learn issue on GitHub: MSE is negative when returned by cross_val_score
Section 5.1 of An Introduction to Statistical Learning (11 pages) and related videos: K-fold and leave-one-out cross-validation (14 minutes), Cross-validation the right and wrong ways (10 minutes)
Scott Fortmann-Roe: Accurately Measuring Model Prediction Error
Machine Learning Mastery: An Introduction to Feature Selection
Harvard CS109: Cross-Validation: The Right and Wrong Way
Journal of Cheminformatics: Cross-validation pitfalls when selecting and assessing regression and classification models
Comments or Questions?
Email: kevin@dataschool.io
Website: http://dataschool.io
Twitter: @justmarkham
End of explanation |
6,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
In nature (or in a real-life situation), it is unusual to observe a variable, let's say $y$, and its exact mathematical relationship with another variable $x$. Suppose now that we would like to model the (linear) relationship between a dependent variable $y$ and the explanatory variable $x$. Usually, a first model would take this form
\begin{equation}
y = \beta_{0} + \beta_{1}x + \varepsilon \ \ ,
\end{equation}
where $\varepsilon$ is a random variable that we CANNOT observe and which adds noise to the linear relationship between the dependent and independent variables. Although the noise $\varepsilon$ is unobservable, we can still estimate the real model. The relationship between the dependent and independent variables will now be estimated by
\begin{equation}
\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 \ \ .
\end{equation}
Q.This is great, but how can we estimate $\hat{y}$ ?
A.By estimating $\hat{\beta}$.
Q.How can we estimate $\hat{\beta}$ ?
A.Good question! But first, let's create some artificial data
Step1: The red curve is defined by the function
\begin{equation}
\ f(x) = e^{\ 3 x} \ \ ,
\end{equation}
and the blue dots are actually the dependent variable defined by
\begin{equation}
y = e^{\ 3 x} + \varepsilon \ \ \text{where} \ \ \varepsilon \sim \mathcal{N}(0,50^2) \ \ .
\end{equation}
Here are the available Line2D properties (See matplotlib tutorial for more pyplot options here). For the list of generating random numbers see here.
We can check the histogram of y (for histogram and other matplot pyplots see here).
Step2: Let's fit a simple linear model on $y$ and $x$.
First of all, we need to import the library scikit-learn. There are a lot of algorithms and each of them is well explained. Stop wasting your time with cat videos, be a data scientist. We won't talk about how we can theoretically fit the model, but you may find the information here.
Step3: Well, that's not really good... We can do better!
Exercise
Replace $x$ with $x^2$ and regress $y$ as defined earlier on $x^2$.
The 'new' fitted model will be
\begin{equation}
\hat{y}=\hat{\beta}_0+ \hat{\beta}_1 x^2 \ .
\end{equation}
Step4: Question
Which one do you prefer?
Classification
In the last example, the dependent variable was continuous. Now suppose that the dependent variable $y$ is binary. This is the crossroads between regression and classification. First, let's create the binary outcome.
Step5: Linear regression
Now that the new dependent variable $z$ takes binary values (-1 or 1), we could still think of it as a real-valued variable on which we can do standard linear regression! Thus, the gaussian noise model on $z$ doesn't make sense anymore, but we can still do least-squares approximation to estimate the parameters of a linear decision boundary.
Step6: We create 2 functions. The first one, called plotbc, plots the predictions made (and their accuracy) by the linear regression. The second one calculates the classification rate.
Step7: We now call the functions previously defined.
Step8: Let's compute the classification rate.
Step9: But maybe we could get more information with the confusion matrix!
Step10: Logistic regression
We will now perform classification using logistic regression. The model behind logistic regression is
\begin{equation}
\mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)}) = \frac{1}{1+e^{-\beta_{0}-\beta_{1}x_{1}^{(i)}}}
\end{equation}
where $\boldsymbol{\beta}$ is estimated with maximum likelihood
\begin{equation}
\widehat{\boldsymbol{\beta}} = \arg\max_{\boldsymbol{\beta}} \prod_{i=1}^{n} \mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)})^{y^{(i)}} \big(1-\mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)})\big)^{1-y^{(i)}} \ \ .
\end{equation}
Step11: The classification rate seems slightly better...
Cross-validation and estimation of generalization error
In a classification problem, where we try to predict a discrete outcome given some observed information, the classification error rate calculated on the dataset which we used to fit (or train) the model should be lower than the error rate calculated on an independent, external dataset. Thus, we cannot rely on the classification error rate based on the dataset on which we fitted the model because it may be overoptimistic. Then, in machine learning, once we have trained the model on a specific dataset (the TRAIN set), we wish to test its reactions with new observations. This specific set of new observations is called the VALIDATION set. One lazy way to get a new set of observations is to split the original dataset in two parts
Step12: This being said, we can now calculate the prediction error rate on the train and the test sets.
Question
a) Which algorithm should present the best performances?
b) Can we rely on these results? Why?
Step13: We created the train and validation sets randomly. Hence, considering that the original dataset has a small number of observations, 200, the division of the data may favor either the train or the validation dataset.
One way to counteract the randomness of the data division is to simply repeat the above command. Here are the main steps
Step14: Support Vector Machines (SVM)
We just learned 2 linear classifiers
Step15: The maximal margin hyperplane is shown as a solid black line. The margin is the distance from the solid line to either of the dashed lines. The two blue points and the purple point that lie on the dashed lines are the support vectors. The blue and the purple grid indicates the decision rule made by a classifier based on this separating hyperplane.
Step16: Some motivations behind the SVM method have their roots in the linearly separable concept. Sometimes, the data is not linearly separable. Thus we can't use a maximal margin classifier.
Step17: A good strategy could be to consider a classifier based on a hyperplane that does not perfectly separate the two classes. Thus, it could be worthwhile to misclassify some observations in order to do a better job in classifying the remaining observations. We call this technique the support vector classifier (with soft margin).
Step18: Sometimes, a good margin doesn't even exist and the support vector classifier is useless.
Step19: In this specific case, a smart strategy would be to enlarge the feature space with a non-linear transformation. Then, find a good margin.
Step20: The new margin (in $\mathbb{R}^2$) corresponds to the following margins in $\mathbb{R}$.
Step21: Support Vector Machines (SVM) with RBF kernel
We will now use the Support Vector Machines (SVM) method to perform classification. But first, let's create some new artificial data. The explanatory variable's distribution is a Gaussian mixture where
\begin{equation}
X_{1} \sim \mathcal{N}_{2}(\mu = (1,1) , \sigma^{2} = I_{2}) \\
X_{2} \sim \mathcal{N}_{2}(\mu = (3,3) , \sigma^{2} = I_{2}) \ \ .
\end{equation}
We finally plot the data. We can observe that each distribution is associated with a specific color.
Step22: for the list of colormap options see colormap help and to learn more about SVM and related options check svm tutorial and support vector classification (svc) examples.
Back to our original problem
Finally, we can compare the logistic regression's and the SVM's performances on the first dataset that we created earlier. | Python Code:
import numpy as np
n=200
x_tr = np.linspace(0.0, 2.0, n)
y_tr = np.exp(3*x_tr)
import random
mu, sigma = 0,50
random.seed(1)
y = y_tr + np.random.normal(loc=mu, scale= sigma, size=len(x_tr))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(x_tr,y,".",mew=3);
plt.plot(x_tr, y_tr,"--r",lw=3);
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
Explanation: Linear Regression
In nature (or in a real-life situation), it is unusual to observe a variable, let's say $y$, and its exact mathematical relationship with another variable $x$. Suppose now that we would like to model the (linear) relationship between a dependent variable $y$ and the explanatory variable $x$. Usually, a first model would take this form
\begin{equation}
y = \beta_{0} + \beta_{1}x + \varepsilon \ \ ,
\end{equation}
where $\varepsilon$ is a random variable that we CANNOT observe and which adds noise to the linear relationship between the dependent and independent variables. Although the noise $\varepsilon$ is unobservable, we can still estimate the real model. The relationship between the dependent and independent variables will now be estimated by
\begin{equation}
\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 \ \ .
\end{equation}
Q.This is great, but how can we estimate $\hat{y}$ ?
A.By estimating $\hat{\beta}$.
Q.How can we estimate $\hat{\beta}$ ?
A.Good question! But first, let's create some artificial data :D
The explanatory variable $x$ will vary between 0 and 2. The TRUE relationship between $y$ and $x$ will take this form
\begin{equation}
y = e^{\ 3 x} + \varepsilon \ \ \text{where} \ \ \varepsilon \sim \mathcal{N}(0,50^2) \ \ ,
\end{equation}
where the noise $\varepsilon$ will follow a normal distribution of mean $\mu=0$ and standard deviation $\sigma=50$. Let's produce $n=200$ observations defined by the above equation.
End of explanation
ignored=plt.hist(y,30, color="g")
Explanation: The red curve is defined by the function
\begin{equation}
\ f(x) = e^{\ 3 x} \ \ ,
\end{equation}
and the blue dots are actually the dependent variable defined by
\begin{equation}
y = e^{\ 3 x} + \varepsilon \ \ \text{where} \ \ \varepsilon \sim \mathcal{N}(0,50^2) \ \ .
\end{equation}
Here are the available Line2D properties (See matplotlib tutorial for more pyplot options here). For the list of generating random numbers see here.
We can check the histogram of y (for histogram and other matplot pyplots see here).
End of explanation
import sklearn.linear_model as lm
lr=lm.LinearRegression()
#We can see that the indicated dimensions are different
#In fact, the data in the second expression is reshaped
#This is necessary if we want to use the linear regression command with scikit-learn
#Otherwise, Python sends us an error message
print np.shape(x_tr)
print np.shape(x_tr[:, np.newaxis])
#We regress y on x, then estimate y
lr.fit(x_tr[:, np.newaxis],y)
y_hat=lr.predict(x_tr[:, np.newaxis])
plt.plot(x_tr,y,".",mew=2)
plt.plot(x_tr, y_hat,"-g",lw=4, label='Estimations with linear regression')
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
plt.legend(bbox_to_anchor=(1.8, 1.03))
Explanation: Let's fit a simple linear model on $y$ and $x$.
First of all, we need to import the library scikit-learn. There are a lot of algorithms and each of them is well explained. Stop wasting your time with cat videos, be a data scientist. We won't talk about how we can theoretically fit the model, but you may find the information here.
End of explanation
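As an aside, here is a minimal sketch of what lr.fit computes under the hood: the least-squares coefficients can be obtained directly with numpy (X_design and beta_hat are illustrative names; x_tr and y come from the cells above).
#Closed-form least squares: build a design matrix with an intercept column
X_design = np.column_stack((np.ones(len(x_tr)), x_tr))
#Solve min ||y - X_design*beta||^2
beta_hat, _, _, _ = np.linalg.lstsq(X_design, y)
#Should closely match (lr.intercept_, lr.coef_) from the fit above
print(beta_hat)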
#And then fit the model
lr.fit(x_tr[:, np.newaxis]**2,y)
y_hat2=lr.predict(x_tr[:, np.newaxis]**2)
#Let's check it out
plt.plot(x_tr,y,".",mew=2);
plt.plot(x_tr, y_hat,"-g",lw=4, label='Estimations with linear regression')
plt.plot(x_tr, y_hat2,"-r",lw=4, label='Estimations with linear regression (Quadratic term)');
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
plt.legend(bbox_to_anchor=(2.1, 1.03))
Explanation: Well, that's not really good... We can do better!
Exercise
Replace $x$ with $x^2$ and regress $y$ as defined earlier on $x^2$.
The 'new' fitted model will be
\begin{equation}
\hat{y}=\hat{\beta}_0+ \hat{\beta}_1 x^2 \ .
\end{equation}
End of explanation
index=y>90
z=(1*(y>90)-0.5)*2
#print index, z
#The tilt symbol ~ below means the opposite of the boolean value
plt.figure()
plt.plot(x_tr[index],z[index],".r",mew=3)
plt.plot(x_tr[~index],z[~index],".b",mew=3)
plt.ylim(-1.5,1.5)
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
Explanation: Question
Which one do you prefer?
Classification
In the last example, the dependent variable was continuous. Now suppose that the dependent variable $y$ is binary. This is the crossroads between regression and classification. First, let's create the binary outcome.
End of explanation
lr.fit(x_tr[:, np.newaxis],z)
z_hat=lr.predict(x_tr[:, np.newaxis])
#We define a threshold above which the estimate of z is classified as 1
threshold = 0
z_class= 2*(z_hat>threshold) - 1
Explanation: Linear regression
Now that the new dependent variable $z$ takes binary values (-1 or 1), we could still think of it as a real-valued variable on which we can do standard linear regression! Thus, the gaussian noise model on $z$ doesn't make sense anymore, but we can still do least-squares approximation to estimate the parameters of a linear decision boundary.
End of explanation
#This function plots the classification results and errors on the training set
def plotbc(x, y, z):
#Plot the classification
plt.plot(x[z==1],z[z==1],".r", markersize=3, label='True positive')
plt.plot(x[z==-1],z[z==-1],".b", markersize=3, label='True negative')
#Plot the classification errors
plt.plot(x[(z==-1) & (y==1)],z[(z==-1) & (y==1)],"^y", markersize=10, label='False negative')
plt.plot(x[(z==1) & (y==-1)],z[(z==1) & (y==-1)],"^c", markersize=10, label='False positive')
plt.legend(bbox_to_anchor=(1.55, 1.03))
plt.ylim(-1.5,1.5)
#This function simply calculates the classification rate on the training set
def precision(y, z):
print "The classification rate is :"
print np.mean(y==z)
Explanation: We create 2 functions. The first one, called plotbc, plots the predictions made (and their accuracy) by the linear regression. The second one calculates the classification rate.
End of explanation
plotbc(x_tr, z, z_class)
plt.plot(x_tr,z_hat,"-g",lw=1, label='Predictions by the linear regression model');
plt.legend(bbox_to_anchor=(2, 1.03))
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
Explanation: We now call the functions previously defined.
End of explanation
precision(z_class, z)
Explanation: Let's compute the classification rate.
End of explanation
from sklearn.metrics import confusion_matrix
confusion_matrix(z,z_class)/float(len(z))
Explanation: But maybe we could get more information with the confusion matrix!
End of explanation
from sklearn import linear_model, datasets
#The C parameter (Strictly positive) controls the regularization strength
#Smaller values specify stronger regularization
logreg = linear_model.LogisticRegression(C=1e5)
logreg.fit(x_tr[:, np.newaxis], z)
z_hat=logreg.predict(x_tr[:, np.newaxis])
plotbc(x_tr, z, z_hat)
plt.plot(x_tr,z_hat,"-g",lw=1, label='Predictions by the logistic regression');
plt.legend(bbox_to_anchor=(2.3, 1.03))
confusion_matrix(z,z_hat)/float(len(z))
Explanation: Logistic regression
We will now perform classification using logistic regression. The model behind logistic regression is
\begin{equation}
\mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)}) = \frac{1}{1+e^{-\beta_{0}-\beta_{1}x_{1}^{(i)}}}
\end{equation}
where $\boldsymbol{\beta}$ is estimated with maximum likelihood
\begin{equation}
\widehat{\boldsymbol{\beta}} = \arg\max_{\boldsymbol{\beta}} \prod_{i=1}^{n} \mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)})^{y^{(i)}} \big(1-\mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)})\big)^{1-y^{(i)}} \ \ .
\end{equation}
End of explanation
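To connect the fitted coefficients to the formula above, here is a small hedged check (it reuses logreg and x_tr from the previous cell; p_manual and p_sklearn are illustrative names): the manually computed sigmoid should agree with predict_proba.
#Recover P(y=1 | x) from the fitted intercept and slope
b0 = logreg.intercept_[0]
b1 = logreg.coef_[0][0]
p_manual = 1.0/(1.0 + np.exp(-b0 - b1*x_tr))
#Column 1 of predict_proba corresponds to the class +1
p_sklearn = logreg.predict_proba(x_tr[:, np.newaxis])[:, 1]
print(np.allclose(p_manual, p_sklearn))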
from sklearn.model_selection import train_test_split
x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=3)
Explanation: The classification rate seems slightly better...
Cross-validation and estimation of generalization error
In a classification problem, where we try to predict a discrete outcome given some observed information, the classification error rate calculated on the dataset which we used to fit (or train) the model should be lower than the error rate calculated on an independent, external dataset. Thus, we cannot rely on the classification error rate based on the dataset on which we fitted the model because it may be overoptimistic. Then, in machine learning, once we have trained the model on a specific dataset (the TRAIN set), we wish to test its reactions with new observations. This specific set of new observations is called the VALIDATION set. One lazy way to get a new set of observations is to split the original dataset in two parts: the training set (composed of 80% of the observations) and the validation set.
Question
What percentage of the original dataset's observations is included in the validation set?
Now, let's just split the original dataset with the train_test_split function.
End of explanation
clf = logreg.fit(x_train[:, np.newaxis], z_train)
#z_hat_train=logreg.predict(x_train[:, np.newaxis])
#z_hat_test=logreg.predict(x_test[:, np.newaxis])
score_train = clf.score(x_train[:, np.newaxis], z_train)
score_valid = clf.score(x_valid[:, np.newaxis], z_valid)
print("The prediction error rate on the train set is : ")
print(score_train)
print("The prediction error rate on the test set is : ")
print(score_valid)
Explanation: This being said, we can now calculate the prediction error rate on the train and the test sets.
Question
a) Which algorithm should present the best performances?
b) Can we rely on these results? Why?
End of explanation
#Number of iterations
n=1000
score_train_vec_log = np.zeros(n)
score_valid_vec_log = np.zeros(n)
#Loop of iterations
for k in np.arange(n):
x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=k)
clf = logreg.fit(x_train[:, np.newaxis], z_train)
score_train_vec_log[k] = clf.score(x_train[:, np.newaxis], z_train)
score_valid_vec_log[k] = clf.score(x_valid[:, np.newaxis], z_valid)
print("The average prediction error rate on the train set is : ")
print(np.mean(score_train_vec_log))
print("The average prediction error rate on the test set is : ")
print(np.mean(score_valid_vec_log))
Explanation: We created the train and validation sets randomly. Hence, considering that the original dataset has a small number of observations (200), the division of the data may favour either the train or the validation dataset.
One way to counteract the randomness of the data division is simply to iterate the above command. Here are the main steps :
1) Repeat, a large number of times, the division of the original dataset into train and validation sets.
2) For each division (or iteration), fit the model and then compute the classification accuracy on the corresponding train and validation sets.
3) Average the scores over all train and validation sets.
End of explanation
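The same repeated-split procedure is packaged in scikit-learn; a minimal sketch (assuming the standard model_selection API) that averages the score over 100 random 80/20 splits:
from sklearn.model_selection import ShuffleSplit, cross_val_score
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
scores = cross_val_score(logreg, x_tr[:, np.newaxis], z, cv=cv)
print(scores.mean())  # average validation accuracy over the 100 splits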
img = plt.imread("../data/hyperplanes.png")
plt.imshow(img)
plt.axis("off")
Explanation: Support Vector Machines (SVM)
We just learned two linear classifiers: univariate regression and logistic regression. Linear classifiers are a well-known family of algorithms where classification of the data is done with a discriminative hyperplane. In the present section, we will talk about the Support Vector Machine (SVM), a widely used (linear) classifier in machine learning. Let's start with some textbook problems...
There are two classes of observations, shown in blue and in purple, each of which has measurements on two variables. Three separating hyperplanes, out of many possible, are shown in black. Which one should we use?
End of explanation
img = plt.imread("../data/maximal.margin.png")
plt.imshow(img)
plt.axis("off")
Explanation: The maximal margin hyperplane is shown as a solid black line. The margin is the distance from the solid line to either of the dashed lines. The two blue points and the purple point that lie on the dashed lines are the support vectors. The blue and the purple grids indicate the decision rule made by a classifier based on this separating hyperplane.
End of explanation
img = plt.imread("../data/non.separable.png")
plt.imshow(img)
plt.axis("off")
Explanation: Some motivations behind the SVM method have their roots in the concept of linear separability. Sometimes, the data is not linearly separable, so we can't use a maximal margin classifier.
End of explanation
img = plt.imread("../data/support.vector.png")
plt.imshow(img)
plt.axis("off")
Explanation: A good strategy could be to consider a classifier based on a hyperplane that does not perfectly separate the two classes. Thus, it could be worthwhile to misclassify some observations in order to do a better job in classifying the remaining observations. We call this technique the support vector classifier (with soft margin).
End of explanation
img = plt.imread("../data/kernel.example.1.png")
plt.imshow(img)
plt.axis("off")
Explanation: Sometimes, good margins don't even exist and support vector classifiers are useless.
End of explanation
img = plt.imread("../data/kernel.example.2.png")
plt.imshow(img)
plt.axis("off")
Explanation: In this specific case, a smart strategy would be to enlarge the feature space with a non-linear transformation. Then, find a good margin.
End of explanation
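A tiny numerical illustration of this idea (a hypothetical toy example, not taken from the figures above): points that cannot be separated on the real line become linearly separable after the explicit feature map x -> (x, x**2).
x_toy = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_toy = np.array([1, 0, 0, 0, 1])            # outer vs inner points: not separable in 1D
phi = np.column_stack([x_toy, x_toy ** 2])   # enlarged feature space R -> R^2
print(phi[y_toy == 1])  # the two outer points lie above the line x2 = 2.5
print(phi[y_toy == 0])  # the three inner points lie below it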
img = plt.imread("../data/kernel.example.3.png")
plt.imshow(img)
plt.axis("off")
Explanation: The new margin (in $\mathbb{R}^2$) corresponds to the following margins in $\mathbb{R}$.
End of explanation
n=100
np.random.seed(0)
X=np.vstack((np.random.multivariate_normal([1,1],[[1,0],[0,1]] ,n), np.random.multivariate_normal([3,3],[[1,0],[0,1]] ,n)))
Y =np.array([0] * n + [1] * n)
index=(Y==0)
plt.scatter(X[index,0], X[index,1], color="r", label='X1 distribution')
plt.scatter(X[~index,0], X[~index,1], color="b", label='X2 distribution')
plt.xlabel('First dimension')
plt.ylabel('Second dimension')
plt.legend(bbox_to_anchor=(1.5, 1.03))
from sklearn import svm
clf = svm.SVC(kernel="rbf", gamma=2 ,C=10).fit(X,Y)
Z=clf.predict(X)
index=(Z==0)
plt.scatter(X[index,0], X[index,1], edgecolors="b")
plt.scatter(X[~index,0], X[~index,1], edgecolors="r")
xx, yy = np.meshgrid(np.linspace(-3, 6, 500), np.linspace(-3, 6, 500))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=1, cmap=plt.cm.seismic)
plt.scatter(X[:, 0], X[:, 1], c=Y, s=2, alpha=0.9, cmap=plt.cm.spectral)
Explanation: Support Vector Machines (SVM) with RBF kernel
We will now use the Support Vector Machines (SVM) method to perform classification. But first, let's create some new artificial data. The explanatory variable's distribution is a Gaussian mixture where
\begin{equation}
X_{1} \sim \mathcal{N}_{2}(\mu = (1,1) , \sigma^{2} = I_{2}) \\
X_{2} \sim \mathcal{N}_{2}(\mu = (3,3) , \sigma^{2} = I_{2}) \ \ .
\end{equation}
We finally plot the data. We can observe that each distribution is associated with a specific color.
End of explanation
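Before running the comparison, a short sketch of what kernel="rbf" computes under the hood: the similarity between two points decays with their squared distance, scaled by gamma (the same gamma=2 used in the SVC call above).
gamma_demo = 2.0                                    # same gamma as the SVC call above
u, v = np.array([1.0, 1.0]), np.array([3.0, 3.0])
print(np.exp(-gamma_demo * np.sum((u - u) ** 2)))   # k(u,u) = 1.0 for identical points
print(np.exp(-gamma_demo * np.sum((u - v) ** 2)))   # k(u,v) = exp(-16), essentially 0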
#Number of iterations
n=1000
score_train_vec_svm = np.zeros(n)
score_valid_vec_svm = np.zeros(n)
#Loop of iterations
for k in np.arange(n):
x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=k)
#Command for the SVM
clf = svm.SVC(kernel='rbf', C=.1, gamma=3.2).fit(x_train[:, np.newaxis], z_train)
score_train_vec_svm[k] = clf.score(x_train[:, np.newaxis], z_train)
score_valid_vec_svm[k] = clf.score(x_valid[:, np.newaxis], z_valid)
print("The SVM's average prediction error rate on the train set is : ")
print(np.mean(score_train_vec_svm))
print("The SVM's average prediction error rate on the test set is : ")
print(np.mean(score_valid_vec_svm))
print("The logistic regression's average prediction error rate on the train set is : ")
print(np.mean(score_train_vec_log))
print("The logistic regression's average prediction error rate on the test set is : ")
print(np.mean(score_valid_vec_log))
Explanation: for the list of colormap options see colormap help and to learn more about SVM and related options check svm tutorial and support vector classification (svc) examples.
Back to our original problem
Finally, we can compare the logistic regression's and the SVM's performances on the first dataset that we created earlier.
End of explanation |
6,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced indexing
Step1: Functionality and API
Indexing a 1D array with a Boolean (mask) array
Supported via get/set_mask_selection() and .vindex[]. Also supported via get/set_orthogonal_selection() and .oindex[].
Step2: Indexing a 1D array with a 1D integer (coordinate) array
Supported via get/set_coordinate_selection() and .vindex[]. Also supported via get/set_orthogonal_selection() and .oindex[].
Step3: Indexing a 1D array with a multi-dimensional integer (coordinate) array
Supported via get/set_coordinate_selection() and .vindex[].
Step4: Slicing a 1D array with step > 1
Slices with step > 1 are supported via get/set_basic_selection(), get/set_orthogonal_selection(), __getitem__ and .oindex[]. Negative steps are not supported.
Step5: Orthogonal (outer) indexing of multi-dimensional arrays
Orthogonal (a.k.a. outer) indexing is supported with either Boolean or integer arrays, in combination with integers and slices. This functionality is provided via the get/set_orthogonal_selection() methods. For convenience, this functionality is also available via the .oindex[] property.
Step6: Coordinate indexing of multi-dimensional arrays
Selecting arbitrary points from a multi-dimensional array by indexing with integer (coordinate) arrays is supported. This functionality is provided via the get/set_coordinate_selection() methods. For convenience, this functionality is also available via the .vindex[] property.
Step7: Mask indexing of multi-dimensional arrays
Selecting arbitrary points from a multi-dimensional array by a Boolean array is supported. This functionality is provided via the get/set_mask_selection() methods. For convenience, this functionality is also available via the .vindex[] property.
Step8: Selecting fields from arrays with a structured dtype
All get/set_selection_...() methods support a fields argument which allows retrieving/replacing data for a specific field or fields. Also h5py-like API is supported where fields can be provided within __getitem__, .oindex[] and .vindex[].
Step9: Note that this API differs from numpy when selecting multiple fields. E.g.
Step10: 1D Benchmarking
Step11: bool dense selection
Step12: Method nonzero is being called internally within numpy to convert bool to int selections, no way to avoid.
Step13: .vindex[] is a bit slower, possibly because internally it converts to a coordinate array first.
int dense selection
Step14: When indices are not sorted, zarr needs to partially sort them so the occur in chunk order, so we only have to visit each chunk once. This sorting dominates the processing time and is unavoidable AFAIK.
bool sparse selection
Step15: int sparse selection
Step16: For sparse selections, processing time is dominated by decompression, so we can't do any better.
sparse bool selection as zarr array
Step17: slice with step
Step18: 2D Benchmarking
Step19: bool orthogonal selection
Step20: int orthogonal selection
Step21: coordinate (point) selection
Step22: Points need to be partially sorted so all points in the same chunk are grouped and processed together. This requires argsort which dominates time.
h5py comparison
N.B., not really fair because using slower compressor, but for interest... | Python Code:
import sys
sys.path.insert(0, '..')
import zarr
import numpy as np
np.random.seed(42)
import cProfile
zarr.__version__
Explanation: Advanced indexing
End of explanation
a = np.arange(10)
za = zarr.array(a, chunks=2)
ix = [False, True, False, True, False, True, False, True, False, True]
# get items
za.vindex[ix]
# get items
za.oindex[ix]
# set items
za.vindex[ix] = a[ix] * 10
za[:]
# set items
za.oindex[ix] = a[ix] * 100
za[:]
# if using .oindex, indexing array can be any array-like, e.g., Zarr array
zix = zarr.array(ix, chunks=2)
za = zarr.array(a, chunks=2)
za.oindex[zix] # will not load all zix into memory
Explanation: Functionality and API
Indexing a 1D array with a Boolean (mask) array
Supported via get/set_mask_selection() and .vindex[]. Also supported via get/set_orthogonal_selection() and .oindex[].
End of explanation
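The explicit method calls named above can be used directly as well; a small sketch equivalent to the .vindex[]/.oindex[] examples:
zb = zarr.array(np.arange(10), chunks=2)
mask = np.array([False, True] * 5)
print(zb.get_mask_selection(mask))   # get the masked elements
zb.set_mask_selection(mask, 99)      # replace the masked elements
print(zb[:])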
a = np.arange(10)
za = zarr.array(a, chunks=2)
ix = [1, 3, 5, 7, 9]
# get items
za.vindex[ix]
# get items
za.oindex[ix]
# set items
za.vindex[ix] = a[ix] * 10
za[:]
# set items
za.oindex[ix] = a[ix] * 100
za[:]
Explanation: Indexing a 1D array with a 1D integer (coordinate) array
Supported via get/set_coordinate_selection() and .vindex[]. Also supported via get/set_orthogonal_selection() and .oindex[].
End of explanation
a = np.arange(10)
za = zarr.array(a, chunks=2)
ix = np.array([[1, 3, 5], [2, 4, 6]])
# get items
za.vindex[ix]
# set items
za.vindex[ix] = a[ix] * 10
za[:]
Explanation: Indexing a 1D array with a multi-dimensional integer (coordinate) array
Supported via get/set_coordinate_selection() and .vindex[].
End of explanation
a = np.arange(10)
za = zarr.array(a, chunks=2)
# get items
za[1::2]
# set items
za.oindex[1::2] = a[1::2] * 10
za[:]
Explanation: Slicing a 1D array with step > 1
Slices with step > 1 are supported via get/set_basic_selection(), get/set_orthogonal_selection(), __getitem__ and .oindex[]. Negative steps are not supported.
End of explanation
a = np.arange(15).reshape(5, 3)
za = zarr.array(a, chunks=(3, 2))
za[:]
# orthogonal indexing with Boolean arrays
ix0 = [False, True, False, True, False]
ix1 = [True, False, True]
za.get_orthogonal_selection((ix0, ix1))
# alternative API
za.oindex[ix0, ix1]
# orthogonal indexing with integer arrays
ix0 = [1, 3]
ix1 = [0, 2]
za.get_orthogonal_selection((ix0, ix1))
# alternative API
za.oindex[ix0, ix1]
# combine with slice
za.oindex[[1, 3], :]
# combine with slice
za.oindex[:, [0, 2]]
# set items via Boolean selection
ix0 = [False, True, False, True, False]
ix1 = [True, False, True]
selection = ix0, ix1
value = 42
za.set_orthogonal_selection(selection, value)
za[:]
# alternative API
za.oindex[ix0, ix1] = 44
za[:]
# set items via integer selection
ix0 = [1, 3]
ix1 = [0, 2]
selection = ix0, ix1
value = 46
za.set_orthogonal_selection(selection, value)
za[:]
# alternative API
za.oindex[ix0, ix1] = 48
za[:]
Explanation: Orthogonal (outer) indexing of multi-dimensional arrays
Orthogonal (a.k.a. outer) indexing is supported with either Boolean or integer arrays, in combination with integers and slices. This functionality is provided via the get/set_orthogonal_selection() methods. For convenience, this functionality is also available via the .oindex[] property.
End of explanation
a = np.arange(15).reshape(5, 3)
za = zarr.array(a, chunks=(3, 2))
za[:]
# get items
ix0 = [1, 3]
ix1 = [0, 2]
za.get_coordinate_selection((ix0, ix1))
# alternative API
za.vindex[ix0, ix1]
# set items
za.set_coordinate_selection((ix0, ix1), 42)
za[:]
# alternative API
za.vindex[ix0, ix1] = 44
za[:]
Explanation: Coordinate indexing of multi-dimensional arrays
Selecting arbitrary points from a multi-dimensional array by indexing with integer (coordinate) arrays is supported. This functionality is provided via the get/set_coordinate_selection() methods. For convenience, this functionality is also available via the .vindex[] property.
End of explanation
a = np.arange(15).reshape(5, 3)
za = zarr.array(a, chunks=(3, 2))
za[:]
ix = np.zeros_like(a, dtype=bool)
ix[1, 0] = True
ix[3, 2] = True
za.get_mask_selection(ix)
za.vindex[ix]
za.set_mask_selection(ix, 42)
za[:]
za.vindex[ix] = 44
za[:]
Explanation: Mask indexing of multi-dimensional arrays
Selecting arbitrary points from a multi-dimensional array by a Boolean array is supported. This functionality is provided via the get/set_mask_selection() methods. For convenience, this functionality is also available via the .vindex[] property.
End of explanation
a = np.array([(b'aaa', 1, 4.2),
(b'bbb', 2, 8.4),
(b'ccc', 3, 12.6)],
dtype=[('foo', 'S3'), ('bar', 'i4'), ('baz', 'f8')])
za = zarr.array(a, chunks=2, fill_value=None)
za[:]
za['foo']
za['foo', 'baz']
za[:2, 'foo']
za[:2, 'foo', 'baz']
za.oindex[[0, 2], 'foo']
za.vindex[[0, 2], 'foo']
za['bar'] = 42
za[:]
za[:2, 'bar'] = 84
za[:]
Explanation: Selecting fields from arrays with a structured dtype
All get/set_selection_...() methods support a fields argument which allows retrieving/replacing data for a specific field or fields. Also h5py-like API is supported where fields can be provided within __getitem__, .oindex[] and .vindex[].
End of explanation
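The same field selections can also be written through the explicit fields argument mentioned above (a sketch; za is the structured array from this section):
za.get_basic_selection(slice(None), fields='foo')
za.get_orthogonal_selection(([0, 2],), fields=['foo', 'baz'])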
a['foo', 'baz']
a[['foo', 'baz']]
za['foo', 'baz']
za[['foo', 'baz']]
Explanation: Note that this API differs from numpy when selecting multiple fields. E.g.:
End of explanation
c = np.arange(100000000)
c.nbytes
%time zc = zarr.array(c)
zc.info
%timeit c.copy()
%timeit zc[:]
Explanation: 1D Benchmarking
End of explanation
# relatively dense selection - 10%
ix_dense_bool = np.random.binomial(1, 0.1, size=c.shape[0]).astype(bool)
np.count_nonzero(ix_dense_bool)
%timeit c[ix_dense_bool]
%timeit zc.oindex[ix_dense_bool]
%timeit zc.vindex[ix_dense_bool]
import tempfile
import cProfile
import pstats
def profile(statement, sort='time', restrictions=(7,)):
with tempfile.NamedTemporaryFile() as f:
cProfile.run(statement, filename=f.name)
pstats.Stats(f.name).sort_stats(sort).print_stats(*restrictions)
profile('zc.oindex[ix_dense_bool]')
Explanation: bool dense selection
End of explanation
profile('zc.vindex[ix_dense_bool]')
Explanation: The nonzero method is called internally within numpy to convert bool selections to int selections; there is no way to avoid this.
End of explanation
ix_dense_int = np.random.choice(c.shape[0], size=c.shape[0]//10, replace=True)
ix_dense_int_sorted = ix_dense_int.copy()
ix_dense_int_sorted.sort()
len(ix_dense_int)
%timeit c[ix_dense_int_sorted]
%timeit zc.oindex[ix_dense_int_sorted]
%timeit zc.vindex[ix_dense_int_sorted]
%timeit c[ix_dense_int]
%timeit zc.oindex[ix_dense_int]
%timeit zc.vindex[ix_dense_int]
profile('zc.oindex[ix_dense_int_sorted]')
profile('zc.vindex[ix_dense_int_sorted]')
profile('zc.oindex[ix_dense_int]')
profile('zc.vindex[ix_dense_int]')
Explanation: .vindex[] is a bit slower, possibly because internally it converts to a coordinate array first.
int dense selection
End of explanation
# relatively sparse selection
ix_sparse_bool = np.random.binomial(1, 0.0001, size=c.shape[0]).astype(bool)
np.count_nonzero(ix_sparse_bool)
%timeit c[ix_sparse_bool]
%timeit zc.oindex[ix_sparse_bool]
%timeit zc.vindex[ix_sparse_bool]
profile('zc.oindex[ix_sparse_bool]')
profile('zc.vindex[ix_sparse_bool]')
Explanation: When indices are not sorted, zarr needs to partially sort them so they occur in chunk order, so we only have to visit each chunk once. This sorting dominates the processing time and is unavoidable AFAIK.
bool sparse selection
End of explanation
ix_sparse_int = np.random.choice(c.shape[0], size=c.shape[0]//10000, replace=True)
ix_sparse_int_sorted = ix_sparse_int.copy()
ix_sparse_int_sorted.sort()
len(ix_sparse_int)
%timeit c[ix_sparse_int_sorted]
%timeit c[ix_sparse_int]
%timeit zc.oindex[ix_sparse_int_sorted]
%timeit zc.vindex[ix_sparse_int_sorted]
%timeit zc.oindex[ix_sparse_int]
%timeit zc.vindex[ix_sparse_int]
profile('zc.oindex[ix_sparse_int]')
profile('zc.vindex[ix_sparse_int]')
Explanation: int sparse selection
End of explanation
zix_sparse_bool = zarr.array(ix_sparse_bool)
zix_sparse_bool.info
%timeit zc.oindex[zix_sparse_bool]
Explanation: For sparse selections, processing time is dominated by decompression, so we can't do any better.
sparse bool selection as zarr array
End of explanation
%timeit np.array(c[::2])
%timeit zc[::2]
%timeit zc[::10]
%timeit zc[::100]
%timeit zc[::1000]
profile('zc[::2]')
Explanation: slice with step
End of explanation
c.shape
d = c.reshape(-1, 1000)
d.shape
zd = zarr.array(d)
zd.info
Explanation: 2D Benchmarking
End of explanation
ix0 = np.random.binomial(1, 0.5, size=d.shape[0]).astype(bool)
ix1 = np.random.binomial(1, 0.5, size=d.shape[1]).astype(bool)
%timeit d[np.ix_(ix0, ix1)]
%timeit zd.oindex[ix0, ix1]
Explanation: bool orthogonal selection
End of explanation
ix0 = np.random.choice(d.shape[0], size=int(d.shape[0] * .5), replace=True)
ix1 = np.random.choice(d.shape[1], size=int(d.shape[1] * .5), replace=True)
%timeit d[np.ix_(ix0, ix1)]
%timeit zd.oindex[ix0, ix1]
Explanation: int orthogonal selection
End of explanation
n = int(d.size * .1)
ix0 = np.random.choice(d.shape[0], size=n, replace=True)
ix1 = np.random.choice(d.shape[1], size=n, replace=True)
n
%timeit d[ix0, ix1]
%timeit zd.vindex[ix0, ix1]
profile('zd.vindex[ix0, ix1]')
Explanation: coordinate (point) selection
End of explanation
import h5py
import tempfile
h5f = h5py.File(tempfile.mktemp(), driver='core', backing_store=False)
hc = h5f.create_dataset('c', data=c, compression='gzip', compression_opts=1, chunks=zc.chunks, shuffle=True)
hc
%time hc[:]
%time hc[ix_sparse_bool]
# # this is pathological, takes minutes
# %time hc[ix_dense_bool]
# this is pretty slow
%time hc[::1000]
Explanation: Points need to be partially sorted so all points in the same chunk are grouped and processed together. This requires argsort which dominates time.
h5py comparison
N.B., not really fair because using slower compressor, but for interest...
End of explanation |
6,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
12-752 Course Project
Xiaowen Gu (xiaoweng), Kenan Zhang (kenanz)
Step1: 1. Load Data
1.1 Energy Data of Gates
Step2: As is shown in the figure, the load data is cumulative energy consumption from Dec 2014 to Dec 2015 with some time missing. The regular time interval is 15 min.
1.2 Occupancy Data of Gates
Step3: As is shown in the figure, the occupancy data generally runs from Sep 2014 to Dec 2015 with some time missing. The time interval is around 20 min but not regular.
1.3 Energy Data of Other Building
Step4: 1.4 Temperature Data
Step5: As there is no gap, change time interval to 15 minutes.
Step6: 2. Clean Data
2.1 Energy Data
Calculate the electricity consumption for each interval
Step7: 2.2 Occupancy Data
Find the latest starting time and the earliest ending time
Step8: Test startEndOffice function
Step9: Find and print the latest starting time and the earliest ending time of all offices
Step10: The latest starting date is 2014-10-28 Tue; the earliest ending date is 2015-12-10 Thu
Initiate the Time Series of Building Occupancy
Step11: Test generateTimeSeries function
Step12: Generate an occupancy time series with starting time 2014-11-03 00
Step13: 3. Prepare Data
3.1 Harmonize Time Series
Step14: 3.2 Calculate Occupancy Level
Test occupancy interpolation
Step15: Test interpOcc function
Step16: Interpolate occupancy data and calculate occupancy level
Step17: As is shown in the figure, there is missing data in Aug and Sep. Besides, the occupancy level was relatively low during January and late June due to the winter and summer breaks. Therefore, we further zoom into the spring and fall semesters to determine the time period used in our study.
3.3 Crop Study Period
According to the academic calendar of CMU, Fall 2014 ended by Dec 12 while Spring 2015 started from Jan 12. Therefore, we first plot the occupancy from Nov 3 to Dec 12.
Step18: Similarly, we plot the spring and summer semesters of 2015, specifically from Jan 12 to May 15 and from May 18 to Aug 7. Fall 2015 started from Aug 31.
Step19: Based on figures above, we choose Nov 3 to Dec 12 2014 (6w), Jan 12 to Apr 3 (12w), Apr 20 to May 15 (4w), Jun 22 to Jul 17 (4w) and Sep 28 to Dec 4 (10w), 36 weeks in total, to build our models.
Although we choose discrete time periods, each of them contains an even number of weeks, so we can split the training set and test set week by week.
Define start and end arrays
Step20: Crop data
Step21: Test cropData function
Step22: Crop the occupancy data
Step23: 3.4 Harmonize data for temperature, electricity & occupancy
Step24: 4. Building Energy Model without Occupancy
4.1 Design Matrix with/without Occupancy
Step25: 4.2 Coefficient Estimation
Step26: 4.3 Define Training and Test Data
Step27: T-test & R-square
Step28: The model has 486 coefficients; the H0 hypothesis is rejected for all variables
5. Occupancy Pattern Model
5.1 Define State of Occupancy Level
Define occupancy state based on the pattern of real occupancy level
To define the occupancy state, we first plot the occupancy level of a sample week and a sample day.
Step29: Sample week occupancy data
Step30: As is shown in the figures, there is little difference among weekdays, and the range of daily occupancy is between 0.1 and 0.8. Therefore, we define the occupancy states as [0.1,0.3,0.6,0.8].
Step31: We also plot the sample day occupancy level and the occupancy state to validate our assumption
Step32: Build occupancy state Matrix
Calculate the difference between real occupancy and occupancy state
Step33: Assign real occupancy level to a single state and build occupancy state matrix
Step34: Reshape the occupancy matrix (# time_indicator, # state, # week)
Check if there is missing data in any week
Step35: The 25th week only has 384 samples, so we need to discard this week.
Step36: Now reshape the occupancy matrix
Step37: Test the occupancy matrix is reshaped correctly
Step38: 5.2 Define Transition Matrices
Initiate transition matrices (# state, # state, # time_indicator)
Step39: Calculate parameters of each transition matrix
Step40: Test T function
Step41: Calculate transition matrices of each time interval
Step42: Print one transition matrix
Step43: Plot transition probability from 0.1 to 0.3 and from 0.8 to 0.6
Step44: As is shown in the figure, the occupancy is very likely to increase around 9 am, and to decrease around 12 pm, 5 pm and 9 pm, which corresponds well to the daily schedule
5.3 Blended Transition Matrices
Mix of transition matrices
Step45: Weight of each transition matrix
Step46: Value of each time slot
Step47: Sigmoid function
Step48: Calculate blended transition matrices
Step49: Plot increase and decrease transition probabilities again
Step50: 6. Occupancy Pattern Simulation
6.1 Simulate Occupancy
Step51: Test markovSim function
Step52: Test occSim function
Step53: Simulate occupancy
Step54: Print the first day's occupancy simulation result
Step55: Calculate the simulated occupancy level
Step56: 6.2 Compare Simulated Occupancy with Ground Truth
Step57: As is shown in the figure, the simulated occupancy perfectly matches the real occupancy. Therefore, we replace the real occupancy with the simulation result in the energy prediction model in the next section. To do this, we need to crop the occupancy to match the energy data.
6.3 Crop Simulated Occupancy
The energy data used in this study has 4 time periods
Step58: Also, remember we have cropped one week of data.
Step59: 7. Energy Prediction Model with Occupancy
7.1 Harmonize Energy and Occupancy Data
Step60: 7.2 Fit Building Energy Model with Real Occupancy
Step61: 7.3 Fit Building Energy Model with Simulated Occupancy
Step62: T-test and R-square
Step63: Unfortunately, the occupancy variable is rejected
7.5 Relationship between Occupancy and Energy Consumption | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
%matplotlib inline
Explanation: 12-752 Course Project
Xiaowen Gu (xiaoweng), Kenan Zhang (kenanz)
End of explanation
gatesDateConverter = lambda d : dt.datetime.strptime(d,'%m/%d/%Y %H:%M')
gates_elect = np.genfromtxt('pointData_gates.csv',delimiter=",",names=True,
dtype=[dt.datetime,'f8'],converters={0: gatesDateConverter})
print gates_elect['Time'][0],gates_elect['Time'][-1]
print gates_elect['Time'][1]-gates_elect['Time'][0]
print np.max(np.diff(gates_elect['Time'])),np.min(np.diff(gates_elect['Time']))
plt.figure(figsize=(15,5))
plt.plot(gates_elect['Time'],gates_elect['Value'],'r')
plt.xlabel('Time Stamp')
plt.ylabel('Electricity Consumption')
plt.title('GHC Electricity Consumption')
Explanation: 1. Load Data
1.1 Energy Data of Gates
End of explanation
occDateConverter = lambda d : dt.datetime.strptime(d,'%d-%b-%Y %H:%M:%S')
occ_all = np.genfromtxt('occ_clean.csv',delimiter=",",
dtype=[('timestamp', type(dt.datetime.now)),('id','f8'),('occ', 'f8')],
converters={0: occDateConverter}, skip_header=1)
print "First sample is Office %d with occupancy %d on %s"%(occ_all['id'][0],occ_all['occ'][0],occ_all['timestamp'][0])
print occ_all['timestamp'][1]-occ_all['timestamp'][0]
sample_office=occ_all[np.where(occ_all['id']==1)]
plt.figure(figsize=(15,5))
plt.plot(sample_office['timestamp'],sample_office['occ'],'r')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Occupancy of Office 1')
plt.ylim(0,1.5)
Explanation: As is shown in the figure, the load data is cumulative energy consumption from Dec 2014 to Dec 2015 with some time missing. The regular time interval is 15 min.
1.2 Occupancy Data of Gates
End of explanation
bakerDateConverter = lambda d : dt.datetime.strptime(d,'%m/%d/%Y %H:%M')
baker_elect = np.genfromtxt('baker_energy.csv',delimiter=",",names=True,
dtype=[dt.datetime,'f8'],converters={0: bakerDateConverter})
plt.figure(figsize=(15,5))
plt.plot(baker_elect['Time'],baker_elect['BakerHall'],'r')
plt.xlabel('Time Stamp')
plt.ylabel('Electricity Consumption')
plt.title('Baker Hall Electricity Consumption')
Explanation: As is shown in the figure, the occupancy data generally runs from Sep 2014 to Dec 2015 with some time missing. The time interval is around 20 min but not regular.
1.3 Energy Data of Other Building
End of explanation
temperatureDateConverter = lambda d : dt.datetime.strptime(d,'%Y-%m-%d %H:%M:%S')
temperature = np.genfromtxt('temperature.csv',delimiter=",",
dtype=[('timestamp', type(dt.datetime.now)),('tempF', 'f8')],
converters={0: temperatureDateConverter}, skip_header=1)
plt.plot(temperature['timestamp'])
print "The minimum difference between any two consecutive timestamps is: " + str(np.min(np.diff(temperature['timestamp'])))
print "The maximum difference between any two consecutive timestamps is: " + str(np.max(np.diff(temperature['timestamp'])))
Explanation: 1.4 Temperature Data
End of explanation
new_temperature = temperature[0:-1:3]
print "First timestamp is on \t{}. \nLast timestamp is on \t{}.".format(temperature['timestamp'][0], temperature['timestamp'][-1])
Explanation: As there is no gap, change time interval to 15 minutes.
End of explanation
gates_elect_value = []
for i in range (1, len(gates_elect['Value'])):
gates_elect_value.append(gates_elect['Value'][i]-gates_elect['Value'][i-1])
print(len(gates_elect_value))
print(len(gates_elect['Value']))
plt.figure(figsize=(15,5))
plt.plot(gates_elect['Time'][1:],gates_elect_value)
plt.xlabel('Time')
plt.ylabel('Electricity [kWh]')
plt.ylim(-500,3000)
gates_elect_value = np.array(gates_elect_value)
value = np.where((gates_elect_value>10)*(gates_elect_value<200))
plt.figure(figsize=(15,5))
plt.plot(gates_elect['Time'][1:][value],gates_elect_value[value],'-b')
plt.xlabel('Time')
plt.ylabel('Electricity [kWh]')
plt.ylim(80,200)
print "Power data from {0} to {1}.\nTemperature data from {2} to {3}".format(gates_elect['Time'][1:][value][0], gates_elect['Time'][1:][value][-1],new_temperature['timestamp'][2:][0], new_temperature['timestamp'][2:][-1])
new_temperature = new_temperature[0:-24]
newElectValues = interp(gates_elect['Time'][1:][value], gates_elect_value[value], new_temperature['timestamp'][2:])
toposix = lambda d: (d - dt.datetime(1970,1,1,0,0,0)).total_seconds()
timestamp_in_seconds = map(toposix,new_temperature['timestamp'])
timestamps = new_temperature['timestamp'][2:]
temp_values = new_temperature['tempF'][2:]
elect_values = newElectValues
print(timestamps[0],timestamps[-1])
len(temp_values)==len(elect_values)
weekday = map(lambda t: t.weekday(), timestamps)
weekday = np.array(weekday)
weekends = np.where(weekday>=5) ## Note that depending on how you do this, the result could be a tuple of ndarrays.
weekdays = np.where(weekday<5)
plt.figure(figsize=(15,5))
plt.plot(timestamps[weekdays[0]],elect_values[weekdays[0]],'--b')
plt.figure(figsize=(15,5))
plt.plot(timestamps[weekdays[0]], temp_values[weekdays[0]], '--r')
Explanation: 2. Clean Data
2.1 Energy Data
Calculate the electricity consumption for each interval
End of explanation
def startEndOffice(id_office,occ):
n=len(id_office)
startEndList=np.empty([n,2],dtype=type(dt.datetime.now))
for i in range(n):
office=occ[np.where(occ['id']==id_office[i])]
startEndList[i,:]=np.array([office['timestamp'][0],office['timestamp'][-1]])
return startEndList
Explanation: 2.2 Occupancy Data
Find the latest starting time and the earliest ending time
End of explanation
sample_start_end_office=startEndOffice([1],sample_office)
print min(sample_start_end_office[:,0])
print max(sample_start_end_office[:,1])
Explanation: Test startEndOffice function
End of explanation
id_office=np.unique(occ_all['id'])
start_end_office=startEndOffice(id_office,occ_all)
print max(start_end_office[:,0]),min(start_end_office[:,1])
date_1=dt.datetime(2014,10,28)
date_2=dt.datetime(2015,12,10)
print date_1.weekday(),date_2.weekday()
Explanation: Find and print the latest starting time and the earliest ending time of all offices
End of explanation
def generateTimeSeries(start,end,step):
#generate a time series with the given start, end and step
#skip weekends
ts = []
t=start
while t <= end:
if t.weekday() < 5:
ts.append(t)
t += step
return ts
Explanation: The latest starting date is 2014-10-28 Tue; the earliest ending date is 2015-12-10 Thu
Initiate the Time Series of Building Occupancy
End of explanation
start=dt.datetime(2015,12,3)
end=dt.datetime(2015,12,5,23,45)
step = dt.timedelta(minutes=15)
print start,end,step
ts_test=generateTimeSeries(start,end,step)
print ts_test[0],ts_test[-1]
Explanation: Test generateTimeSeries function
End of explanation
start=dt.datetime(2014,11,3,0,0)
end=dt.datetime(2015,12,4,23,45)
step = dt.timedelta(minutes=15)
ts_occ=generateTimeSeries(start,end,step)
print ts_occ[0],ts_occ[-1]
print np.shape(ts_occ)
Explanation: Generate an occupancy time series with starting time 2014-11-03 00:00:00 Mon, ending time 2015-12-04 23:45:00 Fri and time step 15min without weekends.
End of explanation
def interp(tP, P, tT):
# This function assumes that the input is an numpy.ndarray of datetime objects
# Most useful interpolation tools don't work well with datetime objects
# so we convert all datetime objects into the number of seconds elapsed
# since 1/1/1970 at midnight (also called the UNIX Epoch, or POSIX time):
toposix = lambda d: (d - dt.datetime(1970,1,1,0,0,0)).total_seconds()
tP = map(toposix, tP)
tT = map(toposix, tT)
# Now we interpolate
from scipy.interpolate import interp1d
f = interp1d(tP, P,'linear')
return f(tT)
Explanation: 3. Prepare Data
3.1 Harmonize Time Series
End of explanation
sample_interp=interp(sample_office['timestamp'],sample_office['occ'],ts_occ)
print np.shape(sample_interp)
plt.figure(figsize=(15,5))
plt.plot(ts_occ,sample_interp,'r')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Interpolated Occupancy of Office 1')
plt.ylim(0,1.5)
def interpOcc(id_office,occ_all,ts_occ):
occ=np.zeros(len(ts_occ))
for i in range(len(id_office)):
office=occ_all[np.where(occ_all['id']==id_office[i])]
occ=np.add(occ,interp(office['timestamp'],office['occ'],ts_occ))
occ_interp=np.ndarray(shape=(len(ts_occ)),dtype=[('timestamp',dt.datetime),('occ',float)])
occ_interp['timestamp']=ts_occ
occ_interp['occ']=np.divide(occ,len(id_office))
return occ_interp
Explanation: 3.2 Calculate Occupancy Level
Test occupancy interpolation
End of explanation
sample_office_2=occ_all[(occ_all['id']==1)|(occ_all['id']==2)]
print np.shape(sample_office_2)
sample_interp_2=interpOcc([1,2],sample_office_2,ts_occ)
plt.figure(figsize=(15,5))
plt.plot(ts_occ,sample_interp_2['occ'],'r')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Interpolated Occupancy of Office 1 & 2')
plt.ylim(0,1.5)
Explanation: Test interpOcc function
End of explanation
occ_interp=interpOcc(id_office,occ_all,ts_occ)
plt.figure(figsize=(15,5))
plt.plot(ts_occ,occ_interp['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Interpolated GHC Occupancy Level')
plt.ylim(0,1)
Explanation: Interpolate occupancy data and calculate occupancy level
End of explanation
t_1=dt.datetime(2014,12,13)
occ_1=occ_interp[np.where(occ_interp['timestamp']<t_1)]
print np.shape(occ_1)
plt.figure(figsize=(15,5))
plt.plot(occ_1['timestamp'],occ_1['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy Level Fall 2014 (Nov 3 to Dec 12)')
plt.ylim(0,1)
Explanation: As is shown in the figure, there is missing data in Aug and Sep. Besides, the occupancy level was relatively low during January and late June due to the winter and summer breaks. Therefore, we further zoom into the spring and fall semesters to determine the time period used in our study.
3.3 Crop Study Period
According to the academic calendar of CMU, Fall 2014 ended by Dec 12 while Spring 2015 started from Jan 12. Therefore, we first plot the occupancy from Nov 3 to Dec 12.
End of explanation
t_2=dt.datetime(2015,1,12)
t_3=dt.datetime(2015,5,16)
t_4=dt.datetime(2015,5,18)
t_5=dt.datetime(2015,8,8)
t_6=dt.datetime(2015,8,31)
occ_2=occ_interp[np.where((occ_interp['timestamp']>=t_2)&(occ_interp['timestamp']<t_3))]
occ_3=occ_interp[np.where((occ_interp['timestamp']>=t_4)&(occ_interp['timestamp']<t_5))]
occ_4=occ_interp[np.where(occ_interp['timestamp']>=t_6)]
print np.shape(occ_2),np.shape(occ_3),np.shape(occ_4)
plt.figure(figsize=(15,5))
plt.plot(occ_2['timestamp'],occ_2['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy Level Spring 2015 (Jan 12 to May 15)')
plt.ylim(0,1)
plt.figure(figsize=(15,5))
plt.plot(occ_3['timestamp'],occ_3['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy Level Summer 2015 (May 18 to Aug 7)')
plt.ylim(0,1)
plt.figure(figsize=(15,5))
plt.plot(occ_4['timestamp'],occ_4['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy Level Fall 2015 (Aug 31 to Dec 4)')
plt.ylim(0,1)
Explanation: Similarly, we plot the spring and summer semesters of 2015, specifically from Jan 12 to May 15 and from May 18 to Aug 7. Fall 2015 started from Aug 31.
End of explanation
start=np.array([dt.datetime(2014,11,3),dt.datetime(2015,1,12),dt.datetime(2015,4,20),dt.datetime(2015,6,22),dt.datetime(2015,9,28)])
end=np.array([dt.datetime(2014,12,13),dt.datetime(2015,4,4),dt.datetime(2015,5,16),dt.datetime(2015,7,17),dt.datetime(2015,12,5)])
print start
print end
Explanation: Based on figures above, we choose Nov 3 to Dec 12 2014 (6w), Jan 12 to Apr 3 (12w), Apr 20 to May 15 (4w), Jun 22 to Jul 17 (4w) and Sep 28 to Dec 4 (10w), 36 weeks in total, to build our models.
Although we choose discrete time periods, each of them contains an even number of weeks, so we can split the training set and test set week by week.
Define start and end arrays
End of explanation
def cropData(data,ts,start,end):
n=np.shape(start)[0]
crop=np.array([],dtype=int)
for i in range(n):
r=np.where((data['timestamp']>=start[i])&(data['timestamp']<end[i]))
crop=np.append(crop,r[0])
return data[crop]
Explanation: Crop data
End of explanation
sample_crop=cropData(sample_interp_2,ts_occ,start,end)
print np.shape(sample_crop)
print sample_crop['timestamp'][0],sample_crop['timestamp'][-1]
Explanation: Test cropData function
End of explanation
occ_crop=cropData(occ_interp,ts_occ,start,end)
plt.figure(figsize=(15,5))
plt.plot(occ_crop['timestamp'],occ_crop['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Cropped GHC Occupancy Level')
plt.ylim(0,1)
Explanation: Crop the occupancy data
End of explanation
new_temp = temp_values[weekdays[0]][0:22080]
new_elect = elect_values[weekdays[0]][0:22080]
new_timestamp = timestamps[weekdays[0]][0:22080]
temp = []
temp.extend(new_temp[2206:7966])
temp.extend(new_temp[8926:10846])
temp.extend(new_temp[13246:14686])
temp.extend(new_temp[19966:21886])
elect = []
elect.extend(new_elect[2206:7966])
elect.extend(new_elect[8926:10846])
elect.extend(new_elect[13246:14686])
elect.extend(new_elect[19966:21886])
tstamp = []
tstamp.extend(new_timestamp[2206:7966])
tstamp.extend(new_timestamp[8926:10846])
tstamp.extend(new_timestamp[13246:14686])
tstamp.extend(new_timestamp[19966:21886])
Explanation: 3.4 Harmonize data for temperature, electricity & occupancy
End of explanation
def Tc(temperature, T_bound):
# The return value will be a matrix with as many rows as the temperature
# array, and as many columns as len(T_bound) [assuming that 0 is the first boundary]
Tc_matrix = np.zeros((len(temperature), len(T_bound)))
for i in range (len(temperature)):
temp = temperature[i]
if temp > T_bound[0]:
Tc_matrix[i,0] = T_bound[0]
else:
Tc_matrix[i,0] = temp
for j in range (1, len(T_bound)):
if temp > T_bound[j]:
Tc_matrix[i,j] = T_bound[j] - T_bound[j-1]
elif temp > T_bound[j-1]:
Tc_matrix[i,j] = temp - T_bound[j-1]
return Tc_matrix
def DesignMatrix(temperature, timestamps_num, temp_bounds):
#timestamps_num is the number of data points for a week
#The total number of data points will be cleaned before calling this
#function to ensure len(temperature) is divisible by timestamps_num.
weeks_num = len(temperature)//timestamps_num
I = np.identity(timestamps_num)
tiled_I = np.tile(I, (weeks_num,1))
T = Tc(temperature, temp_bounds)
X = np.concatenate((tiled_I, T), axis = 1)
return np.matrix(X)
def DesignMatrix2(temperature, timestamps_num, temp_bounds, occupancy):
weeks_num = len(temperature)//timestamps_num
I = np.identity(timestamps_num)
tiled_I = np.tile(I, (weeks_num,1))
T = Tc(temperature, temp_bounds)
X = np.concatenate((tiled_I, T), axis = 1)
new_X = np.concatenate((X, occupancy), axis = 1)
return np.matrix(new_X)
Explanation: 4. Building Energy Model without Occupancy
4.1 Design Matrix with/without Occupancy
End of explanation
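A quick illustration (not part of the original notebook) of the change-point encoding produced by Tc: each temperature is split across the bounds, so the fitted coefficients become per-segment slopes. Using the same bounds chosen later for the model:
example_bounds = (40, 50, 60, 70, 80, 90)
print(Tc(np.array([35.0, 55.0, 95.0]), example_bounds))
# 35F -> [35, 0,  0,  0,  0,  0]
# 55F -> [40, 10, 5,  0,  0,  0]
# 95F -> [40, 10, 10, 10, 10, 10]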
def beta_hat(X, elect_values):
X_T = np.transpose(X)
b = np.linalg.inv(X_T*X) * X_T * elect_values
return b
Explanation: 4.2 Coefficient Estimation
End of explanation
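Note that inverting X.T*X directly can be numerically fragile when the design matrix is large or nearly collinear. A sketch of an equivalent, more stable estimator based on the pseudo-inverse (an alternative, not the notebook's method):
def beta_hat_pinv(X, elect_values):
    # least-squares solution via the Moore-Penrose pseudo-inverse
    return np.dot(np.linalg.pinv(np.asarray(X)), np.asarray(elect_values))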
new_temp = temp_values[weekdays[0]][0:22080]
new_elect = elect_values[weekdays[0]][0:22080]
new_timestamp = timestamps[weekdays[0]][0:22080]
train_temp = []
test_temp = []
for i in range(23):
train_temp.extend(new_temp[960*i:960*i+480])
test_temp.extend(new_temp[960*i+480:960*i+960])
train_elect = []
test_elect = []
for i in range(23):
train_elect.extend(new_elect[960*i:960*i+480])
test_elect.extend(new_elect[960*i+480:960*i+960])
train_timestamps = []
test_timestamps = []
for i in range(23):
train_timestamps.extend(new_timestamp[960*i:960*i+480])
test_timestamps.extend(new_timestamp[960*i+480:960*i+960])
print(train_timestamps[0],test_timestamps[-1])
train_T_bound = (40,50,60,70,80,90)
test_T_bound = (40,50,60,70,80,90)
train_X = DesignMatrix(train_temp,480, train_T_bound)
train_elect_trans = np.transpose(np.matrix(train_elect))
betahat = beta_hat(train_X, train_elect_trans)
test_X = DesignMatrix(test_temp,480, test_T_bound)
predict_y = test_X * betahat
plt.figure(figsize=(15,5))
plt.plot(predict_y[1000:2000], 'r-', label = "Predicted")
plt.plot(test_elect[1000:2000], 'b-', label = "Actual")
plt.ylim(100,200)
plt.xlabel("Time",size=15)
plt.ylabel("Electricity [kWh]",size=15)
plt.legend(loc="upper left")
plt.title('Original Model')
Explanation: 4.3 Define Training and Test Data
End of explanation
X = test_X
Y = np.matrix(test_elect).T
y_bar = np.matrix([[np.mean(Y)]*len(Y)]).T
R_squared = 1-((Y-X*betahat).T*(Y-X*betahat))/((Y-y_bar).T*(Y-y_bar))
print R_squared[0,0]
from scipy.stats import t
n = 11040
p = 486
t.isf((1-0.95)/2,n-p)
MSE = (Y-X*betahat).T*(Y-X*betahat)/(n-p)
S_betahat_sqr = np.multiply(MSE, np.linalg.inv(X.T*X))
S_betahatk_sqr = S_betahat_sqr.diagonal()
S_betahatk = np.sqrt(S_betahatk_sqr.T)
T = []
for i in range(486):
T.append(np.array(betahat[i,:])[0][0]/np.array(S_betahatk[i,:])[0][0])
def rej_coef(T):
rej_list = []
for i in range(len(T)):
if T[i] > t.isf((1-0.95)/2,n-p) or T[i] < -t.isf((1-0.95)/2,n-p):
rej_list.append(T[i])
return rej_list
len(rej_coef(T))
Explanation: T-test & R-square
End of explanation
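As an optional cross-check (assuming statsmodels is installed), an OLS fit on the training data produces coefficient t-statistics and an R-squared without the manual algebra above; note the manual version evaluates on the test split, so the numbers will differ slightly.
import statsmodels.api as sm
ols_fit = sm.OLS(np.asarray(train_elect), np.asarray(train_X)).fit()
print(ols_fit.rsquared)
print(ols_fit.tvalues[:5])  # compare against the first few entries of T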
def weekdayArray(data):
weekday = map(lambda t: t.weekday(), data['timestamp'])
return np.asarray(weekday)
def hourArray(data):
hour = map(lambda t: t.hour, data['timestamp'])
return np.asarray(hour)
def minuteArray(data):
minute = map(lambda t: t.minute, data['timestamp'])
return np.asarray(minute)
Explanation: The model has 486 coefficients; the H0 hypothesis is rejected for all variables
5. Occupancy Pattern Model
5.1 Define State of Occupancy Level
Define occupancy state based on the pattern of real occupancy level
To define the occupancy state, we first plot the occupancy level of a sample week and a sample day.
End of explanation
week_start=np.where((weekdayArray(occ_crop)==0)&(hourArray(occ_crop)==0)&(minuteArray(occ_crop)==0))[0]
print np.shape(week_start)
sample_week=occ_crop[week_start[0]:week_start[1]]
print np.shape(sample_week)
day_start=np.where((hourArray(sample_week)==0)&(minuteArray(sample_week)==0))[0]
print np.shape(day_start)
sample_day=occ_crop[day_start[0]:day_start[1]]
print np.shape(sample_day)
plt.figure(figsize=(15,5))
plt.plot(sample_week['timestamp'],sample_week['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy Level of Sample Week')
plt.ylim(0,1)
plt.figure(figsize=(15,5))
plt.plot(sample_day['timestamp'],sample_day['occ'],'b')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy Level of Sample Day')
plt.ylim(0,1)
Explanation: Sample week occupancy data
End of explanation
state=np.array([0.1,0.3,0.6,0.8])
print state
Explanation: As is shown in the figures, there is little difference among weekdays, and the range of daily occupancy is between 0.1 and 0.8. Therefore, we define the occupancy states as [0.1,0.3,0.6,0.8].
End of explanation
def plotOccState(occ,state):
n=np.shape(occ)[0]
plt.figure(figsize=(15,5))
plt.plot(occ['timestamp'],occ['occ'],'b')
for i in state:
s=i*np.ones(n)
plt.plot(occ['timestamp'],s,'r')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Occupancy Level vs Occupancy State')
plt.ylim(0,1)
plotOccState(sample_day,state)
Explanation: We also plot the sample day occupancy level and the occupancy state to validate our assumption
End of explanation
occ_data=np.matrix(occ_crop['occ']).T
n_data=len(occ_data)
n_state=len(state)
print np.shape(occ_data)
diff=occ_data*np.ones((1,n_state))-np.ones((n_data,1))*state
diff=np.abs(diff)
print diff.shape
print diff[1:10]
Explanation: Build occupancy state Matrix
Calculate the difference between real occupancy and occupancy state
End of explanation
occ_mtx=np.zeros((n_data,n_state))
ind=np.argmin(diff,1)
for i in range(occ_data.shape[0]):
occ_mtx[i,ind[i]]=1
print occ_mtx.shape
occ_state=occ_mtx*state
occ_state=np.sum(occ_state,1)
print np.shape(occ_state)
sample_week_state=occ_state[:len(sample_week)]
plt.figure(figsize=(15,5))
plt.plot(sample_week['timestamp'],sample_week['occ'],'b',label='occupancy')
plt.plot(sample_week['timestamp'],sample_week_state,'r',label='occupancy state')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy of Sample Week')
plt.ylim(0,1)
plt.legend()
Explanation: Assign real occupancy level to a single state and build occupancy state matrix
End of explanation
for i in range(1,len(week_start)):
if (week_start[i]-week_start[i-1])!=5*len(sample_day):
print i-1,week_start[i]-week_start[i-1]
Explanation: Reshape the occupancy matrix (# time_indicator, # state, # week)
Check if there is missing data in any week
End of explanation
crop=range(week_start[25],week_start[26])
print len(crop)
occ_mtx=np.delete(occ_mtx,crop,0)
occ_crop_new=np.delete(occ_crop,crop,0)
print np.shape(occ_mtx)
print np.shape(occ_crop_new)
Explanation: The 25th week only has 384 sample, thus we need to discard this week.
End of explanation
n_day=(len(week_start)-1)*5
print n_day
occ_mtx=np.reshape(occ_mtx,(n_day,len(sample_day),n_state))
print occ_mtx.shape
Explanation: Now reshape the occupancy matrix
End of explanation
test_mtx=occ_mtx[0,:,:]
test_mtx=test_mtx*state
test_mtx=np.sum(test_mtx,1)
plt.figure(figsize=(15,5))
plt.plot(sample_day['timestamp'],sample_day['occ'],'b',label='occupancy')
plt.plot(sample_day['timestamp'],test_mtx,'r',label='occupancy state')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('GHC Occupancy of Sample Week')
plt.ylim(0,1)
plt.legend()
Explanation: Test the occupancy matrix is reshaped correctly
End of explanation
n_t=len(sample_day)
print n_t
tran_mtx=np.zeros((n_state*n_t,n_state))
tran_mtx=np.reshape(tran_mtx,(n_state,n_state,n_t))
print tran_mtx.shape
Explanation: 5.2 Define Transition Matrices
Initiate transition matrices (# state, # state, # time_indicator)
End of explanation
def T(occ_pre,occ_pri):
n_state=occ_pre.shape[1]
tran_mtx=np.zeros((n_state,n_state))
for i in range(n_state):
for j in range(n_state):
ind_pri=np.where(occ_pri[:,i]==1)
if len(ind_pri[0])==0:
tran_mtx[i,j]=0
else:
pri_pre=occ_pre[ind_pri[0]]
ind_pre=np.where(pri_pre[:,j]==1)
prob=len(ind_pre[0])/float(len(ind_pri[0]))
tran_mtx[i,j]=prob
return tran_mtx
Explanation: Calculate parameters of each transition matrix
End of explanation
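A toy check (hypothetical data, not from the building) of the counting logic in T(): with two states and three observed days, prior->present transitions 0->0, 0->1 and 1->1 give P(0->0)=P(0->1)=0.5 and P(1->1)=1.
toy_pri = np.array([[1, 0], [1, 0], [0, 1]])  # prior-state indicators
toy_pre = np.array([[1, 0], [0, 1], [0, 1]])  # present-state indicators
print(T(toy_pre, toy_pri))  # expected [[0.5, 0.5], [0., 1.]]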
occ_pre=np.reshape(occ_mtx[:,1,:],(n_day,n_state))
occ_pri=np.reshape(occ_mtx[:,0,:],(n_day,n_state))
#print occ_pre
#print occ_pri
print T(occ_pre,occ_pri)
Explanation: Test T function
End of explanation
for i in range(n_t-1):
occ_pre=np.reshape(occ_mtx[:,i+1,:],(n_day,n_state))
occ_pri=np.reshape(occ_mtx[:,i,:],(n_day,n_state))
tran_mtx[:,:,i]=T(occ_pre,occ_pri)
print tran_mtx.shape
print tran_mtx[:,:,0]
Explanation: Calculate transition matrices of each time interval
End of explanation
print "Transition matrix of %d:%d"%(sample_day['timestamp'][40].hour,sample_day['timestamp'][40].minute)
print tran_mtx[:,:,40]
Explanation: Print one transition matrix
End of explanation
prob_1=tran_mtx[0,1,:]
prob_2=tran_mtx[3,2,:]
print np.shape(prob_1)
plt.figure(figsize=(15,5))
plt.plot(sample_day['timestamp'],prob_1,'b',label='prob 0.1 to 0.3')
plt.plot(sample_day['timestamp'],prob_2,'r',label='prob 0.8 to 0.6')
plt.xlabel('Time Stamp')
plt.ylabel('Probability')
plt.title('Transition Probability')
plt.ylim(0,1)
plt.legend()
Explanation: Plot transition probability from 0.1 to 0.3 and from 0.8 to 0.6
End of explanation
def T_mix(tran_mtx):
n_state=tran_mtx.shape[0]
n_t=tran_mtx.shape[2]
tran_mix=np.zeros((n_state,n_state,n_t))
for i in range(n_t):
mtx=np.zeros((n_state,n_state))
for j in range(n_t):
b=beta(i,j,n_t)
mtx=mtx+b*tran_mtx[:,:,j]
tran_mix[:,:,i]=mtx
return tran_mix
test_mix=T_mix(tran_mtx[:,:,:2])
print np.shape(test_mix)
print test_mix[:,:,0]
Explanation: As is shown in the figure, the occupancy is very likely to increase around 9 am, and to decrease around 12 pm, 5 pm and 9 pm, which corresponds well to the daily schedule
5.3 Blended Transition Matrices
Mix of transition matrices
End of explanation
def beta(c1,c2,K):
a=np.zeros(K)
for i in range(K):
a[i]=alpha(c1-i)
return a[c2]/sum(a)
Explanation: Weight of each transition matrix
End of explanation
def alpha(x):
a=10.0
d=3.0
theta1=theta(2*a/d*(x+d/2))
theta2=theta(2*a/d*(x-d/2))
return theta1-theta2
Explanation: Value of each time slot
End of explanation
def theta(x):
return 1/(1+np.exp(-x))
Explanation: Sigmoid function
End of explanation
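A quick look at the blending window defined by these helpers (a diagnostic sketch, not in the original notebook): alpha(x) is a soft box of width d=3 centred at 0, so each blended matrix mixes the raw transition matrices mostly over the ~3 nearest time slots, and the beta weights sum to 1.
xs = np.arange(-6, 7)
print(np.round([alpha(x) for x in xs], 3))                  # the soft box window
print(round(sum(beta(10, j, n_t) for j in range(n_t)), 3))  # weights sum to 1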
tran_blend=T_mix(tran_mtx)
print np.shape(tran_blend)
Explanation: Calculate blended transition matrices
End of explanation
prob_1=tran_blend[0,1,:]
prob_2=tran_blend[1,2,:]
prob_3=tran_blend[2,3,:]
prob_4=tran_blend[1,0,:]
prob_5=tran_blend[2,1,:]
prob_6=tran_blend[3,2,:]
print np.shape(prob_1)
plt.figure(figsize=(15,5))
plt.plot(sample_day['timestamp'],prob_1,'b',label='prob 0.1 to 0.3')
plt.plot(sample_day['timestamp'],prob_2,'r',label='prob 0.3 to 0.6')
plt.plot(sample_day['timestamp'],prob_3,'y',label='prob 0.6 to 0.8')
plt.plot(sample_day['timestamp'],prob_4,'g',label='prob 0.3 to 0.1')
plt.plot(sample_day['timestamp'],prob_4,'c',label='prob 0.6 to 0.3')
plt.plot(sample_day['timestamp'],prob_4,'m',label='prob 0.8 to 0.6')
plt.xlabel('Time Stamp')
plt.ylabel('Probability')
plt.title('Blended Transition Probability')
plt.ylim(0,1)
plt.legend()
Explanation: Plot increase and decrease transition probabilities again
End of explanation
def occSim(n_state,n_day,tran_mix):
#init occ_sim=[# time indicator, # state, # week]
n_t=tran_mix.shape[2]
occ_sim=np.zeros((n_day,n_t,n_state))
#init occ state
for i in range(n_day):
occ_pri=0
occ_sim[i,0,occ_pri]=1
for j in range(1,n_t):
occ_pre=markovSim(tran_mix[occ_pri,:,j],n_state)
occ_sim[i,j,occ_pre]=1
occ_pri=occ_pre
return occ_sim
def markovSim(pdf,n):
cdf=np.zeros(n)
cdf[0]=pdf[0]
for i in range(1,n):
cdf[i]=cdf[i-1]+pdf[i]
r=np.random.random()
for j in range(n):
if r<cdf[j]:
break
return j
Explanation: 6. Occupancy Pattern Simulation
6.1 Simulate Occupancy
End of explanation
pdf=tran_blend[:,:,30][0,:]
print pdf
print markovSim(pdf,4)
Explanation: Test markovSim function
End of explanation
test_sim=occSim(4,1,tran_blend)
print np.shape(test_sim)
test_sim=np.sum(test_sim[0,:,:]*state,1)
plt.figure(figsize=(15,5))
plt.plot(sample_day['timestamp'],sample_day['occ'],'b',label='occupancy')
plt.plot(sample_day['timestamp'],test_sim,'r',label='simulation')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Test Occupancy Simulation')
plt.ylim(0,1)
plt.legend()
Explanation: Test occSim function
End of explanation
N=100
occ_0=0 #first state
occ_sim=np.zeros((n_t*n_day,n_state))
for i in range(N):
sim=occSim(n_state,n_day,tran_blend)
sim=np.reshape(sim,(n_t*n_day,n_state))
occ_sim=occ_sim+sim
Explanation: Simulate occupancy
End of explanation
print occ_sim[:96]
Explanation: Print the first day's occupancy simulation result
End of explanation
occ_sim=occ_sim*1.0/N
occ_sim=np.sum(occ_sim*state,1)
Explanation: Calculate the simulated occupancy level
End of explanation
sample_week_sim=occ_sim[:len(sample_week)]
plt.figure(figsize=(15,5))
plt.plot(sample_week['timestamp'],sample_week['occ'],'b',label='occupancy')
plt.plot(sample_week['timestamp'],sample_week_sim,'r',label='simulation')
plt.xlabel('Time Stamp')
plt.ylabel('Occupancy')
plt.title('Simulated Occupancy of Sample Week')
plt.ylim(0,1)
plt.legend()
Explanation: 6.2 Compare Simulated Occupancy with Ground Truth
End of explanation
print np.where(occ_crop_new['timestamp']==dt.datetime(2015,1,12))
print np.where(occ_crop_new['timestamp']==dt.datetime(2015,10,26))
occ_sim_crop=np.ndarray(shape=(13920-2880),
dtype=[('timestamp',dt.datetime),('occ',float)])
occ_sim_crop['timestamp']=occ_crop_new['timestamp'][2880:13920]
occ_sim_crop['occ']=occ_sim[2880:13920]
print np.shape(occ_sim_crop)
plt.plot(occ_sim_crop['timestamp'])
Explanation: As is shown in the figure, the simulated occupancy perfectly matches the real occupancy. Therefore, we replace the real occupancy with the simulation result in the energy prediction model in the next section. To do this, we need to crop the occupancy to match the energy data.
6.3 Crop Simulated Occupancy
The energy data used in this study has 4 time periods: Jan 12 to Apr 3, Apr 20 to May 15, Jun 22 to Jul 17, and Sep 28 to Dec 4. The time series of occupancy basically overlaps with them, while we still need to cut off some data. Simply, we only need to shorten the occupancy time series.
End of explanation
crop_week=occ_crop[week_start[25]:week_start[26]]
print np.shape(crop_week)
print 'The week cropped is from ',crop_week['timestamp'][0],' to ',crop_week['timestamp'][-1]
plt.figure(figsize=(15,5))
plt.plot(occ_crop_new['occ'][2880:13920][1000:2000],'b',label='occupancy')
plt.plot(occ_sim_crop['occ'][1000:2000],'r',label='simulation')
plt.xlabel('Day')
plt.ylabel('Occupancy')
plt.title('Comparison of Real and Simulated Occupancy')
plt.ylim(0,1)
plt.legend()
Explanation: Also, remember we have cropped one week of data.
End of explanation
new_temp2 = temp[0:10560]
new_elect2 = elect[0:10560]
new_occup = occ_sim_crop['occ'][0:10560]
new_tstamp = tstamp[0:10560]
train_temp2 = []
test_temp2 = []
for i in range(11):
train_temp2.extend(new_temp2[960*i:960*i+480])
test_temp2.extend(new_temp2[960*i+480:960*i+960])
train_elect2 = []
test_elect2 = []
for i in range(11):
train_elect2.extend(new_elect2[960*i:960*i+480])
test_elect2.extend(new_elect2[960*i+480:960*i+960])
train_occup = []
test_occup = []
for i in range(11):
train_occup.extend(new_occup[960*i:960*i+480])
test_occup.extend(new_occup[960*i+480:960*i+960])
train_tstamp2 = []
test_tstamp2 = []
for i in range(11):
train_tstamp2.extend(new_tstamp[960*i:960*i+480])
test_tstamp2.extend(new_tstamp[960*i+480:960*i+960])
train_occup2 = (np.matrix(train_occup)).T
test_occup2 = (np.matrix(test_occup)).T
Explanation: 7. Energy Prediction Model with Occupancy
7.1 Harmonize Energy and Occupancy Data
End of explanation
occup2 = occ_crop_new['occ'][2880:13920]
new_occup2 = occup2[0:10560]
train_occup2 = []
test_occup2 = []
for i in range(11):
train_occup2.extend(new_occup2[960*i:960*i+480])
test_occup2.extend(new_occup2[960*i+480:960*i+960])
train_occup3 = (np.matrix(train_occup2)).T
test_occup3 = (np.matrix(test_occup2)).T
train_X3 = DesignMatrix2(train_temp2, 480, train_T_bound, train_occup3)
train_elect_trans2 = (np.matrix(train_elect2)).T
betahat3 = beta_hat(train_X3, train_elect_trans2)
test_X3 = DesignMatrix2(test_temp2, 480, test_T_bound, test_occup3)
predict_y3 = test_X3 * betahat3
plt.figure(figsize=(15,5))
plt.plot(predict_y3[1000:2000], 'r-', label = "Predicted")
plt.plot(test_elect2[1000:2000], 'b-', label = "Actual")
plt.ylim(100,200)
plt.xlabel("Time",size=15)
plt.ylabel("Electricity [kWh]",size=15)
plt.legend(loc="upper left")
X3 = test_X3
Y3 = np.matrix(test_elect2).T
y_bar3 = np.matrix([[np.mean(Y3)]*len(Y3)]).T
R_squared3 = 1-((Y3-X3*betahat3).T*(Y3-X3*betahat3))/((Y3-y_bar3).T*(Y3-y_bar3))
R_squared3[0,0]
MSE3 = (Y3-X3*betahat3).T*(Y3-X3*betahat3)/(n2-p2)
S_betahat_sqr3 = np.multiply(MSE3, np.linalg.inv(X3.T*X3))
S_betahatk_sqr3 = S_betahat_sqr3.diagonal()
S_betahatk3 = np.sqrt(S_betahatk_sqr3.T)
left3 = np.array(betahat3 - t.isf((1-0.95)/2, n2-p2)*S_betahatk3)
right3 = np.array(betahat3 + t.isf((1-0.95)/2, n2-p2)*S_betahatk3)
CI3 = []
for i in range(487):
CI3.append((left3[i][0], right3[i][0]))
CI3
T3 = []
for i in range(487):
T3.append(np.array(betahat3[i,:])[0][0]/np.array(S_betahatk3[i,:])[0][0])
def rej_coef3(T):
rej_list = []
for i in range(len(T)):
if T[i] > t.isf((1-0.95)/2,n2-p2) or T[i] < -t.isf((1-0.95)/2,n2-p2):
rej_list.append(T[i])
else:
print(T[i],i)
return rej_list
rej_coef3(T3)
Explanation: 7.2 Fit Building Energy Model with Real Occupancy
End of explanation
train_X2 = DesignMatrix2(train_temp2, 480, train_T_bound, train_occup2)
train_elect_trans2 = (np.matrix(train_elect2)).T
betahat2 = beta_hat(train_X2, train_elect_trans2)
test_X2 = DesignMatrix2(test_temp2, 480, test_T_bound, test_occup2)
predict_y2 = test_X2 * betahat2
plt.figure(figsize=(15,5))
plt.plot(predict_y2[1000:2000], 'r-', label = "Predicted")
plt.plot(test_elect2[1000:2000], 'b-', label = "Actual")
plt.ylim(100,200)
plt.xlabel("Time",size=15)
plt.ylabel("Electricity [kWh]",size=15)
plt.legend(loc="upper left")
Explanation: 7.3 Fit Building Energy Model with Simulated Occupancy
End of explanation
X2 = test_X2
Y2 = np.matrix(test_elect2).T
y_bar2 = np.matrix([[np.mean(Y2)]*len(Y2)]).T
R_squared2 = 1-((Y2-X2*betahat2).T*(Y2-X2*betahat2))/((Y2-y_bar2).T*(Y2-y_bar2))
R_squared2[0,0]
n2 = 5280
p2 = 487
t.isf((1-0.95)/2,n2-p2)
MSE2 = (Y2-X2*betahat2).T*(Y2-X2*betahat2)/(n2-p2)
S_betahat_sqr2 = np.multiply(MSE2, np.linalg.inv(X2.T*X2))
S_betahatk_sqr2 = S_betahat_sqr2.diagonal()
S_betahatk2 = np.sqrt(S_betahatk_sqr2.T)
T2 = []
for i in range(487):
T2.append(np.array(betahat2[i,:])[0][0]/np.array(S_betahatk2[i,:])[0][0])
def rej_coef2(T):
rej_list = []
for i in range(len(T)):
if T[i] > t.isf((1-0.95)/2,n2-p2) or T[i] < -t.isf((1-0.95)/2,n2-p2):
rej_list.append(T[i])
else:
print(T[i],i)
return rej_list
len(rej_coef2(T2))
Explanation: T-test and R-square
End of explanation
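# Hedged helper (not in the original notebook): the R^2 computation used above,
# collected into one function for np.matrix inputs.
def r_squared(X, y, beta):
    resid = y - X * beta
    dev = y - np.mean(y)
    return float(1 - (resid.T * resid) / (dev.T * dev))
# e.g. r_squared(test_X2, np.matrix(test_elect2).T, betahat2) reproduces R_squared2[0,0]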
fig, ax1 = plt.subplots(figsize=(15,5))
ax2 = ax1.twinx()
ax2.plot(new_tstamp[1000:2000],new_occup[1000:2000], 'r-',label = 'Occupancy')
ax1.plot(new_tstamp[1000:2000],new_elect2[1000:2000], 'b-',label = 'Electricity')
ax1.set_xlabel('Time')
ax1.set_ylabel('Electricity [kWh]')
ax2.set_ylabel('Occupancy')
ax1.legend()
ax2.legend(loc="upper left")
plt.title('Building Electricity Consumption vs Occupancy')
plt.show()
Explanation: Unfortunately, the occupancy variable is rejected
7.5 Relationship between Occupancy and Energy Consumption
End of explanation |
6,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook illustrates the TubeTK tube NumPy array data structure and how to create histograms of the properties of a VesselTube.
First, import the function for reading a tube file in as a NumPy array, and read in the file.
Step1: The result is a NumPy Record Array where the fields of the array correspond to the properties of a VesselTubeSpatialObjectPoint.
Step2: The length of the array corresponds to the number of points that make up the tubes.
Step3: Individual points can be sliced, or views can be created on individual fields.
Step4: We can easily create a histogram of the radii or visualize the point positions. | Python Code:
import os
import sys
# Path for TubeTK libs
TubeTK_BUILD_DIR=None
if 'TubeTK_BUILD_DIR' in os.environ:
TubeTK_BUILD_DIR = os.environ['TubeTK_BUILD_DIR']
if not os.path.exists(TubeTK_BUILD_DIR):
print('TubeTK_BUILD_DIR not found!')
print(' Set environment variable')
sys.exit(1)
sys.path.append(os.path.join(TubeTK_BUILD_DIR,'Base/Python'))
import tubetk
from tubetk.numpy import tubes_from_file
import os
filepath = os.path.join(os.path.dirname(tubetk.__file__),
'..', '..', '..',
'MIDAS_Data', 'VascularNetwork.tre')
tubes = tubes_from_file(filepath)
Explanation: This notebook illustrates the TubeTK tube NumPy array data structure and how to create histograms of the properties of a VesselTube.
First, import the function for reading a tube file in as a NumPy array, and read in the file.
End of explanation
print(type(tubes))
print(tubes.dtype)
Explanation: The result is a NumPy Record Array where the fields of the array correspond to the properties of a VesselTubeSpatialObjectPoint.
End of explanation
print(len(tubes))
print(tubes.shape)
Explanation: The length of the array corresponds to the number of points that make up the tubes.
End of explanation
print('Entire points 0, 2:')
print(tubes[:4:2])
print('\nPosition of points 0, 2')
print(tubes['Position'][:4:2])
Explanation: Individual points can be sliced, or views can be created on individual fields.
End of explanation
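# A hedged extra example (not in the original notebook): boolean masks also work on
# the record array, e.g. keeping only points whose radius exceeds an arbitrary threshold.
large = tubes[tubes['Radius'] > 0.5]
print('%d points with radius > 0.5' % len(large))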
%pylab inline
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(16, 6))
ax = fig.add_subplot(1, 2, 1)
ax.hist(tubes['Radius'], bins=100)
ax.set_xlabel('Radius')
ax.set_ylabel('Count')
ax = fig.add_subplot(1, 2, 2, projection='3d')
subsample = 100
position = tubes['Position'][::subsample]
radius = tubes['Radius'][::subsample]
ax.scatter(position[:,0], position[:,1], position[:,2], s=(2*radius)**2)
ax.set_title('Point Positions')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z');
Explanation: We can easily create a histogram of the radii or visualize the point positions.
End of explanation |
6,266 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Minimum count of indices to be skipped for every index of Array to keep sum till that index at most T Function to calculate minimum indices to be skipped so that sum till i remains smaller than T ; Store the sum of all indices before i ; Store the elements that can be skipped ; Traverse the array , A []; Store the total sum of elements that needs to be skipped ; Store the number of elements need to be removed ; Traverse from the back of map so as to take bigger elements first ; Update sum ; Update map with the current element ; Driver code ; Given Input ; Function Call
| Python Code::
def skipIndices(N, T, arr):
    # Running sum of the elements seen so far
    total = 0
    # Multiset (value -> count) of earlier elements, i.e. candidates to skip
    count = {}
    for i in range(N):
        # Amount by which the sum up to index i exceeds T
        d = total + arr[i] - T
        k = 0
        if d > 0:
            # Greedily remove the largest stored values first
            for j in sorted(count.keys(), reverse=True):
                x = j * count[j]
                if d <= x:
                    k += (d + j - 1) // j
                    break
                k += count[j]
                d -= x
        total += arr[i]
        count[arr[i]] = count.get(arr[i], 0) + 1
        print(k, end=" ")

if __name__ == '__main__':
    N = 7
    T = 15
    arr = [1, 2, 3, 4, 5, 6, 7]
    skipIndices(N, T, arr)
|
6,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
3. Positional Astronomy
Previous
Step1: 3.1 Equatorial Coordinates (RA,DEC)
3.1.1 The Celestial Sphere
We can use a geographical coordinate system to uniquely identify a position on earth. We normally use the coordinates latitude $L_a$ (to measure north and south) and longitude $L_o$ (to measure east and west) to accomplish this. The equatorial coordinate system is depicted in Fig. 3.1.1 ⤵.
<a id='pos
Step2: 3.1.4 Example
Step3: The cartview function also produces a projected map as a byproduct (it takes the form of a 2D numpy array). We can now replot this projected map using matplotlib (see Fig. 3.1.3 ⤵ <!--\ref{pos | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import HTML
HTML('../style/code_toggle.html')
import healpy as hp
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
import matplotlib
import ephem
Explanation: Outline
Glossary
3. Positional Astronomy
Previous: 3. Positional Astronomy
Next: 3.2 Hour Angle (HA) and Local Sidereal Time (LST)
Import standard modules:
End of explanation
arcturus = ephem.star('Arcturus')
arcturus.compute('2016/2/8',epoch=ephem.J2000)
print('J2000: RA:%s DEC:%s' % (arcturus.ra, arcturus.dec))
arcturus.compute('2016/2/8', epoch=ephem.B1950)
print('B1950: RA:%s DEC:%s' % (arcturus.a_ra, arcturus.a_dec))
Explanation: 3.1 Equatorial Coordinates (RA,DEC)
3.1.1 The Celestial Sphere
We can use a geographical coordinate system to uniquely identify a position on earth. We normally use the coordinates latitude $L_a$ (to measure north and south) and longitude $L_o$ (to measure east and west) to accomplish this. The equatorial coordinate system is depicted in Fig. 3.1.1 ⤵.
<a id='pos:fig:geo'><!--\label{pos:fig:geo}--></a> <img src='figures/geo.svg' width=60%>
Figure 3.1.1: The geographical coordinates latitude $L_a$ and longitude $L_o$.
We also require a coordinate system to map the celestial objects. For all intents and purposes we may think of our universe as being projected onto a sphere of arbitrary radius. This sphere surrounds the Earth and is known as the celestial sphere. This is not a true representation of our universe, but it is a very useful approximate astronomical construct. The celestial equator is obtained by projecting the equator of the earth onto the celestial sphere. The stars themselves do not move on the celestial sphere and therefore have a unique location on it. The Sun is an exception, it changes position in a periodic fashion during the year (as the Earth orbits around the Sun). The path it traverses on the celestial sphere is known as the ecliptic.
3.1.2 The NCP and SCP
The north celestial pole (NCP) is an important location on the celestial sphere and is obtained by projecting the north pole of the earth onto the celestial sphere. The star Polaris is very close to the NCP and serves as a reference when positioning a telescope.
The south celestial pole (SCP) is obtained in a similar way. The imaginary circle known as the celestial equator is in the same plane as the equator of the earth and is obtained by projecting the equator of the earth onto the celestial sphere. The southern hemisphere
counterpart of Polaris is <span style="background-color:cyan">KT:GM: Do you want to add Sigma Octanis to the Glossary?</span> Sigma Octanis.
We use a specific point on the celestial equator from which we measure the location of all other celestial objects. This point is known as the first point of Aries ($\gamma$) <!--\vernal--> or the vernal equinox. The vernal equinox is the point where
the ecliptic intersects the celestial equator (south to north). We discuss the vernal equinox in more detail in $\S$ 3.2.2 ➞ <!--\ref{pos:sec:lst}-->.
3.1.3 Coordinate Definitions:
We use the equatorial coordinates to uniquely identify the location of celestial objects rotating with the celestial sphere around the SCP/NCP axis.
The Right Ascension $\alpha$ - We define the hour circle of an object as the circle on the celestial sphere that crosses the NCP and the object itself, while also perpendicularly intersecting with the celestial equator. The right ascension of an object is the angular distance between the vernal equinox and the hour circle of a celestial object measured along the celestial equator and is measured eastward. It is measured in Hours Minutes Seconds (e.g. $\alpha = 03^\text{h}13^\text{m}32.5^\text{s}$) and spans $360^\circ$ on the celestial sphere from $\alpha = 00^\text{h}00^\text{m}00^\text{s}$ (the coordinates of $\gamma$) to $\alpha = 23^\text{h}59^\text{m}59^\text{s}$.
The Declination $\delta$ - the declination of an object is the angular distance from the celestial equator measured along its hour circle (it is positive in the northern celestial hemisphere and negative in the southern celestial hemisphere). It is measured in Degrees Arcmin Arcsec (e.g. $\delta = -15^\circ23'44''$) which spans from $\delta = -90^\circ00'00''$ (SCP) to $+\delta = 90^\circ00'00''$ (NCP).
The equatorial coordinates are presented graphically in Fig. 3.1.2 ⤵ <!--\ref{pos:fig:equatorial_coordinates}-->.
<div class=warn>
<b>Warning:</b> As for any spherical system, the Right Ascension of the NCP ($\delta=+90^ \circ$) and the SCP ($\delta=-90^ \circ$) are ill-defined. And a source close to the any celestial pole can have an unintuitive Right Ascension.
</div>
<a id='pos:fig:equatorial_coordinates'></a> <!--\label{pos:fig:equatorial_coordinates}--> <img src='figures/equatorial.svg' width=500>
Figure 3.1.2: The equatorial coordinates $\alpha$ and $\delta$. The vernal equinox $\gamma$, the equatorial reference point is also depicted. The vernal
equinox is the point where the ecliptic (the path the sun traverses over one year) intersects the celestial equator. <span style="background-color:cyan">KT:XX: What are the green circles in the image? </span>
<div class=warn>
<b>Warning:</b> One arcminute of the declination axis (e.g. $00^\circ01'00''$) is not equal to one <em>minute</em> in right ascension axis (e.g. $00^\text{h}01^\text{m}00^\text{s}$). <br>
Indeed, in RA the 24$^\text{h}$ circle is mapped to a 360$^\circ$ circle, meaning that 1 hour spans a section of 15$^\circ$. And since 1$^\text{h}$ is 60$^\text{m}$, 1$^\text{m}$ in RA corresponds to $1^\text{m} = \frac{1^\text{h}}{60}=\frac{15^\circ}{60}=0.25^\circ = 15'$. <br>
You should be careful about this difference: **one minute of RA corresponds to 15 arcmin on the sky, and conversely one arcminute corresponds to 4 seconds of time** (i.e. $\text{RA} \; 00^\text{h}01^\text{m}00^\text{s}\neq \text{DEC} \; 00^\circ01'00''$)
</div>
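As a quick numerical sanity check of this conversion (an addition to the original text, using plain arithmetic only):
deg_per_hour = 360.0 / 24                  # 1 h of RA corresponds to 15 deg
print(deg_per_hour / 60)                   # 1 min of RA = 0.25 deg = 15 arcmin
print((1.0 / 60) / deg_per_hour * 3600)    # 1 arcmin corresponds to 4 s of time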
3.1.3 J2000 and B1950
We will be making use of the <cite data-cite=''>pyephem</cite> ⤴ package in the rest of this chapter to help us clarify and better understand some theoretical concepts. The two classes we will be using are the Observer and the Body class. The Observer class acts as a proxy for an array, while the Body class embodies a specific celestial object. In this section we will only make use of the Body class.
Earlier in this section I mentioned that the celestial objects do not move on the celestial sphere and therefore have fixed equatorial coordinates. This is not entirely true. Due to the precession (the change in the orientation of the earth's rotational axis) the location of the stars do in fact change minutely during the course of one generation. That is why we need to link the equatorial coordinates of a celestial object in a catalogue to a specific observational epoch (a specific instant in time). We can then easily compute the true coordinates as they would be today given the equatorial coordinates from a specific epoch as a starting point. There are two popular epochs that are often used, namely J2000 and B1950. Expressed in <cite data-cite=''>UT (Universal Time)</cite> ⤴:
* B1950 - 1949/12/31 22:09:50 UT,
* J2000 - 2000/1/1 12:00:00 UT.
The 'B' and the 'J' serve as a shorthand for the Besselian year and the Julian year respectively. They indicate the length of time used to measure one year while choosing the exact instant in time associated with J2000 and B1950. The Besselian year is based on the concept of a <cite data-cite=''>tropical year</cite> ⤴ and is not used anymore. The Julian year consists of 365.25 days. In the code snippet below we use pyephem to determine the J2000 and B1950 equatorial coordinates of Arcturus.
End of explanation
haslam = hp.read_map('../data/fits/haslam/lambda_haslam408_nofilt.fits')
matplotlib.rcParams.update({'font.size': 10})
proj_map = hp.cartview(haslam,coord=['G','C'], max=2e5, xsize=2000,return_projected_map=True,title="Haslam 408 MHz with no filtering",cbar=False)
hp.graticule()
Explanation: 3.1.4 Example: The 408 MHz Haslam map
To finish things off, let's make sure that given the concepts we have learned in this section we are able to interpret a radio skymap correctly. We will be plotting and inspecting the <cite data-cite=''>Haslam 408 MHz map</cite> ⤴. We load the Haslam map with read_map and view it with cartview. These two functions form part of the <cite data-cite=''>healpy package</cite> ⤴.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
matplotlib.rcParams.update({'font.size': 22})
#replot the projected healpy map
ax.imshow(proj_map[::-1,:],vmax=2e5, extent=[12,-12,-90,90],aspect='auto')
names = np.array(["Vernal Equinox","Cassiopeia A","Sagitarius A","Cygnus A","Crab Nebula","Fornax A","Pictor A"])
ra = np.array([0,(23 + 23./60 + 24./3600)-24,(17 + 42./60 + 9./3600)-24,(19 + 59./60 + 28./3600)-24,5+34./60+32./3600,3+22./60+41.7/3600,5+19./60+49.7/3600])
dec = np.array([0,58+48./60+54./3600,-28-50./60,40+44./60+2./3600,22+52./3600,-37-12./60-30./3600,-45-46./60-44./3600])
#mark the positions of important radio sources
ax.plot(ra,dec,'ro',ms=20,mfc="None")
for k in range(len(names)):
ax.annotate(names[k], xy = (ra[k],dec[k]), xytext=(ra[k]+0.8, dec[k]+5))
#create userdefined axis labels and ticks
ax.set_xlim(12,-12)
ax.set_ylim(-90,90)
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.yticks(ticks)
ticks = np.array([12,10,8,6,4,2,0,-2,-4,-8,-6,-10,-12])
plt.xticks(ticks)
plt.xlabel("Right Ascension [$h$]")
plt.ylabel("Declination [$^{\circ}$]")
plt.title("Haslam 408 MHz with no filtering")
#relabel the tick values
fig.canvas.draw()
labels = [item.get_text() for item in ax.get_xticklabels()]
labels = np.array(["12$^h$","10$^h$","8$^h$","6$^h$","4$^h$","2$^h$","0$^h$","22$^h$","20$^h$","18$^h$","16$^h$","14$^h$","12$^h$"])
ax.set_xticklabels(labels)
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
Explanation: The cartview function also produces a projected map as a byproduct (it takes the form of a 2D numpy array). We can now replot this projected map using matplotlib (see Fig. 3.1.3 ⤵ <!--\ref{pos:fig:haslam_map}-->). We do so in the code snippet that follows.
End of explanation |
6,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aula 10 Discrete Wavelets Transform
Exercícios
isccsym
Não é fácil projetar um conjunto de testes para garantir que o seu programa esteja correta.
No caso em que o resultado é Falso, i.e., não simétrico, basta um pixel não ser simétrico para
o resultado ser Falso. Com isso, o conjunto de teste com uma imagem enorme com muitos pixels
não simétricos não é bom teste. Por exemplo, neste caso, faltou um teste onde tudo seja
simétrico, com exceção da origem (F[0,0]).
Solução apresentada pelo Marcelo, onde é comparado com a imagem refletida com translação periódica de 1 deslocamento é bem conceitual. A solução do Deângelo parece ser a mais rápida
Step1: Exercises for the next class
Write a function that enlarges/reduces an image using interpolation in the frequency domain,
as discussed in class. Compare the results with scipy.misc.imresize, both in terms of
spectrum quality and execution time.
Students with an odd RA (student ID) should implement the enlargements and those with an even RA
should implement the reductions.
Function name
Step2: Modify the pconv function to run in the frequency domain whenever the number of
nonzero elements of the smaller image is greater than a certain value, say 15.
Function name
Step3: Discrete Wavelet Transform
We will use a notebook that resulted from a project in
previous years.
DWT | Python Code:
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: Lecture 10 - Discrete Wavelets Transform
Exercises
isccsym
It is not easy to design a set of tests that guarantees your program is correct.
When the result is False, i.e. not symmetric, a single non-symmetric pixel is enough for
the result to be False. Because of this, a test set consisting of a huge image with many
non-symmetric pixels is not a good test. In this case, for example, a test was missing in which
everything is symmetric except for the origin (F[0,0]).
The solution presented by Marcelo, which compares the image with its reflection under a periodic translation of 1 sample, is the most conceptual one. Deângelo's solution appears to be the fastest: no copy is made and only half of the pixels are compared.
There is still a small issue to be worked out in the approach of using only half of the pixels
for the comparison.
minify
Reducing the image should be done with an initial low-pass filtering with cutoff period 2.r, where r is
the image reduction factor, followed by resampling (decimation).
To perform the reduction in the frequency domain, it would suffice to crop the spectrum of the original image
and take the inverse Fourier transform.
resize
It was found that the best enlargement/reduction function is scipy.misc.imresize, both in quality and
in speed.
End of explanation
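# A minimal sketch (not the required solution) of the frequency-domain reduction idea
# discussed above: crop the centred spectrum of the image and invert the FFT.
# It assumes f is a 2D grayscale float array and r is an integer reduction factor.
def freq_reduce_sketch(f, r):
    F = np.fft.fftshift(np.fft.fft2(f))
    H, W = F.shape
    h, w = H // r, W // r
    cy, cx = H // 2, W // 2
    Fc = F[cy - h // 2:cy - h // 2 + h, cx - w // 2:cx - w // 2 + w]
    return np.real(np.fft.ifft2(np.fft.ifftshift(Fc))) / (r * r)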
def imresize(f, size):
'''
Resize an image
Parameters
----------
f: input image
size: integer, float or tuple
- integer: percentage of current size
- float: fraction of current size
- tuple: new dimensions
Returns
-------
output image resized
'''
return f
Explanation: Exercises for the next class
Write a function that enlarges/reduces an image using interpolation in the frequency domain,
as discussed in class. Compare the results with scipy.misc.imresize, both in terms of
spectrum quality and execution time.
Students with an odd RA (student ID) should implement the enlargements and those with an even RA
should implement the reductions.
Function name: imresize
End of explanation
def pconvfft(f,h):
'''
Periodical convolution.
This is an efficient implementation of the periodical convolution.
    This implementation should be commutative, i.e., pconvfft(f,h)==pconvfft(h,f).
This implementation should be fast. If the number of pixels used in the
convolution is larger than 15, it uses the convolution theorem to implement
the convolution.
Parameters:
-----------
f: input image (can be complex, up to 2 dimensions)
h: input kernel (can be complex, up to 2 dimensions)
Outputs:
image of the result of periodical convolution
'''
return f
Explanation: Modify the pconv function to run in the frequency domain whenever the number of
nonzero elements of the smaller image is greater than a certain value, say 15.
Function name: pconvfft
End of explanation
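# Hedged sketch (not the required solution) of the convolution-theorem branch only:
# periodic convolution computed in the frequency domain. It assumes h has already been
# zero-padded to f.shape; a complete pconvfft would also pad/centre h and fall back to
# the direct method when the smaller image has few nonzero samples.
def pconv_freq_sketch(f, h):
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))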
/home/lotufo/ia898/dev/wavelets.ipynb
Explanation: Discrete Wavelet Transform
We will use a notebook that resulted from a project in
previous years.
DWT
End of explanation |
6,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sparsified K-means Heuristic Plots
This file generates a bunch of figures showing heuristically how sparsified k-means works.
Step2: Data Matrices
X is the data matrix, U is the centroid matrix. We define these, and then set up a few tricks to mask certain regions of the arrays for plotting selected columns. The idea is to generate two copies of X, one which is used to plot only the column in which we are interested, the other of which is used to plot the remainder of the data. We make two different plots so that we can set different alpha values (transparency) for the column in question vs. the rest of the data.
Step4: Color Functions
We import the colors from files called 'CM.txt' (for main colors) and 'CA.txt' (for alternate colors). These text files are generated from www.paletton.com, by exporting the colors as text files. The text parsing is hacky but works fine for now. This makes it easy to try out different color schemes by directly exporting from Paletton.
We use the colors to set up a colormap that we'll apply to the data matrices. We manually set the boundaries on the colormap to agree with how we defined the various matrices above. This way we can get different colored blocks, etc.
Step11: Plotting Functions
Step12: Generate the Plots | Python Code:
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
from scipy.linalg import hadamard
from scipy.fftpack import dct
%matplotlib inline
n = 10 #dimension of data (rows in plot)
K = 3 #number of centroids
m = 4 #subsampling dimension
p = 6 #number of observations (columns in plot)
np.random.seed(0)
DPI = 300 #figure DPI for saving
Explanation: Sparsified K-means Heuristic Plots
This file generates a bunch of figures showing heuristically how sparsified k-means works.
End of explanation
def this_is_dumb(x):
    """Surely there's a better way but this works. Permute X."""
y = np.copy(x)
np.random.shuffle(y)
return y
## Preconditioning plot
# Unconditioned
vals_unconditioned = [i for i in range(-5,5)]
X_unconditioned = np.array([this_is_dumb(vals_unconditioned) for i in range(p)]).T
# Conditioned
D = np.diag(np.random.choice([-1,1],n))
X_conditioned = dct(np.dot(D,X_unconditioned), norm = 'ortho')
## Subsampling plots
# Define the entries to set X
vals = [1 for i in range(m)]
vals.extend([0 for i in range(n-m)])
# Define X by permuting the values.
X = np.array([this_is_dumb(vals) for i in range(p)]).T
# means matrix
U = np.zeros((n,K))
# This is used to plot the full data in X (before subsampling)
Z = np.zeros_like(X)
# Generate two copies of X, one to plot just the column in question (YC) and one to plot the others (YO)
def get_col_X(col):
YO = np.copy(X)
YO[:,col-1] = -1
YC = - np.ones_like(X)
YC[:,col-1] = X[:,col-1]
return [YO,YC]
# Generate a copy of U modified to plot the rows selected by the column we chose of X
def get_rows_U(col):
US = np.copy(U)
US[np.where(X[:,col-1]==1)[0],:]=1
return US
Explanation: Data Matrices
X is the data matrix, U is the centroid matrix. We define these, and then set up a few tricks to mask certain regions of the arrays for plotting selected columns. The idea is to generate two copies of X, one which is used to plot only the column in which we are interested, the other of which is used to plot the remainder of the data. We make two different plots so that we can set different alpha values (transparency) for the column in question vs. the rest of the data.
End of explanation
def read_colors(path_in):
    """Crappy little function to read in the text file defining the colors."""
mycolors = []
with open(path_in) as f_in:
lines = f_in.readlines()
for line in lines:
line = line.lstrip()
if line[0:5] == 'shade':
mycolors.append(line.split("=")[1].strip())
return mycolors
CM = read_colors('CM.txt')
CA = read_colors('CA.txt')
CD = ['#404040','#585858','#989898']
# Set the axes colors
mpl.rc('axes', edgecolor = CD[0], linewidth = 1.3)
# Set up the colormaps and bounds
cmapM = mpl.colors.ListedColormap(['none', CM[1], CM[3]])
cmapA = mpl.colors.ListedColormap(['none', CA[1], CA[4]])
bounds = [-1,0,1,2]
normM = mpl.colors.BoundaryNorm(bounds, cmapM.N)
normA = mpl.colors.BoundaryNorm(bounds, cmapA.N)
bounds_unconditioned = [i for i in range(-5,6)]
cmap_unconditioned = mpl.colors.ListedColormap(CA[::-1] + CM)
norm_unconditioned = mpl.colors.BoundaryNorm(bounds_unconditioned, cmap_unconditioned.N)
Explanation: Color Functions
We import the colors from files called 'CM.txt' (for main colors) and 'CA.txt' (for alternate colors). These text files are generated from www.paletton.com, by exporting the colors as text files. The text parsing is hacky but works fine for now. This makes it easy to try out different color schemes by directly exporting from Paletton.
We use the colors to set up a colormap that we'll apply to the data matrices. We manually set the boundaries on the colormap to agree with how we defined the various matrices above. This way we can get different colored blocks, etc.
End of explanation
def drawbrackets(ax):
Way hacky. Draws the brackets around X.
ax.annotate(r'$n$ data points', xy=(0.502, 1.03), xytext=(0.502, 1.08), xycoords='axes fraction',
fontsize=14, ha='center', va='bottom',
arrowprops=dict(arrowstyle='-[, widthB=4.6, lengthB=0.35', lw=1.2))
ax.annotate(r'$p$ dimensions', xy=(-.060, 0.495), xytext=(-.22, 0.495), xycoords='axes fraction',
fontsize=16, ha='center', va='center', rotation = 90,
arrowprops=dict(arrowstyle='-[, widthB=6.7, lengthB=0.36', lw=1.2, color='k'))
def drawbracketsU(ax):
ax.annotate(r'$K$ centroids', xy=(0.505, 1.03), xytext=(0.505, 1.08), xycoords='axes fraction',
fontsize=14, ha='center', va='bottom',
arrowprops=dict(arrowstyle='-[, widthB=2.25, lengthB=0.35', lw=1.2))
def formatax(ax):
    """Probably want to come up with a different way to do this. Sets a bunch of formatting options we want."""
ax.tick_params(
axis='both', # changes apply to both axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
left='off',
right='off',
labelbottom='off',
labelleft = 'off') # labels along the bottom edge are off
ax.set_xticks(np.arange(0.5, p-.5, 1))
ax.set_yticks(np.arange(0.5, n-.5, 1))
ax.grid(which='major', color = CD[0], axis = 'x', linestyle='-', linewidth=1.3)
ax.grid(which='major', color = CD[0], axis = 'y', linestyle='--', linewidth=.5)
def drawbox(ax,col):
    """Draw the gray box around the column."""
s = col-2
box_X = ax.get_xticks()[0:2]
box_Y = [ax.get_yticks()[0]-1, ax.get_yticks()[-1]+1]
box_X = [box_X[0]+s,box_X[1]+s,box_X[1]+s,box_X[0]+s, box_X[0]+s]
box_Y = [box_Y[0],box_Y[0],box_Y[1],box_Y[1], box_Y[0]]
ax.plot(box_X,box_Y, color = CD[0], linewidth = 3, clip_on = False)
def plot_column_X(ax,col):
    """Draw data matrix with a single column highlighted."""
formatax(ax)
drawbrackets(ax)
drawbox(ax,col)
YO,YC = get_col_X(col)
ax.imshow(YO,
interpolation = 'none',
cmap=cmapM,
alpha = 0.8,
norm=normM)
ax.imshow(YC,
interpolation = 'none',
cmap=cmapM,
norm=normM)
def plot_column_U(ax,col):
    """Draw means matrix with rows corresponding to col highlighted."""
formatax(ax)
drawbracketsU(ax)
US = get_rows_U(col)
ax.imshow(US,
interpolation = 'none',
cmap=cmapA,
norm=normA)
def plot_column_selection(col,fn,save=False):
    """This one actually generates the plots. Wraps plot_column_X and plot_column_U,
    saves the fig if we want to."""
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
plot_column_X(ax0,col)
plot_column_U(ax1,col)
if save == True:
fig.savefig(fn,dpi=DPI)
else:
plt.show()
Explanation: Plotting Functions
End of explanation
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(X_unconditioned,
interpolation = 'none',
cmap=cmap_unconditioned,
norm=norm_unconditioned)
ax1 = plt.subplot(gs[1])
formatax(ax1)
ax1.imshow(X_conditioned,
interpolation = 'none',
cmap=cmap_unconditioned,
norm=norm_unconditioned)
#ax1.imshow(X_unconditioned,
# interpolation = 'none',
# cmap=cmap_unconditioned,
# norm=norm_unconditioned)
plt.show()
# Make a plot showing the system before we subsample.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(Z,
interpolation = 'none',
cmap=cmapM,
norm=normM)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat0.png',dpi=DPI)
# Plot the subsampled system.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(X,
interpolation = 'none',
cmap=cmapM,
norm=normM)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat1.png',dpi=DPI)
# Pick out the first column.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
drawbox(ax0,1)
plot_column_X(ax0,1)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat2.png',dpi=DPI)
# make all 6 "final plots".
for i in range(1,p+1):
fn = 'col' + str(i) + '.png'
plot_column_selection(i,fn,save=True)
Explanation: Generate the Plots
End of explanation |
6,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
enterprise Data Structures
This guide will give an introduction to the unique data structures used in enterprise. These are all designed with the goal of making this code as user-friendly as possible, both for the end user and the developer.
Step1: Class Factories
The enterprise code makes heavy use of so-called class factories. Class factories are functions that return classes (not objects of class instances). A simple example is as follows
Step2: In the example above we see that the arguments arg1 and arg2 are seen by both instances a1 and a2; however these instances were intantiated with different input arguments iarg1 and iarg2. So we see that class-factories are great when we want to give "global" parameters to a class without having to pass them on initialization. This also allows us to mix and match classes, as we will do in enterprise before we instantiate them.
The Pulsar class
The Pulsar class is a simple data structure that stores all of the important information about a pulsar that is obtained from a timing package such as the TOAs, residuals, error-bars, flags, design matrix, etc.
This class is instantiated with a par and a tim file. Full documentation on this class can be found here.
Step3: This Pulsar object is then passed to other enterprise data structures in a loosley coupled way in order to interact with the pulsar data.
The Parameter class
In enterprise signal parameters are set by specifying a prior distribution (i.e., Uniform, Normal, etc.). These Parameters are how enterprise builds signals. Below we will give an example of this functionality.
Step4: Uniform is a class factory that returns a class. The parameter is then intialized via a name. This way, a single parameter class can be initialized for multiple signal parameters with different names (i.e. EFAC per observing backend, etc). Once the parameter is initialized then you then have access to many useful methods.
Step5: The Function structure
In enterprise we have defined a special data structure called Function. This data structure provides the user with a way to use and combine several different enterprise components in a user friendly way. More explicitly, it converts and standard function into an enterprise Function which can extract information from the Pulsar object and can also interact with enterprise Parameters.
[put reference to docstring here]
For example, consider the function
Step6: Notice that the first positional argument of the function is toas, which happens to be a name of an attribute in the Pulsar class and the keyword arguments specify the default parameters for this function.
The decorator converts this standard function to a Function which can be used in two ways
Step7: the second way is to use it as a Function
Step8: Here we see that Function is actually a class factory, that is, when initialized with enterprise Parameters it returns a class that is initialized with a name and a Pulsar object as follows
Step9: Now this Function object carries around instances of the Parameter classes given above for this particular function and Pulsar
Step10: Most importantly it can be called in three different ways
Step11: or we can give it new fixed parameters
Step12: or most importantly we can give it a parameter dictionary with the Parameter names as keys. This is how Functions are use internally inside enterprise.
Step13: Notice that the last two methods give the same answer since we gave it the same values just in different ways. So you may be thinking
Step14: Make your own Function
To define your own Function all you have to do is to define a function with these rules in mind.
If you want to use Pulsar attributes, define them as positional arguments with the same name as used in the Pulsar class (see here for more information.
Any arguments that you may use as Parameters must be keyword arguments (although you can have others that aren't Parameters)
Add the @function decorator.
And thats it! You can now define your own Functions with minimal overhead and use them in enterprise or for tests and simulations or whatever you want!
The Selection structure
In the course of our analysis it is useful to split different signals into pieces. The most common flavor of this is to split the white noise parameters (i.e., EFAC, EQUAD, and ECORR) by observing backend system. The Selection structure is here to make this as smooth and versatile as possible.
The Selection structure is also a class-factory that returns a specific selection dictionary with keys and Boolean arrays as values.
This will become more clear with an example. Lets say that you want to split our parameters between the first and second half of the dataset, then we can define the following function
Step15: This function will return a dictionary with keys (i.e. the names of the different subsections) t1 and t2 and boolean arrays corresponding to the first and second halves of the data span, respectively. So for a simple input we have
Step16: To pass this to enterprise we turn it into a Selection via
Step17: As we have stated, this is class factory that will be initialized inside enterprise signals with a Pulsar object in a very similar way to Functions.
Step18: The Selection object has a method masks that uses the Pulsar object to evaluate the arguments of cut_half (these can be any number of Pulsar attributes, not just toas). The Selection object can also be called to return initialized Parameters with the split names as follows | Python Code:
% matplotlib inline
%config InlineBackend.figure_format = 'retina'
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import enterprise
from enterprise.pulsar import Pulsar
import enterprise.signals.parameter as parameter
from enterprise.signals import utils
from enterprise.signals import signal_base
from enterprise.signals import selections
from enterprise.signals.selections import Selection
datadir = enterprise.__path__[0] + '/datafiles/ng9/'
Explanation: enterprise Data Structures
This guide will give an introduction to the unique data structures used in enterprise. These are all designed with the goal of making this code as user-friendly as possible, both for the end user and the developer.
End of explanation
def A(farg1, farg2):
class A(object):
def __init__(self, iarg):
self.iarg = iarg
def print_info(self):
print('Object instance {}\nInstance argument: {}\nFunction args: {} {}\n'.format(
self, self.iarg, farg1, farg2))
return A
# define class A with arguments that can be seen within the class
a = A('arg1', 'arg2')
# instantiate 2 instances of class A with different arguments
a1 = a('iarg1')
a2 = a('iarg2')
# call print_info method
a1.print_info()
a2.print_info()
Explanation: Class Factories
The enterprise code makes heavy use of so-called class factories. Class factories are functions that return classes (not objects of class instances). A simple example is as follows:
End of explanation
psr = Pulsar(datadir+'/B1855+09_NANOGrav_9yv1.gls.par', datadir+'/B1855+09_NANOGrav_9yv1.tim')
Explanation: In the example above we see that the arguments arg1 and arg2 are seen by both instances a1 and a2; however these instances were instantiated with different input arguments iarg1 and iarg2. So we see that class-factories are great when we want to give "global" parameters to a class without having to pass them on initialization. This also allows us to mix and match classes, as we will do in enterprise before we instantiate them.
The Pulsar class
The Pulsar class is a simple data structure that stores all of the important information about a pulsar that is obtained from a timing package such as the TOAs, residuals, error-bars, flags, design matrix, etc.
This class is instantiated with a par and a tim file. Full documentation on this class can be found here.
End of explanation
# lets define an efac parameter with a uniform prior from [0.5, 5]
efac = parameter.Uniform(0.5, 5)
print(efac)
Explanation: This Pulsar object is then passed to other enterprise data structures in a loosely coupled way in order to interact with the pulsar data.
The Parameter class
In enterprise signal parameters are set by specifying a prior distribution (i.e., Uniform, Normal, etc.). These Parameters are how enterprise builds signals. Below we will give an example of this functionality.
End of explanation
# initialize efac parameter with name "efac_1"
efac1 = efac('efac_1')
print(efac1)
# return parameter name
print(efac1.name)
# get pdf at a point (log pdf is access)
print(efac1.get_pdf(1.3), efac1.get_logpdf(1.3))
# return 5 samples from this prior distribution
print(efac1.sample(n=5))
Explanation: Uniform is a class factory that returns a class. The parameter is then initialized via a name. This way, a single parameter class can be initialized for multiple signal parameters with different names (i.e. EFAC per observing backend, etc). Once the parameter is initialized you then have access to many useful methods.
End of explanation
@signal_base.function
def sine_wave(toas, log10_A=-7, log10_f=-8):
return 10**log10_A * np.sin(2*np.pi*toas*10**log10_f)
Explanation: The Function structure
In enterprise we have defined a special data structure called Function. This data structure provides the user with a way to use and combine several different enterprise components in a user-friendly way. More explicitly, it converts a standard function into an enterprise Function which can extract information from the Pulsar object and can also interact with enterprise Parameters.
[put reference to docstring here]
For example, consider the function:
End of explanation
# treat it just as a standard function with a vector input
sw = sine_wave(np.array([1,2,3]), log10_A=-8, log10_f=-7.5)
print(sw)
Explanation: Notice that the first positional argument of the function is toas, which happens to be a name of an attribute in the Pulsar class and the keyword arguments specify the default parameters for this function.
The decorator converts this standard function to a Function which can be used in two ways: the first way is to treat it like any other function.
End of explanation
# or use it as an enterprise function
sw_function = sine_wave(log10_A=parameter.Uniform(-10,-5), log10_f=parameter.Uniform(-9, -7))
print(sw_function)
Explanation: the second way is to use it as a Function:
End of explanation
sw2 = sw_function('sine_wave', psr=psr)
print(sw2)
Explanation: Here we see that Function is actually a class factory, that is, when initialized with enterprise Parameters it returns a class that is initialized with a name and a Pulsar object as follows:
End of explanation
print(sw2.params)
Explanation: Now this Function object carries around instances of the Parameter classes given above for this particular function and Pulsar
End of explanation
print(sw2())
Explanation: Most importantly it can be called in three different ways:
If given without parameters it will fall back on the defaults given in the original function definition
End of explanation
print(sw2(log10_A=-8, log10_f=-6.5))
Explanation: or we can give it new fixed parameters
End of explanation
params = {'sine_wave_log10_A':-8, 'sine_wave_log10_f':-6.5}
print(sw2(params=params))
Explanation: or most importantly we can give it a parameter dictionary with the Parameter names as keys. This is how Functions are use internally inside enterprise.
End of explanation
def sine_wave(toas, log10_A=-7, log10_f=-8):
return 10**log10_A * np.sin(2*np.pi*toas*10**log10_f)
sw3 = signal_base.Function(sine_wave, log10_A=parameter.Uniform(-10,-5),
log10_f=parameter.Uniform(-9, -7))
print(sw3)
Explanation: Notice that the last two methods give the same answer since we gave it the same values just in different ways. So you may be thinking: "Why did we pass the Pulsar object on initialization?" or "Wait. How does it know about the toas?!". Well the first question answers the second. By passing the pulsar object it grabs the toas attribute internally. This feature, combined with the ability to recognize Parameters and the ability to call the original function as we always would are the main strengths of Function, which is used heavily in enterprise.
Note that if we define a function without the decorator then we can still obtain a Function via:
End of explanation
def cut_half(toas):
midpoint = (toas.max() + toas.min()) / 2
return dict(zip(['t1', 't2'], [toas <= midpoint, toas > midpoint]))
Explanation: Make your own Function
To define your own Function all you have to do is to define a function with these rules in mind.
If you want to use Pulsar attributes, define them as positional arguments with the same name as used in the Pulsar class (see here for more information).
Any arguments that you may use as Parameters must be keyword arguments (although you can have others that aren't Parameters)
Add the @function decorator.
And that's it! You can now define your own Functions with minimal overhead and use them in enterprise or for tests and simulations or whatever you want!
The Selection structure
In the course of our analysis it is useful to split different signals into pieces. The most common flavor of this is to split the white noise parameters (i.e., EFAC, EQUAD, and ECORR) by observing backend system. The Selection structure is here to make this as smooth and versatile as possible.
The Selection structure is also a class-factory that returns a specific selection dictionary with keys and Boolean arrays as values.
This will become clearer with an example. Let's say that we want to split our parameters between the first and second half of the dataset; then we can define the following function:
End of explanation
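# A hedged illustration (not from the original guide) of the three rules above, following
# the same pattern as sine_wave: 'toas' is a Pulsar attribute (rule 1), 'log10_slope' is a
# keyword argument that can be promoted to a Parameter (rule 2), and the decorator does the
# conversion (rule 3). The function itself is made up purely for illustration.
@signal_base.function
def linear_drift(toas, log10_slope=-10):
    return 10**log10_slope * (toas - toas.min())
drift_fn = linear_drift(log10_slope=parameter.Uniform(-12, -8))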
toas = np.array([1,2,3,4])
print(cut_half(toas))
Explanation: This function will return a dictionary with keys (i.e. the names of the different subsections) t1 and t2 and boolean arrays corresponding to the first and second halves of the data span, respectively. So for a simple input we have:
End of explanation
ch = Selection(cut_half)
print(ch)
Explanation: To pass this to enterprise we turn it into a Selection via:
End of explanation
ch1 = ch(psr)
print(ch1)
print(ch1.masks)
Explanation: As we have stated, this is a class factory that will be initialized inside enterprise signals with a Pulsar object in a very similar way to Functions.
End of explanation
# make efac class factory
efac = parameter.Uniform(0.1, 5.0)
# now give it to selection
params, masks = ch1('efac', efac)
# named parameters
print(params)
# named masks
print(masks)
Explanation: The Selection object has a method masks that uses the Pulsar object to evaluate the arguments of cut_half (these can be any number of Pulsar attributes, not just toas). The Selection object can also be called to return initialized Parameters with the split names as follows:
End of explanation |
6,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Attribution
Step1: Vectors and Matrices
Linear algebra is the study of vectors and matrices and how they can be manipulated to perform various calculations.
Consider functions which take several input arguments and produce several output arguments. If we stack up the input arguments into a vector $\mathbf{x}$ and the outputs into a vector $\mathbf{y}$, then a function $\mathbf{y} = f(\mathbf{x})$ is said to be linear if
Step2: Multiplying a vector or matrix by a scalar just multiplies each element by the scalar
Step3: Matrix-vector and matrix-matrix multiplication
A good way to think of an $n \times m$ matrix $F$ is as a machine that eats $m$ sized vectors and spits out $n$ sized vectors. This conversion process is known as (left) multiplying by $F$ and has many similarities to scalar multiplication, but also a few differences. Most importantly, the machine only accepts inputs of the right size.
Step4: Like scalar multiplication, matrix multiplication is distributive and associative
Step5: Note that in the above, we flipped or "transposed" the matrix. This interchanges the rows and columns, and in the example above, made the shapes compatible for matrix-matrix multiplication.
Step6: Unlike scalar multiplication, matrix multiplication is not commutative
Step7: For more information about efficiently computing norms (and also how to call BLAS directly from Python) see this blog post.
The Frobenius norm of a matrix $||A||^2$ similarly adds up the squares of all the matrix elements.
Step8: Inverses and Determinants
First, let's consider the concept of reversing or undoing or inverting the function represented by a matrix $A$. For a function to be invertible, there needs to be a one-to-one relationship between inputs and outputs so that given the output you can always say exactly what the input was. In other words, we need a function which, when composed with $A$ gives back the original vector. Such a function -- if it exists -- is called the inverse of $A$ and the matrix is denoted $A^{-1}$.
In matrix terms, we seek a matrix that left multiplies $A$ to give the identity matrix
Step9: The matrix determinant is a scalar quantity, normally denoted $|A|$ or $\text{det}(A)$ whose absolute value measures how much the matrix "stretches" or "squishes" volume as it transforms its inputs to outputs and whose sign indicates whether the transformation is orientation preserving. Matrices with large determinants do (on average) a lot of stretching and those with small determinants do a lot of squishing.
Matrices with zero determinant have rank less than the number of rows and and actually collapse some of their input space into a line or hyperplane (pancake) in the output space, and this can be thought of doing "infinite squishing". Conventionally, the determinant is only defined for square matrices, but there is a natural extension to rectangular ones using the singular value decomposition.
Step10: Fundamental Matrix Equations
The two most important matrix equations are the system of linear equations
Step11: Linear regression | Python Code:
%matplotlib notebook
# Note that this is the usual way that I import Numpy and Matplotlib
import numpy as np
import matplotlib.pyplot as plt
Explanation: Attribution:
Most material based on Sam Roweis' Linear Algebra Review
End of explanation
a = np.array([1, 2, 3])
b = np.ones(3,)
print a
print b
print a + b
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.arange(6).reshape(A.shape)
print A
print B
print A + B
Explanation: Vectors and Matrices
Linear algebra is the study of vectors and matrices and how they can be manipulated to perform various calculations.
Consider functions which take several input arguments and produce several output arguments. If we stack up the input arguments into a vector $\mathbf{x}$ and the outputs into a vector $\mathbf{y}$, then a function $\mathbf{y} = f(\mathbf{x})$ is said to be linear if:
$$f\left(\alpha \mathbf{x} + \beta \mathbf{u} \right) = \alpha f\left( \mathbf{x} \right) + \beta f \left( \mathbf{u} \right)$$
for all scalars, $\alpha, \beta$ and all vectors $\mathbf{x}, \mathbf{u}$. In other words, scaling the input scales the output and summing inputs sums their outputs.
Now for an amazing fact:
All functions which are linear can be written in the form of a matrix $F$ which left multiplies the input argument $\mathbf{x}$:
$$\mathbf{y} = F \mathbf{x}$$
Furthermore, all matrix relations like the one above represent linear functions from their inputs to their outputs.
Another interesting fact is that the composition of two linear functions is still linear:
$$g\left(f\left(\mathbf{x}\right)\right) = GF\mathbf{x} = H \mathbf{x} = h(\mathbf{x})$$
The entire study of multiple-input multiple-output linear functions can be reduced to the study of vectors and matrices.
Multiplication, Addition, Transposition
Adding up two vectors or two matrices is easy: just add their corresponding elements (of course the two must be the same shape!):
End of explanation
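# A quick numerical check (not in the original notes) that composing two linear maps
# corresponds to multiplying their matrices: G(F x) == (G F) x.
F = np.array([[1., 2.], [3., 4.], [5., 6.]])   # maps R^2 -> R^3
G = np.array([[1., 0., -1.], [2., 1., 0.]])    # maps R^3 -> R^2
x = np.array([1., -2.])
print(np.allclose(G.dot(F.dot(x)), G.dot(F).dot(x)))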
print 2 * a
print 0.5 * A
Explanation: Multiplying a vector or matrix by a scalar just multiplies each element by the scalar:
End of explanation
# In Numpy, both matrix-vector and matrix-matrix multiplication is performed by np.dot
print A.shape
print a.shape
print np.dot(A, a)
print A.dot(a)
print np.dot(A, a[:2]) # not compatible sizes
Explanation: Matrix-vector and matrix-matrix multiplication
A good way to think of an $n \times m$ matrix $F$ is as a machine that eats $m$ sized vectors and spits out $n$ sized vectors. This conversion process is known as (left) multiplying by $F$ and has many similarities to scalar multiplication, but also a few differences. Most importantly, the machine only accepts inputs of the right size.
End of explanation
print np.dot(A, A.T)
Explanation: Like scalar multiplication, matrix multiplication is distributive and associative:
$$
\begin{aligned}
F(\mathbf{a} + \mathbf{b}) & = F\mathbf{a} + F\mathbf{b}\
G(F\mathbf{a}) & = (GF)\mathbf{a}
\end{aligned}
$$
One way to think of this is that the matrix product $GF$ is the equivalent linear operator you get if you compose the action of $F$ followed by the action of $G$.
Matrix-matrix multiplication can be thought of as a sequence of matrix-vector multiplications, one for each column, whose results get stacked beside each other in columns to form a new matrix. In general, we can think of column vectors of length $k$ as just $k \times 1$ and row vectors as $1 \times k$ matrices. This eliminates any distinction between matrix-matrix and matrix-vector multiplication.
End of explanation
print A.shape
print A.T.shape
Explanation: Note that in the above, we flipped or "transposed" the matrix. This interchanges the rows and columns, and in the example above, made the shapes compatible for matrix-matrix multiplication.
End of explanation
print np.dot(a.T, a)
print np.dot(a, a) # note we didn't actually need the transpose - numpy automatically does dot product with two vector inputs
print np.linalg.norm(a)**2 # this is a more powerful function for computing general norms
Explanation: Unlike scalar multiplication, matrix multiplication is not commutative:
$$ F \mathbf{a} \neq \mathbf{a} F $$
Multiplying a vector by itself (transposed) gives a scalar $\mathbf{x}^T \mathbf{x}$ which is known as the (squared) norm or squared length of the vector and is written $||\mathbf{x}||^2$. This measure adds up the sum of the squares of the elements of the vector. For much more about norms, see Goodfellow et al. Chapter 2.
End of explanation
print np.sum(A * A) # note * is element-wise multiplication, not matrix-matrix multiplication
print np.linalg.norm(A)**2 # by default, linalg.norm() computes Frobenius norm for matrix input
Explanation: For more information about efficiently computing norms (and also how to call BLAS directly from Python) see this blog post.
The Frobenius norm of a matrix $||A||^2$ similarly adds up the squares of all the matrix elements.
End of explanation
C = A.dot(A.T) # A trick to make an invertible matrix
C_inv = np.linalg.inv(C)
print C
print C_inv
print C_inv.dot(C)
Explanation: Inverses and Determinants
First, let's consider the concept of reversing or undoing or inverting the function represented by a matrix $A$. For a function to be invertible, there needs to be a one-to-one relationship between inputs and outputs so that given the output you can always say exactly what the input was. In other words, we need a function which, when composed with $A$ gives back the original vector. Such a function -- if it exists -- is called the inverse of $A$ and the matrix is denoted $A^{-1}$.
In matrix terms, we seek a matrix that left multiplies $A$ to give the identity matrix:
$$A^{-1}A = I$$
The identity matrix, $I_{ij} = \delta_{ij}$ corresponds to the identity (do-nothing) function.
Only a few, special linear functions are invertible.
They must have at least as many outputs as inputs
They must not map any two inputs to the same output
Technically this means that they must have full rank, a concept which we will get to later.
Non-square matrices ($m$-by-$n$ matrices for which $m \neq n$) technically do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse (but not both). If A is $m$-by-$n$ and the rank of $A$ is equal to $n$, then $A$ has a left inverse: an $n$-by-$m$ matrix $B$ such that $BA = I$. If $A$ has rank $m$, then it has a right inverse: an $n$-by-$m$ matrix $B$ such that $AB = I$. Goodfellow et al. Chapter 2 provides a lot more detail on inverses.
End of explanation
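# Hedged illustration (not in the original notes): a tall matrix with full column rank
# has a left inverse, and the Moore-Penrose pseudoinverse provides one.
A_tall = np.array([[1., 0.], [0., 1.], [1., 1.]])   # 3x2 with rank 2
A_left = np.linalg.pinv(A_tall)
print(np.allclose(A_left.dot(A_tall), np.eye(2)))   # True: B A = I_2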
A = np.array([[2, 3, 0], [3, 2, 7], [2, 1, 6]])
print A
print np.linalg.det(A)
print np.linalg.det(np.linalg.inv(A)) # note 1/det(A)
B = np.array([[2, 3, 2], [1, 2, 4], [3, 5, 6]])
print np.linalg.matrix_rank(B)
print B
print np.linalg.det(B) # not quite zero because of numerical instability
Explanation: The matrix determinant is a scalar quantity, normally denoted $|A|$ or $\text{det}(A)$ whose absolute value measures how much the matrix "stretches" or "squishes" volume as it transforms its inputs to outputs and whose sign indicates whether the transformation is orientation preserving. Matrices with large determinants do (on average) a lot of stretching and those with small determinants do a lot of squishing.
Matrices with zero determinant have rank less than the number of rows and and actually collapse some of their input space into a line or hyperplane (pancake) in the output space, and this can be thought of doing "infinite squishing". Conventionally, the determinant is only defined for square matrices, but there is a natural extension to rectangular ones using the singular value decomposition.
End of explanation
A = np.array([[1,2],[3,4]])
print A
b = np.array([[5],[6]])
print b
x = np.linalg.inv(A).dot(b) #Cramer's rule, slow
print x
print A.dot(np.linalg.inv(A).dot(b))-b #check
x1 = np.linalg.solve(A,b) #fast; computes the "exact" solution
print x1
print A.dot(x1)-b #check
A = np.array([[1, 2, 3],[4, 5, 6]])
print A
b = np.array([[5],[6]])
print b
# computes the least-squares solution to a linear matrix equation
# equation may be under-, well, or over- determined
x, residuals, rank, s = np.linalg.lstsq(A, b)
print x
print A.dot(x)
Explanation: Fundamental Matrix Equations
The two most important matrix equations are the system of linear equations:
$$A \mathbf{x} = \mathbf{b}$$
and the eigenvector equation:
$$A \mathbf{x} = \lambda \mathbf{x}$$
which between them cover a large number of optimization and constraint satisfaction problems. As we've written them above, $\mathbf{x}$ is a vector but these equations also have natural extensions to the case where there are many vectors simultaneously satisfying the equation $AX = B$ or $AX = \lambda X$.
Systems of Linear Equations
A central problem in linear algebra is the solution of a system of linear equations like this:
$$
\begin{align}
3x + 4y + 2z &= 12\\
x + y + z &= 5
\end{align}
$$
Typically, we express this system as a single matrix equation in the form: $A\mathbf{x} = \mathbf{b}$, where $A$ is an $m$-by-$n$ matrix, $\mathbf{x}$ is an $n$-dimensional column vector, and $\mathbf{b}$ is an $m$-dimensional column vector. The number of unknowns is $n$ and the number of equations or constraints is $m$. Here is another simple example:
$$
\left[
\begin{array}{rr}
2 & -1\\
1 & 1
\end{array}
\right]
\left[
\begin{array}{c}
x_1 \\
x_2
\end{array}
\right]
=
\left[
\begin{array}{c}
1\\
5
\end{array}
\right]
$$
How do we go about "solving" this system of equations?
Finding $\mathbf{b}$ given $A$ and $\mathbf{x}$ is easy - just multiply
For a single $\mathbf{x}$ and $\mathbf{b}$ there are usually a great many matrices $A$ which satisfy the equation
The only interesting problem left is to find $\mathbf{x}$ given $A$ and $\mathbf{b}$!
This kind of equation is really a problem statement. It says "we applied the function $A$ and generated the output $\mathbf{b}$; what was the input $\mathbf{x}$?"
The matrix $A$ is dictated to us by our problem, and represents our model of how the system we are studying converts inputs to outputs. The vector $\mathbf{b}$ is the output that we observe (or desire) - we know it. The vector $\mathbf{x}$ is the set of inputs that we are trying to find.
There are two ways of thinking about this system of equations. One is rowwise as a set of $m$ equations, or constraints that correspond geometrically to $m$ intersecting constraint surfaces:
$$
\left[
\begin{array}{rl}
2x_1 - x_2 &= 1\\
x_1 + x_2 &= 5
\end{array}
\right]
$$
The goal is to find the point(s), for example $(x_1, x_2)$ above, which are at the intersection of all the constraint surfaces. In the example above, these surfaces are two lines in a plane. If the lines intersect then there is a solution. If they are parallel, there is not. If they are coincident then there are infinite solutions. In higher dimensions there are more geometric solutions, but in general there can be no solutions, one solution, or infinite solutions.
The other way of thinking about this system is columnwise in which we think of the entire system as a single vector relation:
$$
x_1
\left[
\begin{array}{r}
2\\
1
\end{array}
\right]
+
x_2
\left[
\begin{array}{r}
-1\\
1
\end{array}
\right]
=
\left[
\begin{array}{r}
1\\
5
\end{array}
\right]
$$
The goal here is to discover which linear combination(s) $(x_1, x_2)$, if any, of the $n$ column vectors on the left will give the one on the right.
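Both views lead to the same answer. As a quick sketch (not in the original code), solving the example system above:
```python
import numpy as np

A = np.array([[2.0, -1.0],
              [1.0,  1.0]])
b = np.array([1.0, 5.0])
x = np.linalg.solve(A, b)
print(x)   # [2. 3.]: the intersection point of the two lines, and the column combination weights
```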
Either way, the matrix equation $A\mathbf{x} = \mathbf{b}$ is an almost ubiquitous problem whose solution comes up again and again in theoretical derivations and in practical data analysis problems. One of the most important applications is the minimization of quadratic energy functions: if $A$ is symmetric positive definite then the quadratic form $\mathbf{x}^TA\mathbf{x} - 2 \mathbf{x}^T\mathbf{b} + c$ is minimized at the point where $A\mathbf{x}=\mathbf{b}$. Such quadratic forms arise often in the study of linear models with Gaussian noise since the log likelihood of data under such models is always a matrix quadratic.
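As a numerical sanity check of that last claim (a sketch, not from the original): for a symmetric positive definite $A$, the minimizer of the quadratic form agrees with the solution of $A\mathbf{x}=\mathbf{b}$.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
M = rng.randn(3, 3)
A = M.dot(M.T) + 3 * np.eye(3)                  # symmetric positive definite
b = rng.randn(3)

f = lambda x: x.dot(A).dot(x) - 2 * x.dot(b)    # quadratic energy (constant c dropped)
x_min = minimize(f, np.zeros(3)).x              # numerical minimizer
print(x_min)
print(np.linalg.solve(A, b))                    # should agree up to optimizer tolerance
```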
Least squares: solving for $\mathbf{x}$
Let's first consider the case of a single $\mathbf{x}$. As noted above, geometrically we can think of the rows of the system as encoding constraint surfaces in which the solution vector $\mathbf{x}$ must lie and what we are looking for is the point(s) at which these surfaces intersect. Of course, there may be no solution satisfying the equation; then we typically need some way to pick the "best" approximate solution. The constraints may also intersect along an entire line or surface in which case there are an infinity of solutions; once again we would like to think about which one might be best.
Let's consider finding exact solutions first. The most naive thing we could do is just find the inverse of $A$ and solve the equations as follows:
$$
\begin{align}
A^{-1}A\mathbf{x} &= A^{-1}\mathbf{b}\\
I\mathbf{x} &= A^{-1}\mathbf{b}\\
\mathbf{x} &= A^{-1}\mathbf{b}
\end{align}
$$
which is known as Cramer's rule.
There are several problems with this approach. Most importantly, many interesting functions are not invertible. Beyond that, matrix inversion is a difficult and potentially numerically unstable operation. Don't invert a matrix unless you really have to!
In fact, there is a much better way to define what we want as a solution. We will say that we want a solution $\mathbf{x}^*$ which minimizes the error:
$$E = ||A\mathbf{x}^* - \mathbf{b}||^2 = \mathbf{x}^TA^TA\mathbf{x} - 2\mathbf{x}^TA^T\mathbf{b} + \mathbf{b}^T\mathbf{b}$$
This problem is known as linear least squares, for obvious reasons. If there is an exact solution (one giving zero error) it will certainly minimize the above cost, but if there is no solution, we can still find the best possible approximation. If we take the matrix derivative of this expression, we can find the best solution:
$$\mathbf{x}^* = (A^TA)^{-1}A^T\mathbf{b}$$
which takes advantage of the fact that even if $A$ is not invertible, $A^TA$ may be.
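A small sketch (not in the original code) comparing the normal-equations formula with np.linalg.lstsq on an overdetermined system:
```python
import numpy as np

rng = np.random.RandomState(1)
A = rng.randn(10, 3)     # 10 equations, 3 unknowns: generally no exact solution
b = rng.randn(10)

x_normal = np.linalg.inv(A.T.dot(A)).dot(A.T).dot(b)   # (A^T A)^{-1} A^T b
x_lstsq = np.linalg.lstsq(A, b)[0]                      # preferred in practice
print(np.allclose(x_normal, x_lstsq))                   # True
```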
But what if the problem is degenerate? In other words, what if there is more than one exact solution or more than one inexact solution which all achieve the same minimum error. How can this occur?
Imagine an equation like this:
$$\left[
\begin{array}{rr}
1 & -1
\end{array}
\right] \mathbf{x} = 4$$
in which $A = \left[ \begin{array}{rr}
1 & -1
\end{array} \right]$. This equation constrains the difference between the two elements of $\mathbf{x}$ to be 4 but the sum of the two elements can be as large or small as we want.
We can take things one step further to get around this problem. The answer is to ask for the minimum norm vector $\mathbf{x}$ that still minimizes the above error. This breaks the degeneracies in both the exact and inexact cases. In terms of the cost function, this corresponds to adding an infinitesimal penalty on $\mathbf{x}^T\mathbf{x}$:
$$
E = \text{lim}_{\epsilon \rightarrow 0} \left[ \mathbf{x}^TA^TA\mathbf{x} - 2\mathbf{x}^TA^T\mathbf{b} + \mathbf{b}^T\mathbf{b} + \epsilon \mathbf{x}^T\mathbf{x} \right]
$$
and the optimal solution becomes
$$
\mathbf{x}^* = \lim_{\epsilon \rightarrow 0} \left[ \left(A^TA + \epsilon I\right)^{-1}A^T\mathbf{b}\right]
$$
Now, of course actually computing these solutions efficiently and in a numerically stable way is the topic of much study in numerical methods. However, in Python you don't have to worry about any of this, you can just type np.linalg.solve(A, b) (if $A$ is square) or np.linalg.lstsq(A, b) (if $A$ is not square) and let someone else worry about it.
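For the degenerate example above, here is a sketch of what the minimum-norm solution looks like (np.linalg.lstsq returns it directly for underdetermined systems):
```python
import numpy as np

A = np.array([[1.0, -1.0]])   # one equation, two unknowns: x1 - x2 = 4
b = np.array([4.0])
x = np.linalg.lstsq(A, b)[0]
print(x)                      # [ 2. -2.]: satisfies the constraint with the smallest norm
```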
End of explanation
N = 9 # how many data points
x1 = np.arange(0, N)
# note, here the data is stacked in columns
# this is not Python convention (convention is to stack data in rows)
# but it's the convention in this tutorial!
X = np.array([x1, np.ones(N)])
y = 4 * x1 + 3 + np.random.randn(N) # note x1 is only 1-d here, but it is m-d in general
A, residuals, rank, s = np.linalg.lstsq(X.T, y)
line = A[0] * x1 + A[1] # regression line
plt.plot(x1, line, 'r-', x1, y, 'bo')
plt.xlabel('x')
plt.ylabel('y')
Explanation: Linear regression: solving for $A$
Now consider what happens if we have many vectors $\mathbf{x}_i$ and $\mathbf{b}_i$, all of which we want to satisfy the equation $A \mathbf{x}_i = \mathbf{b}_i$. If we stack the vectors $\mathbf{x}_i$ beside each other as the columns of a large matrix $X$ and do the same for $\mathbf{b}_i$ to form $B$, we can write the problem as a large matrix equation:
$$AX = B$$
There are two things we could do here. If, as before, $A$ is known, we could find $X$ given $B$. To do this, we would just need to apply the techniques described above to solve the system $A \mathbf{x}_i = \mathbf{b}_i$ independently for each column $i$.
However, if we were given both $X$ and $B$ we could try to find a single $A$ which satisfies the equations. In essence, we are fitting a linear function given its inputs $X$ and corresponding outputs $B$. This problem is called linear regression. Usually we add a row of ones to $X$ to fit an affine function (i.e. one with an offset).
Again, there are only very few cases in which there exists an $A$ which exactly satisfies the equations (if there is, $X$ will be square and invertible). However, we can set things up the same way as before and ask for the least-squares $A$ which minimizes:
$$E = \sum_i ||A \mathbf{x}_i - \mathbf{b}_i||^2$$
Using matrix calculus, we can derive the optimal solution to this problem. The answer, one of the most famous formulas in all of mathematics, is known as the discrete Wiener filter:
$$ A^* = BX^T\left(XX^T\right)^{-1}$$
Once again, we might have invertibility problems in $XX^T$; this corresponds to having fewer equations than unknowns in our linear system (or duplicated equations), thus leaving some of the elements in $A$ unconstrained. We can get around this in the same way as with linear least squares by adding a small amount of penalty on the norm of the elements of $A$.
$$E = \sum_i ||A \mathbf{x}_i - \mathbf{b}_i||^2 + \epsilon ||A||^2$$
which means we are asking for the matrix of minimum norm which still minimizes the sum squared error on the outputs. Under this cost, the optimal solution is:
$$ A^* = BX^T\left(XX^T + \epsilon I\right)^{-1}$$
which is known as ridge regression. Often it is a good idea to use a small nonzero value of $\epsilon$ even if $XX^T$ is technically invertible, because this gives more stable solutions by penalizing large elements of $A$ that aren't doing much to reduce the error. In neural networks, this is known as weight decay, and in general, is known as regularization. You can also interpret it as having a Gaussian prior with mean zero and variance $1/2\epsilon$ on each element of $A$.
In Python, you should not actually invert the matrix, just type np.linalg.lstsq(X, b).
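A minimal sketch (not part of the original code) of the two formulas above, with data stacked in columns as in this notebook's convention; the true matrix and noise level are made up for illustration:
```python
import numpy as np

rng = np.random.RandomState(2)
A_true = np.array([[1.0, 2.0], [0.5, -1.0]])
X = rng.randn(2, 100)                           # inputs stacked as columns
B = A_true.dot(X) + 0.01 * rng.randn(2, 100)    # noisy outputs

# Discrete Wiener filter: A* = B X^T (X X^T)^{-1}
A_hat = B.dot(X.T).dot(np.linalg.inv(X.dot(X.T)))
# Ridge regression: A* = B X^T (X X^T + eps I)^{-1}
eps = 0.1
A_ridge = B.dot(X.T).dot(np.linalg.inv(X.dot(X.T) + eps * np.eye(2)))
print(A_hat)
print(A_ridge)
```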
End of explanation |
6,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow
Step1: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
Step2: Writing and running programs in TensorFlow has the following steps
Step3: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
Step4: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
Step6: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step15: Expected Output
Step16: Change the index below and run the cell to visualize some examples in the dataset.
Step17: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
Step19: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise
Step21: Expected Output
Step23: Expected Output
Step25: Expected Output
Step27: Expected Output
Step28: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
Step29: Expected Output | Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
Explanation: TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
Initialize variables
Start your own session
Train algorithms
Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
1 - Exploring the Tensorflow Library
To start, you will import the library:
End of explanation
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
Explanation: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
End of explanation
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
Explanation: Writing and running programs in TensorFlow has the following steps:
Create Tensors (variables) that are not yet executed/evaluated.
Write operations between those Tensors.
Initialize your Tensors.
Create a Session.
Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init=tf.global_variables_initializer(). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print its value.
Now let us look at an easy example. Run the cell below:
End of explanation
sess = tf.Session()
print(sess.run(c))
Explanation: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
End of explanation
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
Explanation: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
End of explanation
# GRADED FUNCTION: linear_function
def linear_function():
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W, X), b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
Explanation: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.
Exercise: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
End of explanation
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name = 'x')
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict = {x:z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
Explanation: Expected Output :
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise let's compute the sigmoid function of an input.
You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session.
Exercise : Implement the sigmoid function below. You should use the following:
tf.placeholder(tf.float32, name = "...")
tf.sigmoid(...)
sess.run(..., feed_dict = {x: z})
Note that there are two typical ways to create and use sessions in tensorflow:
Method 1:
```python
sess = tf.Session()
Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
**Method 2:**python
with tf.Session() as sess:
# run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
# This takes care of closing the session for you :)
```
End of explanation
# GRADED FUNCTION: cost
def cost(logits, labels):
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name = 'z')
y = tf.placeholder(tf.float32, name = 'x')
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits =z, labels =y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict={z:logits, y:labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
Explanation: Expected Output :
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
To summarize, you now know how to:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{[2](i)} + (1-y^{(i)})\log (1-a^{[2](i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
Exercise: Implement the cross entropy loss. The function you will use is:
tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)
Your code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
End of explanation
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, name="C")
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
Explanation: Expected Output :
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
tf.one_hot(labels, depth, axis)
Exercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this.
End of explanation
# GRADED FUNCTION: ones
def ones(shape):
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
Explanation: Expected Output:
<table>
<tr>
<td>
**one_hot**
</td>
<td>
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
</td>
</tr>
</table>
1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
Exercise: Implement the function below to take in a shape and return an array of ones with that shape.
tf.ones(shape)
End of explanation
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Expected Output:
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
Create the computation graph
Run the graph
Let's delve into the problem you'd like to solve!
2.0 - Problem statement: SIGNS Dataset
One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
Training set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
Test set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> Figure 1</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
End of explanation
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: Change the index below and run the cell to visualize some examples in the dataset.
End of explanation
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
Explanation: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
- You will use None because it lets us be flexible on the number of examples you will use for the placeholders.
In fact, the number of examples during test/train is different.
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, shape=(n_x, None))
Y = tf.placeholder(tf.float32, shape=(n_y, None))
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
Explanation: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise: Implement the function below to create the placeholders in tensorflow.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
Exercise: Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
Please use seed = 1 to make sure your results match ours.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3,Z2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
tf.add(...,...) to do an addition
tf.matmul(...,...) to do a matrix multiplication
tf.nn.relu(...) to apply the ReLU activation
Question: Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at z3. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need a3!
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
Explanation: Expected Output:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
2.4 Compute cost
As seen before, it is very easy to compute the cost using:
python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
Question: Implement the cost function below.
- It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, tf.reduce_mean basically averages the loss over the examples.
End of explanation
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All of the backpropagation and the parameter updates are taken care of in 1 line of code. It is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "optimizer" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
To make the optimization you would do:
python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.
Note: When coding, we often use _ as a "throwaway" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable).
2.6 - Building the model
Now, you will bring it all together!
Exercise: Implement the model. You will be calling the functions you had previously implemented.
End of explanation
parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
End of explanation
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
Explanation: Expected Output:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
Insights:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting (a minimal L2 sketch is shown after this list).
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
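For example, one way to add L2 regularization in this TensorFlow (v1) setup would be a sketch like the following, placed inside model() where the cost is built; the regularization strength 0.01 is an arbitrary illustrative value, not one from the assignment:
```python
# Sketch only: add an L2 penalty on the weight matrices to the existing cost
lambd = 0.01  # regularization strength (illustrative value)
l2_regularizer = lambd * (tf.nn.l2_loss(parameters['W1'])
                          + tf.nn.l2_loss(parameters['W2'])
                          + tf.nn.l2_loss(parameters['W3']))
cost = compute_cost(Z3, Y) + l2_regularizer
```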
2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
End of explanation |
6,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example - Plotting timetraces with bursts
This notebook is part of smFRET burst analysis software FRETBursts.
This notebook shows how to plot timetraces with burst information.
For a complete tutorial on burst analysis see
FRETBursts - us-ALEX smFRET burst analysis.
Step1: Get and process data
Step2: Plot Timetraces
Default plot
Step3: We can plot a longer figure that scrolls horizontally in the notebook
Step4: Using the previous plot we can sample different times of the measurement to have an overview of the timetrace | Python Code:
from fretbursts import *
sns = init_notebook(apionly=True)
print('seaborn version: ', sns.__version__)
# Tweak here matplotlib style
import matplotlib as mpl
mpl.rcParams['font.sans-serif'].insert(0, 'Arial')
mpl.rcParams['font.size'] = 12
%config InlineBackend.figure_format = 'retina'
from IPython.display import display
Explanation: Example - Plotting timetraces with bursts
This notebook is part of smFRET burst analysis software FRETBursts.
This notebook shows how to plot timetraces with burst information.
For a complete tutorial on burst analysis see
FRETBursts - us-ALEX smFRET burst analysis.
End of explanation
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
download_file(url, save_dir='./data')
full_fname = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(full_fname)
loader.alex_apply_period(d)
d.calc_bg(bg.exp_fit, time_s=1000, tail_min_us=(800, 4000, 1500, 1000, 3000))
d.burst_search(min_rate_cps=8e3)
ds = d.select_bursts(select_bursts.size, add_naa=True, th1=40)
dsf = ds.fuse_bursts(ms=10)
Explanation: Get and process data
End of explanation
dplot(dsf, timetrace, tmin=0, tmax=1, bursts=True);
Explanation: Plot Timetraces
Default plot:
End of explanation
fig, ax = plt.subplots(figsize=(32, 3))
dplot(dsf, timetrace, tmin=0, tmax=3, binwidth=0.5e-3, bursts=True,
ax=ax, plot_style=dict(lw=0.7))
plt.xlim(0, 3)
plt.grid(False)
Explanation: We can plot a longer figure that scrolls horizontally in the notebook:
End of explanation
dx = dsf
num_time_points = 6
window = 3
kws = dict(figsize=(32, 3), bursts=True, binwidth=0.5e-3,
plot_style=dict(lw=0.7))
# Timepoints equally distributed along the measurement
time_points = np.round(np.linspace(dx.time_min+1, dx.time_max-window-1, num=num_time_points))
for i in time_points:
ax = dplot(dx, timetrace, tmin=i, tmax=i+window, **kws);
plt.xlim(i, i+window)
display(plt.gcf())
plt.close(plt.gcf())
Explanation: Using the previous plot we can sample different times of the measurement to have an overview of the timetrace:
End of explanation |
6,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib example (https
Step1: Pandas examples (https
Step2: Seaborn examples (https
Step3: Cartopy examples (https
Step4: Xarray examples (http | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.show()
# to save
# plt.savefig('test_nb.png')
Explanation: Matplotlib example (https://matplotlib.org/gallery/index.html)
End of explanation
import pandas as pd
import numpy as np
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list('ABCD'))
df.cumsum().plot()
# new figure
plt.figure()
df.diff().hist(color='k', alpha=0.5, bins=50)
Explanation: Pandas examples (https://pandas.pydata.org/pandas-docs/stable/visualization.html)
End of explanation
# Joint distributions
import seaborn as sns
sns.set(style="ticks")
rs = np.random.RandomState(11)
x = rs.gamma(2, size=1000)
y = -.5 * x + rs.normal(size=1000)
sns.jointplot(x, y, kind="hex", color="#4CB391")
# Multiple linear regression
sns.set()
# Load the iris dataset
iris = sns.load_dataset("iris")
# Plot sepal with as a function of sepal_length across days
g = sns.lmplot(x="sepal_length", y="sepal_width", hue="species",
truncate=True, height=5, data=iris)
# Use more informative axis labels than are provided by default
g.set_axis_labels("Sepal length (mm)", "Sepal width (mm)")
Explanation: Seaborn examples (https://seaborn.pydata.org/examples/index.html)
End of explanation
import cartopy.crs as ccrs
from cartopy.examples.arrows import sample_data
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-90, 75, 10, 85], crs=ccrs.PlateCarree())
ax.coastlines()
x, y, u, v, vector_crs = sample_data(shape=(80, 100))
magnitude = (u ** 2 + v ** 2) ** 0.5
ax.streamplot(x, y, u, v, transform=vector_crs,
linewidth=2, density=2, color=magnitude)
plt.show()
import matplotlib.patches as mpatches
import shapely.geometry as sgeom
import cartopy.io.shapereader as shpreader
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection=ccrs.LambertConformal())
ax.set_extent([-125, -66.5, 20, 50], ccrs.Geodetic())
shapename = 'admin_1_states_provinces_lakes_shp'
states_shp = shpreader.natural_earth(resolution='110m',
category='cultural', name=shapename)
# Hurricane Katrina lons and lats
lons = [-75.1, -75.7, -76.2, -76.5, -76.9, -77.7, -78.4, -79.0,
-79.6, -80.1, -80.3, -81.3, -82.0, -82.6, -83.3, -84.0,
-84.7, -85.3, -85.9, -86.7, -87.7, -88.6, -89.2, -89.6,
-89.6, -89.6, -89.6, -89.6, -89.1, -88.6, -88.0, -87.0,
-85.3, -82.9]
lats = [23.1, 23.4, 23.8, 24.5, 25.4, 26.0, 26.1, 26.2, 26.2, 26.0,
25.9, 25.4, 25.1, 24.9, 24.6, 24.4, 24.4, 24.5, 24.8, 25.2,
25.7, 26.3, 27.2, 28.2, 29.3, 29.5, 30.2, 31.1, 32.6, 34.1,
35.6, 37.0, 38.6, 40.1]
# to get the effect of having just the states without a map "background" turn off the outline and background patches
ax.background_patch.set_visible(False)
ax.outline_patch.set_visible(False)
ax.set_title('US States which intersect the track of '
'Hurricane Katrina (2005)')
# turn the lons and lats into a shapely LineString
track = sgeom.LineString(zip(lons, lats))
# buffer the linestring by two degrees (note: this is a non-physical
# distance)
track_buffer = track.buffer(2)
for state in shpreader.Reader(states_shp).geometries():
# pick a default color for the land with a black outline,
# this will change if the storm intersects with our track
facecolor = [0.9375, 0.9375, 0.859375]
edgecolor = 'black'
if state.intersects(track):
facecolor = 'red'
elif state.intersects(track_buffer):
facecolor = '#FF7E00'
ax.add_geometries([state], ccrs.PlateCarree(),
facecolor=facecolor, edgecolor=edgecolor)
ax.add_geometries([track_buffer], ccrs.PlateCarree(),
facecolor='#C8A2C8', alpha=0.5)
ax.add_geometries([track], ccrs.PlateCarree(),
facecolor='none', edgecolor='k')
# make two proxy artists to add to a legend
direct_hit = mpatches.Rectangle((0, 0), 1, 1, facecolor="red")
within_2_deg = mpatches.Rectangle((0, 0), 1, 1, facecolor="#FF7E00")
labels = ['State directly intersects\nwith track',
'State is within \n2 degrees of track']
ax.legend([direct_hit, within_2_deg], labels,
loc='lower left', bbox_to_anchor=(0.025, -0.1), fancybox=True)
plt.show()
Explanation: Cartopy examples (https://scitools.org.uk/cartopy/docs/latest/gallery/index.html)
End of explanation
import xarray as xr
airtemps = xr.tutorial.load_dataset('air_temperature')
airtemps
# Convert to celsius
air = airtemps.air - 273.15
# copy attributes to get nice figure labels and change Kelvin to Celsius
air.attrs = airtemps.air.attrs
air.attrs['units'] = 'deg C'
air.sel(lat=50, lon=225).plot()
fig, axes = plt.subplots(ncols=2)
air.sel(lat=50, lon=225).plot(ax=axes[0])
air.sel(lat=50, lon=225).plot.hist(ax=axes[1])
plt.tight_layout()
plt.show()
air.sel(time='2013-09-03T00:00:00').plot()
# Faceting
# Plot every 250th point
air.isel(time=slice(0, 365 * 4, 250)).plot(x='lon', y='lat', col='time', col_wrap=3)
# Overlay data on cartopy map
ax = plt.axes(projection=ccrs.Orthographic(-80, 35))
air.isel(time=0).plot.contourf(ax=ax, transform=ccrs.PlateCarree());
ax.set_global(); ax.coastlines();
Explanation: Xarray examples (http://xarray.pydata.org/en/stable/plotting.html)
End of explanation |
6,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPyTorch regression with derivative information
Introduction
In this notebook, we show how to train a GP regression model in GPyTorch of an unknown function given function value and derivative observations. We consider modeling the function
Step1: Setting up the training data
We use 50 uniformly distributed points in the interval $[0, 5 \pi]$
Step2: Setting up the model
A GP prior on the function values implies a multi-output GP prior on the function values and the partial derivatives, see 9.4 in http
Step3: The model training is similar to training a standard GP regression model
Step4: Model predictions are also similar to GP regression with only function values, but we need more CG iterations to get accurate estimates of the predictive variance
import torch
import gpytorch
import math
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: GPyTorch regression with derivative information
Introduction
In this notebook, we show how to train a GP regression model in GPyTorch of an unknown function given function value and derivative observations. We consider modeling the function:
\begin{align}
y &= \sin(2x) + \cos(x) + \epsilon \\
\frac{dy}{dx} &= 2\cos(2x) - \sin(x) + \epsilon \\
\epsilon &\sim \mathcal{N}(0, 0.5)
\end{align}
using 50 value and derivative observations.
End of explanation
lb, ub = 0.0, 5*math.pi
n = 50
train_x = torch.linspace(lb, ub, n).unsqueeze(-1)
train_y = torch.stack([
torch.sin(2*train_x) + torch.cos(train_x),
-torch.sin(train_x) + 2*torch.cos(2*train_x)
], -1).squeeze(1)
train_y += 0.05 * torch.randn(n, 2)
Explanation: Setting up the training data
We use 50 uniformly distributed points in the interval $[0, 5 \pi]$
End of explanation
class GPModelWithDerivatives(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPModelWithDerivatives, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMeanGrad()
self.base_kernel = gpytorch.kernels.RBFKernelGrad()
self.covar_module = gpytorch.kernels.ScaleKernel(self.base_kernel)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2) # Value + Derivative
model = GPModelWithDerivatives(train_x, train_y, likelihood)
Explanation: Setting up the model
A GP prior on the function values implies a multi-output GP prior on the function values and the partial derivatives, see 9.4 in http://www.gaussianprocess.org/gpml/chapters/RW9.pdf for more details. This allows using a MultitaskMultivariateNormal and MultitaskGaussianLikelihood to train a GP model from both function values and gradients. The resulting RBF kernel that models the covariance between the values and partial derivatives has been implemented in RBFKernelGrad and the extension of a constant mean is implemented in ConstantMeanGrad.
The RBFKernelGrad is generally worse conditioned than the RBFKernel, so we place a lower bound on the noise parameter to keep the smallest eigenvalues of the kernel matrix away from zero.
End of explanation
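Note that the cell above does not actually impose the noise lower bound mentioned in the text. A minimal sketch of how one might add it, assuming MultitaskGaussianLikelihood accepts a noise_constraint keyword the way GaussianLikelihood does; the 1e-4 bound is an illustrative value, not one taken from the original:
# Hypothetical variant: keep the learned noise away from zero to improve conditioning
likelihood_constrained = gpytorch.likelihoods.MultitaskGaussianLikelihood(
    num_tasks=2,
    noise_constraint=gpytorch.constraints.GreaterThan(1e-4),  # assumed bound
)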
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
Explanation: The model training is similar to training a standard GP regression model
End of explanation
# Set into eval mode
model.train()
model.eval()
likelihood.eval()
# Initialize plots
f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(12, 6))
# Make predictions
with torch.no_grad(), gpytorch.settings.max_cg_iterations(50):
test_x = torch.linspace(lb, ub, 500)
predictions = likelihood(model(test_x))
mean = predictions.mean
lower, upper = predictions.confidence_region()
# Plot training data as black stars
y1_ax.plot(train_x.detach().numpy(), train_y[:, 0].detach().numpy(), 'k*')
# Predictive mean as blue line
y1_ax.plot(test_x.numpy(), mean[:, 0].numpy(), 'b')
# Shade in confidence
y1_ax.fill_between(test_x.numpy(), lower[:, 0].numpy(), upper[:, 0].numpy(), alpha=0.5)
y1_ax.legend(['Observed Values', 'Mean', 'Confidence'])
y1_ax.set_title('Function values')
# Plot training data as black stars
y2_ax.plot(train_x.detach().numpy(), train_y[:, 1].detach().numpy(), 'k*')
# Predictive mean as blue line
y2_ax.plot(test_x.numpy(), mean[:, 1].numpy(), 'b')
# Shade in confidence
y2_ax.fill_between(test_x.numpy(), lower[:, 1].numpy(), upper[:, 1].numpy(), alpha=0.5)
y2_ax.legend(['Observed Derivatives', 'Mean', 'Confidence'])
y2_ax.set_title('Derivatives')
None
Explanation: Model predictions are also similar to GP regression with only function values, but we need more CG iterations to get accurate estimates of the predictive variance
End of explanation |
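If the predictive variances still look rough, an alternative to raising the CG cap (not used in the original) is GPyTorch's LOVE approximation, sketched here reusing the model, likelihood and test_x defined above:
# Sketch: cached LOVE predictive variances instead of a higher max_cg_iterations value
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    preds_love = likelihood(model(test_x))
    lower_love, upper_love = preds_love.confidence_region()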
6,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: PCMDI
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
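For illustration only, a hypothetical way this multi-valued ENUM might be filled in, with choices taken from the valid list above; the calls are left commented so the sandbox document is not populated with invented values:
# Hypothetical example (one set_value call per selected flux):
# DOC.set_value("water")
# DOC.set_value("energy")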
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
6,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In its early days, deep learning was often described as unsupervised feature learning.
1. Unsupervised learning: the model can learn something about the data without labels; it learns how the content of the data is organized and extracts features that occur frequently.
2. Layer-by-layer abstraction: features need to be abstracted progressively, just as people start from simple, basic concepts and move on to more complex ones.
For domains where good features are not obvious, hand-crafting them requires domain expertise. Take image recognition: an image is made of pixels, and raw pixel values are not discriminative on their own; what really matters in an image is not the magnitude of individual values but the spatial structure among pixels. Early work on sparse coding extracted many 16x16-pixel image patches and found that almost all of them could be composed from about 64 orthogonal edges, and that only a few edges were needed for any given patch—in other words, the representation is sparse. For audio, about 20 basic structures were found, and most sounds can be expressed as linear combinations of them. A small number of basic features can be assembled into higher-level abstract features.
An image is abstracted step by step from raw pixels: pixels form points and lines, points and lines form small parts, and small parts form high-level features such as wheels, windows, and car bodies. This is the feature learning that deep learning performs during training.
With plenty of labeled data you can train a deep neural network. Without labels, you can still extract features with an unsupervised autoencoder.
An autoencoder, as the name suggests, encodes itself using its own higher-level features. In other words, an image is no longer represented by its raw pixels but by higher-level features. An autoencoder is itself a neural network whose input and output are the same; the goal is to reconstruct the input from a sparser combination of higher-level features.
1. Input and output are identical.
2. Reconstruct the input from higher-level features rather than simply copying pixels.
To meet these two requirements, a few constraints are added to the network.
a. Limit the number of hidden units so that the hidden layer is smaller than the input/output layer (dimensionality reduction). Reducing dimensionality necessarily loses information, which forces the model to keep only the most important features. Adding an L1 penalty on the hidden-layer weights lets you tune the sparsity of the features through the penalty coefficient $\lambda$ (L1 regularization drives some weights to exactly zero);
b. Adding noise to the input data gives a Denoising AutoEncoder. The noise is added deliberately so that the model has to remove it and uncover the true patterns and structure, which improves generalization.
Step6: The parameter initialization used here is Xavier initialization, which adapts the scale of the distribution to the number of input and output units of a layer. The original paper points out that if the weights are initialized too small, the signal shrinks as it passes through each layer and becomes too weak to be useful, while if they are initialized too large, the signal grows at each layer and diverges. The Xavier initializer keeps the weights "just right": zero mean and variance $\frac{2}{n_{in}+n_{out}}$, using either a uniform or a Gaussian distribution, where $n_{in}$ and $n_{out}$ are the numbers of input and output units.
Step7: Load the data
Step8: Standardization
First transform the data to zero mean and unit variance.
Step9: Fetch a random data block
Step10: Global parameters
Total number of samples
Maximum number of training epochs
batch_size
Display the cost every given number of epochs
Step11: Effect of the encoder
Plotting the images before and after the transformation shows that the encoder highlights the more essential information in the image: the autoencoder is able to extract the more important features. | Python Code:
%matplotlib inline
import numpy as np
from sklearn import preprocessing
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials import mnist
from __future__ import division
Explanation: In its early days, deep learning was often described as unsupervised feature learning.
1. Unsupervised learning: the model can learn something about the data without labels; it learns how the content of the data is organized and extracts features that occur frequently.
2. Layer-by-layer abstraction: features need to be abstracted progressively, just as people start from simple, basic concepts and move on to more complex ones.
For domains where good features are not obvious, hand-crafting them requires domain expertise. Take image recognition: an image is made of pixels, and raw pixel values are not discriminative on their own; what really matters in an image is not the magnitude of individual values but the spatial structure among pixels. Early work on sparse coding extracted many 16x16-pixel image patches and found that almost all of them could be composed from about 64 orthogonal edges, and that only a few edges were needed for any given patch—in other words, the representation is sparse. For audio, about 20 basic structures were found, and most sounds can be expressed as linear combinations of them. A small number of basic features can be assembled into higher-level abstract features.
An image is abstracted step by step from raw pixels: pixels form points and lines, points and lines form small parts, and small parts form high-level features such as wheels, windows, and car bodies. This is the feature learning that deep learning performs during training.
With plenty of labeled data you can train a deep neural network. Without labels, you can still extract features with an unsupervised autoencoder.
An autoencoder, as the name suggests, encodes itself using its own higher-level features. In other words, an image is no longer represented by its raw pixels but by higher-level features. An autoencoder is itself a neural network whose input and output are the same; the goal is to reconstruct the input from a sparser combination of higher-level features.
1. Input and output are identical.
2. Reconstruct the input from higher-level features rather than simply copying pixels.
To meet these two requirements, a few constraints are added to the network.
a. Limit the number of hidden units so that the hidden layer is smaller than the input/output layer (dimensionality reduction). Reducing dimensionality necessarily loses information, which forces the model to keep only the most important features. Adding an L1 penalty on the hidden-layer weights lets you tune the sparsity of the features through the penalty coefficient $\lambda$ (L1 regularization drives some weights to exactly zero);
b. Adding noise to the input data gives a Denoising AutoEncoder. The noise is added deliberately so that the model has to remove it and uncover the true patterns and structure, which improves generalization.
End of explanation
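As a small illustration of point b above, here is a minimal NumPy sketch (not part of the original notebook; the 0.1 noise scale and the batch shape are assumptions) of how a denoising autoencoder's input corruption works:
import numpy as np
# Corrupt the input with Gaussian noise; the reconstruction target stays clean.
x_clean = np.random.rand(5, 784).astype(np.float32)   # hypothetical batch of flattened images
x_noisy = x_clean + 0.1 * np.random.normal(size=x_clean.shape)
# The network only sees x_noisy but is trained to reproduce x_clean.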
def xavier_init(fan_in, fan_out, constant=1):
low = -constant * np.sqrt(6.0 / (fan_in + fan_out))
high = constant * np.sqrt(6.0 / (fan_in + fan_out))
return tf.random_uniform((fan_in, fan_out), minval=low, maxval=high, dtype=tf.float32)
class AdditiveGaussianNoiseAutoEncoder(object):
def __init__(self, n_input, n_hidden, transfer_function=tf.nn.softplus, optimizer=tf.train.AdamOptimizer(), noise_scale=0.1):
Parameters
------------
n_input: number of input variables
n_hidden: number of hidden units
transfer_function: activation of the hidden layer, softplus by default
optimizer: optimizer, Adam by default
noise_scale: scale of the Gaussian noise, 0.1 by default
self.n_input = n_input
self.n_hidden = n_hidden
self.transfer = transfer_function
self.noise_scale = noise_scale
self.weights = self._initialize_weights()
# noise scale
self.scale = tf.placeholder(tf.float32)
# input placeholder [batch_size, input_dimension]
self.x = tf.placeholder(tf.float32, [None, self.n_input])
# corrupt the input with Gaussian noise
self.noise_x = self.x + self.scale * tf.random_normal((self.n_input,))
# weights connecting the input layer and the hidden layer, passed through the transfer function: transfer(w1*input + b1)
self.hidden = self.transfer(tf.add(tf.matmul(self.noise_x, self.weights['w1']), self.weights['b1']))
# weights connecting the hidden layer and the output layer: w2 * hidden + b2
self.reconstrunction = tf.add(tf.matmul(self.hidden, self.weights['w2']), self.weights['b2'])
# squared-error cost: 0.5*sum((y-x)^2)
self.cost = 0.5 * tf.reduce_sum(tf.pow(tf.subtract(self.reconstrunction, self.x), 2.0))
# minimize the cost with the chosen optimizer
self.optimizer = optimizer.minimize(self.cost)
# initialize the parameters
self.sess = tf.Session()
self.sess.run(tf.global_variables_initializer())
def _initialize_weights(self):
all_weights = dict()
# first layer: weights connecting the input layer and the hidden layer
all_weights['w1'] = tf.Variable(xavier_init(self.n_input, self.n_hidden))
all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
# second layer: weights connecting the hidden layer and the output layer (the output layer has the same size as the input layer)
all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden, self.n_input], dtype=tf.float32))
all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype=tf.float32))
return all_weights
def partial_fit(self, X):
Run one optimization step and update the weights.
Returns the cost for this batch.
Parameters
---------------
X: training data, usually one batch-sized block.
cost, opt = self.sess.run([self.cost, self.optimizer],
feed_dict={self.x: X, self.scale: self.noise_scale})
return cost
def calc_total_cost(self, X):
Compute the total cost.
Parameters
------------
X: test data, usually the whole test set.
return self.sess.run(self.cost, feed_dict={self.x: X, self.scale: self.noise_scale})
def transform(self, X):
Return the output of the hidden layer; the hidden layer of the autoencoder learns the higher-level features of the data.
Mainly used to inspect what the higher-level features of an image look like.
Parameters
---------------
X: image data.
return self.sess.run(self.hidden, feed_dict={self.x: X, self.scale: self.noise_scale})
def generate(self, hidden=None):
Generation API: given a hidden representation, return the reconstruction. Combined with transform above, this forms the complete autoencoding step.
if hidden is None:
hidden = np.random.normal(size=self.weights['b1'])
return self.sess.run(self.reconstrunction, feed_dict={self.hidden: hidden})
def getWeights(self):
return self.sess.run(self.weights['w1'])
def getBiases(self):
return self.sess.run(self.weights['b1'])
Explanation: The parameter initialization used here is Xavier initialization, which adapts the scale of the distribution to the number of input and output units of a layer. The original paper points out that if the weights are initialized too small, the signal shrinks as it passes through the layers and becomes too weak to be useful, while if they are initialized too large, it grows at each layer and diverges. The Xavier initializer keeps the weights "just right": zero mean and variance $\frac{2}{n_{in}+n_{out}}$, using either a uniform or a Gaussian distribution, where $n_{in}$ and $n_{out}$ are the numbers of input and output units.
End of explanation
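A quick numeric check of the variance claim (illustrative only; the layer sizes 784 and 200 are taken from this notebook's autoencoder): for a uniform distribution on [-limit, limit] with limit = sqrt(6/(n_in+n_out)), the variance is limit^2/3 = 2/(n_in+n_out).
import numpy as np
n_in, n_out = 784, 200
limit = np.sqrt(6.0 / (n_in + n_out))            # same bound as xavier_init above
samples = np.random.uniform(-limit, limit, size=100000)
print(samples.var())                             # empirical variance
print(2.0 / (n_in + n_out))                      # theoretical variance 2/(n_in+n_out)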
mnist_data = mnist.input_data.read_data_sets('MNIST', one_hot=True)
Explanation: Load the data
End of explanation
def standard_scale(X_train, X_test):
standarder = preprocessing.StandardScaler()
X_train = standarder.fit_transform(X_train)
X_test = standarder.transform(X_test)
return X_train, X_test
X_train, X_test = standard_scale(mnist_data.train.images, mnist_data.test.images)
Explanation: Standardization
First transform the data to zero mean and unit variance.
End of explanation
def get_random_block_from_data(data, batch_size):
start_index = np.random.randint(0, len(data) - batch_size) # avoid index out of range
return data[start_index: (start_index+batch_size)]
Explanation: Fetch a random data block
End of explanation
n_samples = int(mnist_data.train.num_examples)
training_epochs = 20
batch_size = 128
display_step = 1
autoencoder = AdditiveGaussianNoiseAutoEncoder(n_input=784, n_hidden=200, transfer_function=tf.nn.softplus, optimizer=tf.train.AdamOptimizer(learning_rate=0.001), noise_scale=0.01)
for epoch in xrange(training_epochs):
avg_cost = 0
total_batch = int(n_samples / batch_size)
# each epoch iterates over the training samples to train the autoencoder network
for i in range(total_batch):
batch_xs = get_random_block_from_data(X_train, batch_size)
cost = autoencoder.partial_fit(batch_xs)
avg_cost += cost * (batch_size / n_samples)
if epoch % display_step == 0:
print("Epoch: %4d" % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
print("Total cost: " + str(autoencoder.calc_total_cost(X_test)))
Explanation: Global parameters
Total number of samples
Maximum number of training epochs
batch_size
Display the cost every given number of epochs
End of explanation
def plot_transform_sample(sample_image):
plt.title("Origin Image")
plt.imshow(sample_image.reshape(28, 28), cmap='binary')
plt.show()
sample_transform_hidden = autoencoder.transform(sample_image.reshape(-1, 784))
sample_transform_image = autoencoder.generate(hidden=sample_transform_hidden)
plt.title("Transform Image")
plt.imshow(sample_transform_image.reshape(28, 28), cmap='binary')
plt.show()
plot_transform_sample(X_train[20200])
Explanation: Effect of the encoder
Plotting the images before and after the transformation shows that the encoder highlights the more essential information in the image: the autoencoder is able to extract the more important features.
End of explanation |
6,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 权重聚类综合指南
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 定义聚类模型
聚类整个模型(序贯模型和函数式模型)
提高模型准确率的提示:
您必须将具有可接受准确率的预训练模型传递给此 API。使用聚类从头开始训练模型会导致准确率不佳。
在某些情况下,聚类某些层会对模型准确率造成不利影响。查看“聚类某些层”来了解如何跳过聚类对准确率影响最大的层。
要聚类所有层,请将 tfmot.clustering.keras.cluster_weights 应用于模型。
Step3: 聚类某些层(序贯模型和函数式模型)
提高模型准确率的提示:
您必须将具有可接受准确率的预训练模型传递给此 API。使用聚类从头开始训练模型会导致准确率不佳。
与前面的层相反,使用更多冗余参数(例如 tf.keras.layers.Dense 和 tf.keras.layers.Conv2D)来聚类后面的层。
在微调期间,先冻结前面的层,然后再冻结聚类的层。将冻结层数视为超参数。根据经验,冻结大多数前面的层对于当前的聚类 API 较为理想。
避免聚类关键层(例如注意力机制)。
更多提示:tfmot.clustering.keras.cluster_weights API 文档提供了有关如何更改每层的聚类配置的详细信息。
Step4: 为聚类模型设置检查点和进行反序列化
您的用例:仅 HDF5 模型格式需要此代码(HDF5 权重或其他格式不需要)。
Step5: 提高聚类模型的准确率
对于您的特定用例,您可以考虑以下提示:
形心初始化在最终优化的模型准确率中起到关键作用。通常,线性初始化优于密度和随机初始化,因为它不会丢失较大的权重。但是,对于在具有双峰分布的权重上使用极少簇的情况,已经观察到密度初始化可以提供更出色的准确率。
微调聚类模型时,将学习率设置为低于训练中使用的学习率。
有关提高模型准确率的总体思路,请在“定义聚类模型”下查找您的用例对应的提示。
部署
导出大小经过压缩的模型
Common mistake: both strip_clustering and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import os
import tensorflow_model_optimization as tfmot
input_dim = 20
output_dim = 20
x_train = np.random.randn(1, input_dim).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=output_dim)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(input_dim, input_shape=[input_dim]),
tf.keras.layers.Flatten()
])
return model
def train_model(model):
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.summary()
model.fit(x_train, y_train)
return model
def save_model_weights(model):
_, pretrained_weights = tempfile.mkstemp('.h5')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_weights():
model= setup_model()
model = train_model(model)
pretrained_weights = save_model_weights(model)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
def save_model_file(model):
_, keras_file = tempfile.mkstemp('.h5')
model.save(keras_file, include_optimizer=False)
return keras_file
def get_gzipped_model_size(model):
# It returns the size of the gzipped model in bytes.
import os
import zipfile
keras_file = save_model_file(model)
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(keras_file)
return os.path.getsize(zipped_file)
setup_model()
pretrained_weights = setup_pretrained_weights()
Explanation: Comprehensive guide to weight clustering
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/model_optimization/guide/clustering/clustering_comprehensive_guide"> <img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> 在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/model_optimization/guide/clustering/clustering_comprehensive_guide.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/model_optimization/guide/clustering/clustering_comprehensive_guide.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/model_optimization/guide/clustering/clustering_comprehensive_guide.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png"> 下载笔记本</a></td>
</table>
Welcome to the comprehensive guide to weight clustering, part of the TensorFlow Model Optimization Toolkit.
This page documents the various use cases and shows how to use the API for each of them. Once you know which APIs you need, you can find the parameters and low-level details in the API docs:
To see the benefits of weight clustering and what is supported, check the overview.
For a single end-to-end example, see the weight clustering example.
This guide covers the following use cases:
Define a clustered model.
Checkpoint and deserialize a clustered model.
Improve the accuracy of the clustered model.
For deployment only, you must take extra steps to see the compression benefits.
Setup
End of explanation
import tensorflow_model_optimization as tfmot
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 3,
'cluster_centroids_init': CentroidInitialization.DENSITY_BASED
}
model = setup_model()
model.load_weights(pretrained_weights)
clustered_model = cluster_weights(model, **clustering_params)
clustered_model.summary()
Explanation: Define a clustered model
Cluster a whole model (Sequential and Functional)
Tips for better model accuracy:
You must pass a pre-trained model with acceptable accuracy to this API. Training a model from scratch with clustering results in poor accuracy.
In some cases, clustering certain layers has a detrimental effect on model accuracy. See "Cluster some layers" to learn how to skip clustering the layers that affect accuracy the most.
To cluster all layers, apply tfmot.clustering.keras.cluster_weights to the model.
End of explanation
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights)
# Helper function uses `cluster_weights` to make only
# the Dense layers train with clustering
def apply_clustering_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return cluster_weights(layer, **clustering_params)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_clustering_to_dense`
# to the layers of the model.
clustered_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_clustering_to_dense,
)
clustered_model.summary()
Explanation: Cluster some layers (Sequential and Functional models)
Tips for better model accuracy:
You must pass a pre-trained model with acceptable accuracy to this API. Training a model from scratch with clustering results in poor accuracy.
Cluster later layers with more redundant parameters (e.g. tf.keras.layers.Dense, tf.keras.layers.Conv2D) rather than the early layers.
During fine-tuning, freeze the early layers before the clustered layers. Treat the number of frozen layers as a hyperparameter. Empirically, freezing most of the early layers works best with the current clustering API.
Avoid clustering critical layers (e.g. attention mechanisms).
More tips: the tfmot.clustering.keras.cluster_weights API docs provide details on how to vary the clustering configuration per layer.
End of explanation
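The fine-tuning tip above—freeze the early layers and treat the number of frozen layers as a hyperparameter—can be written as a small sketch; this is illustrative only (the value of num_frozen_layers and the learning rate are assumptions, not recommendations from the original guide):
num_frozen_layers = 2                      # tune this as a hyperparameter
for layer in clustered_model.layers[:num_frozen_layers]:
    layer.trainable = False                # freeze the earliest layers before fine-tuning
clustered_model.compile(loss=tf.keras.losses.categorical_crossentropy,
                        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                        metrics=['accuracy'])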
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights)
clustered_model = cluster_weights(base_model, **clustering_params)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
clustered_model.save(keras_model_file, include_optimizer=True)
# `cluster_scope` is needed for deserializing HDF5 models.
with tfmot.clustering.keras.cluster_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
Explanation: Checkpoint and deserialize a clustered model
Your use case: this code is only needed for the HDF5 model format (not for HDF5 weights or other formats).
End of explanation
model = setup_model()
clustered_model = cluster_weights(model, **clustering_params)
clustered_model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
clustered_model.fit(
x_train,
y_train
)
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
print("final model")
final_model.summary()
print("\n")
print("Size of gzipped clustered model without stripping: %.2f bytes"
% (get_gzipped_model_size(clustered_model)))
print("Size of gzipped clustered model with stripping: %.2f bytes"
% (get_gzipped_model_size(final_model)))
Explanation: Improve the accuracy of the clustered model
For your specific use case, consider the following tips:
Centroid initialization plays a key role in the final accuracy of the optimized model. In general, linear initialization outperforms density-based and random initialization because it does not tend to miss large weights. However, density-based initialization has been observed to give better accuracy when very few clusters are used on weights with a bimodal distribution.
When fine-tuning the clustered model, use a learning rate lower than the one used in training.
For general ideas on improving model accuracy, look up the tips for your use case under "Define a clustered model".
Deployment
Export a model with size compression
Common mistake: both strip_clustering and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.
End of explanation |
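As a hedged sketch of the accuracy tips above (the chosen initializer, cluster count, and learning rate are assumptions for illustration, not values from the original guide), one could try linear centroid initialization together with a lower fine-tuning learning rate:
linear_params = {
  'number_of_clusters': 3,
  'cluster_centroids_init': tfmot.clustering.keras.CentroidInitialization.LINEAR
}
clustered_alt = tfmot.clustering.keras.cluster_weights(setup_pretrained_model(), **linear_params)
clustered_alt.compile(loss=tf.keras.losses.categorical_crossentropy,
                      optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),   # lower LR for fine-tuning
                      metrics=['accuracy'])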
6,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We want to predict the "SalePrice" column, so every other column is a feature the model should learn from
Step1: The model can only take numbers as input, so the string columns have to be converted to numeric values
Step2: There are missing values that the model cannot handle, so they have to be replaced
Step3: Understanding the performance score of our model
Step4: Building our cross-validation strategy
Step5: Here is our first score!
Step6: One value looks completely off; we predict 900,000 while it should be under 200,000 ...
Step7: Transforming the price to improve the score
Step8: Tuning a model's hyperparameters
Step9: Going further
Step10: Going even further
Step11: Always test a new feature to check whether it has a positive or a negative impact
Let's look at the "MasVnrArea" & "MasVnrType" columns
Step12: BsmtFinType1 & BsmtFinSF1 / BsmtFinType2 & BsmtFinSF2
Step13: Let's talk about Tree now
Step14: A problem with our string columns...
Step15: Convert every string value into a numeric value
Step16: Yeah, let's blend two algorithms!
Step17: Feature importance in a tree | Python Code:
features = [col for col in data.columns if col not in "SalePrice"]
features
train = data[features]
y = data.SalePrice
#y = data['SalePrice']
train.head()
y.head()
sns.distplot(y)
# Model for the regression
from sklearn.linear_model import Ridge
import sklearn
sklearn.__version__
# Initialize the model
model_ridge = Ridge()
# 1) Fit the model
model_ridge.fit(train, y)
# Error ...
Explanation: We want to predict the "SalePrice" column, so every other column is a feature the model should learn from
End of explanation
data['SaleCondition'].head()
pd.get_dummies(data['SaleCondition'], prefix="SaleCondition").head()
def prepare_data(data):
features = [col for col in data.columns if col not in "SalePrice"] # 80 col
train = data[features]
y = data.SalePrice
# Transform Object features to columns
train = pd.get_dummies(train)
return train, y
train, y = prepare_data(data.copy())
train.head()
# 2) Fit the model
model_ridge.fit(train, y)
# Error ...
data.BsmtFinType2.value_counts(dropna=False)
Explanation: The model can only take numbers as input, so the string columns have to be converted to numeric values
End of explanation
pd.isnull(data).sum()
def prepare_data(data):
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
return train, y
train, y = prepare_data(data.copy())
# 3) Fit the model
model_ridge.fit(train, y)
# yeah !!!
Explanation: There are missing values that the model cannot take into account, so they have to be replaced
End of explanation
from sklearn.metrics import mean_absolute_error
vrai = np.array([1000, 2000, 1500])
prediction = np.array([900, 2200, 1300]) # classic
#prediction = np.array([990, 2005, 1500]) # Best
#prediction = np.array([9000, 22000, 13000]) # Bad
mean_absolute_error(vrai, prediction)
1000 - 900
2000 - 2200
1500 - 1300
(100 + 200 + 200) / 3.0
Explanation: Understanding the performance score of our model:
End of explanation
from sklearn.model_selection import cross_val_score
def cross_validation(model, train, y, cv=5):
mae = -cross_val_score(model, train, y, scoring="neg_mean_absolute_error", cv = cv)
return mae
score = cross_validation(model_ridge, train, y)
print score
score.mean(), score.std()
data.SalePrice.describe()
Explanation: Building our cross-validation strategy
End of explanation
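A side note on the minus sign in the helper above (illustrative only, not from the original notebook): scikit-learn's cross_val_score maximizes its scoring function, so error metrics are exposed as "neg_..." scores and have to be negated to get positive MAE values.
from sklearn.model_selection import cross_val_score
neg_scores = cross_val_score(model_ridge, train, y, scoring="neg_mean_absolute_error", cv=5)
mae_scores = -neg_scores   # same values that cross_validation(model_ridge, train, y) returns
print mae_scores.mean()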
preds = pd.DataFrame({"preds":model_ridge.predict(train), "true":y})
preds["residuals"] = np.abs(preds["true"] - preds["preds"])
preds.plot(x = "preds", y = "residuals",kind = "scatter")
preds[preds.residuals >150000]
data.shape
def prepare_data_outlier(data):
features = [col for col in data.columns if col not in "SalePrice"]
# drop the rows (ids) that are extreme outliers
data = data.drop(data.index[[523,898, 1298]])
train = data[features]
y = data.SalePrice
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
return train, y
train, y = prepare_data_outlier(data.copy())
print train.shape, y.shape
score = cross_validation(model_ridge, train, y)
print score.mean()
train, y = prepare_data(data.copy())
print train.shape, y.shape
score = cross_validation(model_ridge, train, y)
print score.mean()
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation = train_test_split(train, y, random_state = 3)
print"X_train : " + str(X_train.shape)
print"X_validation : " + str(X_validation.shape)
print"y_train : " + str(y_train.shape)
print"y_validation : " + str(y_validation.shape)
model_ridge.fit(X_train, y_train)
mes_predictions = model_ridge.predict(X_validation)
# My predictions
mes_predictions[0:5]
# Les vrai valeurs
y_validation[0:5]
mean_absolute_error(y_validation, mes_predictions)
plt.scatter(mes_predictions, y_validation)
plt.plot([min(mes_predictions),max(mes_predictions)], [min(mes_predictions),max(mes_predictions)], c="red")
plt.xlabel('Mes predicitons')
plt.ylabel('Vrai valeurs')
Explanation: Voila notre 1er score !!
End of explanation
analyse = X_validation.copy()
analyse.head()
analyse['prix'] = y_validation
analyse.head()
analyse['prediction'] = mes_predictions
analyse.head()
analyse[analyse.prediction >= 800000]
sns.countplot(data.SaleCondition)
Explanation: Une valeur semble complétement perdu; on prédit 900.000 alors qu'elle devrait etre à moins de 200.000 ...
End of explanation
sns.distplot(data.SalePrice)
data.SalePrice.describe()
sns.distplot(np.log1p(data.SalePrice))
np.log1p(data.SalePrice).describe()
def prepare_data_log(data):
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
return train, y
train, y = prepare_data_log(data)
score = cross_validation(model_ridge, train, y)
print score.mean()
X_train, X_validation, y_train, y_validation = train_test_split(train, y, random_state = 3)
model_ridge.fit(X_train, y_train)
mes_predictions = model_ridge.predict(X_validation)
mes_predictions[0:5]
# Les vrai valeurs
y_validation[0:5]
mean_absolute_error(y_validation, mes_predictions)
mes_predictions_exp = np.expm1(mes_predictions)
y_validation_exp = np.expm1(y_validation)
# Redonner les valeurs un transformation normal (exp)
mean_absolute_error(y_validation_exp, mes_predictions_exp)
plt.scatter(mes_predictions_exp, y_validation_exp)
plt.plot([min(mes_predictions_exp),max(mes_predictions_exp)], [min(mes_predictions_exp),max(mes_predictions_exp)]
, c="red")
plt.xlabel('Mes predicitons')
plt.ylabel('Vrai valeurs')
def prepare_data_outlier_log(data):
features = [col for col in data.columns if col not in "SalePrice"]
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
return train, y
train, y = prepare_data_outlier_log(data.copy())
score = cross_validation(model_ridge, train, y)
print score.mean()
X_train, X_validation, y_train, y_validation = train_test_split(train, y, random_state = 3)
model_ridge.fit(X_train, y_train)
mes_predictions = model_ridge.predict(X_validation)
mes_predictions[0:5]
# Les vrai valeurs
y_validation[0:5]
mes_predictions_exp = np.expm1(mes_predictions)
y_validation_exp = np.expm1(y_validation)
# Redonner les valeurs un transformation normal (exp)
mean_absolute_error(y_validation_exp, mes_predictions_exp)
plt.scatter(mes_predictions_exp, y_validation_exp)
plt.plot([min(mes_predictions_exp),max(mes_predictions_exp)], [min(mes_predictions_exp),max(mes_predictions_exp)]
, c="red")
plt.xlabel('Mes predicitons')
plt.ylabel('Vrai valeurs')
Explanation: transformation de notre Prix pour améliorer le score :
End of explanation
model_ridge = Ridge()
model_ridge
#alphas = [0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 50, 75]
alphas = [10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15, 15.5, 16]
cv_ridge = [cross_validation(Ridge(alpha = alpha ,random_state=42), train, y).mean()
for alpha in alphas]
cv_ridge = pd.Series(cv_ridge, index = alphas)
cv_ridge.plot()
plt.xlabel("alpha")
plt.ylabel("mean absolute error")
cv_ridge.argmin()
cv_ridge
score = cross_validation(Ridge(alpha=13.5, random_state=42), train, y)
print score.mean()
X_train, X_validation, y_train, y_validation = train_test_split(train, y, random_state = 3)
model_ridge = Ridge(alpha=13.5, random_state=42)
model_ridge.fit(X_train, y_train)
mes_predictions_exp = np.expm1(model_ridge.predict(X_validation))
y_validation_exp = np.expm1(y_validation)
mean_absolute_error(y_validation_exp, mes_predictions_exp)
plt.scatter(mes_predictions_exp, y_validation_exp)
plt.plot([min(mes_predictions_exp),max(mes_predictions_exp)], [min(mes_predictions_exp),max(mes_predictions_exp)]
, c="red")
plt.xlabel('Mes predicitons')
plt.ylabel('Vrai valeurs')
Explanation: Paramettre d'un modèle
End of explanation
data.plot(kind='scatter', x="1stFlrSF", y='SalePrice')
data.plot(kind='scatter', x="2ndFlrSF", y='SalePrice')
data['1stFlr_2ndFlr_Sf'] = data['1stFlrSF'] + data['2ndFlrSF']
data.plot(kind='scatter', x="1stFlr_2ndFlr_Sf", y='SalePrice')
sns.distplot(np.log1p(data['1stFlr_2ndFlr_Sf']))
data[(data['1stFlr_2ndFlr_Sf'] > 4000) & (data.SalePrice <= 700000)]
def prepare_data_outlier_log_plus(data):
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
# Ajout de nouvelle variables
data['1stFlr_2ndFlr_Sf'] = np.log1p(data['1stFlrSF'] + data['2ndFlrSF'])
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = prepare_data_outlier_log_plus(data.copy())
score = cross_validation(Ridge(alpha=13.5, random_state=42), train, y)
print score.mean()
X_train, X_validation, y_train, y_validation = train_test_split(train, y, random_state = 3)
model_ridge = Ridge(alpha=13.5, random_state=42)
model_ridge.fit(X_train, y_train)
mes_predictions_exp = np.expm1(model_ridge.predict(X_validation))
y_validation_exp = np.expm1(y_validation)
mean_absolute_error(y_validation_exp, mes_predictions_exp)
plt.scatter(mes_predictions_exp, y_validation_exp)
plt.plot([min(mes_predictions_exp),max(mes_predictions_exp)], [min(mes_predictions_exp),max(mes_predictions_exp)]
, c="red")
plt.xlabel('Mes predicitons')
plt.ylabel('Vrai valeurs')
model_ridge.coef_[0:10]
coef = pd.Series(model_ridge.coef_, index = X_train.columns)
# On prend les 10 plus important features postive et négative
nb_important = 25
imp_coef = pd.concat([coef.sort_values().head(nb_important),
coef.sort_values().tail(nb_important)])
imp_coef.plot(kind = "barh", figsize=(10, 8))
plt.title("Coefficients in Model")
Explanation: Aller plus loin :
End of explanation
# Pour afficher des images (pas besoin de taper cet import)
from IPython.display import Image
data[['YearBuilt', 'GarageYrBlt']].head()
df = data.copy() # To work on df with no change in Daframe data
df['build_home_garage_same_year'] = 0
df.loc[data['YearBuilt'] == data['GarageYrBlt'], 'build_home_garage_same_year'] = 1
df.build_home_garage_same_year.value_counts()
def prepare_data_outlier_log_plus_2(data):
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
data['1stFlr_2ndFlr_Sf'] = np.log1p(data['1stFlrSF'] + data['2ndFlrSF'])
data['build_home_garage_same_year'] = "N"
data.loc[data['YearBuilt'] == data['GarageYrBlt'], 'build_home_garage_same_year'] = "Y"
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train.head()
data.shape
train, y = prepare_data_outlier_log_plus_2(data.copy())
score = cross_validation(Ridge(alpha=13.5, random_state=42), train, y)
print score.mean()
# Best was 0.0794542370234, so this feature does not help
Explanation: Encore plus loin :
End of explanation
df.MasVnrType.value_counts()
df.MasVnrType.head()
#df.MasVnrArea.value_counts()
df.shape
df[df.MasVnrType == "None"].MasVnrArea.value_counts()
def prepare_data_outlier_log_plus_3(data):
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
data['1stFlr_2ndFlr_Sf'] = np.log1p(data['1stFlrSF'] + data['2ndFlrSF'])
data.loc[data.MasVnrType == 'None', 'MasVnrArea'] = 0
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = prepare_data_outlier_log_plus_3(data.copy())
score = cross_validation(Ridge(alpha=13.5, random_state=42), train, y)
print score.mean()
Image(url="http://i.giphy.com/GPq3wxmLbwUGA.gif")
Explanation: Toujours tester un ajout de features pour savoir si celle-ci va avoir un impact positif ou négatif
Regardons la colonnes "MasVnrArea" & "MasVnrType" :
End of explanation
df.BsmtFinType2.value_counts(dropna=False)
df.BsmtFinSF2.describe()
df[pd.isnull(df.BsmtFinType2)].BsmtFinSF2.value_counts()
def prepare_data_outlier_log_plus_4(data):
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
data['1stFlr_2ndFlr_Sf'] = np.log1p(data['1stFlrSF'] + data['2ndFlrSF'])
data.loc[data.MasVnrType == 'None', 'MasVnrArea'] = 0
data.loc[pd.isnull(data.BsmtFinType2), 'BsmtFinSF2'] = 0
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = prepare_data_outlier_log_plus_4(data.copy())
score = cross_validation(Ridge(alpha=13.5, random_state=42), train, y)
print score.mean()
X_train_ridge, X_validation_ridge, y_train_ridge, y_validation_ridge = train_test_split(train, y, random_state = 3)
model_ridge = Ridge(alpha=13.5, random_state=42)
model_ridge.fit(X_train_ridge, y_train_ridge)
coef = pd.Series(np.abs(model_ridge.coef_), index = X_train.columns)
# On prend les 10 plus important features postive et négative
nb_important = 15
#imp_coef = pd.concat([coef.sort_values().head(nb_important),
# coef.sort_values().tail(nb_important)])
imp_coef = coef.sort_values().head(nb_important)
imp_coef.plot(kind = "barh", figsize=(10, 8))
plt.title("Coefficients in Model")
coef.sort_values().head(10)
features_to_delete = ["GarageCond_Ex",
"Condition2_RRAe",
"Exterior1st_Stone",
"MiscFeature_TenC",
"MiscVal",
"BsmtUnfSF",
"LotArea",
"MasVnrArea",
"GarageYrBlt",
"Id"]
def prepare_data_outlier_log_plus_4_bis(data):
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
data['1stFlr_2ndFlr_Sf'] = np.log1p(data['1stFlrSF'] + data['2ndFlrSF'])
data.loc[data.MasVnrType == 'None', 'MasVnrArea'] = 0
data.loc[pd.isnull(data.BsmtFinType2), 'BsmtFinSF2'] = 0
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
train = train.drop(features_to_delete, axis=1)
print train.shape
return train, y
train, y = prepare_data_outlier_log_plus_4_bis(data.copy())
score = cross_validation(Ridge(alpha=13.5, random_state=42), train, y)
print score.mean()
#pd.isnull(df).sum()
df.shape
column_detail = pd.DataFrame(pd.isnull(df).sum(), columns=['nbr_null'])
column_detail.sort_values('nbr_null', ascending=0, inplace=True)
column_detail.head(10)
def prepare_data_outlier_log_plus_5(data):
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
data['1stFlr_2ndFlr_Sf'] = np.log1p(data['1stFlrSF'] + data['2ndFlrSF'])
data.loc[data.MasVnrType == 'None', 'MasVnrArea'] = 0
data.loc[pd.isnull(data.BsmtFinType2), 'BsmtFinSF2'] = 0
#Drop features with too much Null value
data = data.drop('PoolQC', axis=1)
data = data.drop('MiscFeature', axis=1)
data = data.drop('Alley', axis=1)
data = data.drop('Fence', axis=1)
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# Transform Object features to columns
train = pd.get_dummies(train)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = prepare_data_outlier_log_plus_5(data.copy())
score = cross_validation(Ridge(alpha=13.5, random_state=42), train, y)
print score.mean()
Image(url="http://i.giphy.com/LZfZXcFNOOzw4.gif")
Explanation: BsmtFinType1 & BsmtFinSF1 / BsmtFinType2 & BsmtFinSF2
End of explanation
from sklearn.tree import DecisionTreeRegressor
np.random.seed(42)
dt = DecisionTreeRegressor(random_state=0)
def dt_prepare_data(data):
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = dt_prepare_data(data.copy())
train.head()
# 1) On fait apprendre le model
dt.fit(train, y)
#Error...
from sklearn.preprocessing import LabelEncoder
Explanation: Let's talk about Tree now :
End of explanation
categoricals = [x for x in data.columns if data[x].dtype == 'object']
categoricals
data.SaleCondition.head()
lbl = LabelEncoder() # Initialisation
lbl.fit(data['SaleCondition'].values)
test = lbl.transform(data['SaleCondition'].values)
test[0:5]
Explanation: Problème avec nos données en string...
End of explanation
def dt_prepare_data_plus(data):
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# String problem
categoricals = [x for x in train.columns if train[x].dtype == 'object']
for col in categoricals:
lbl = LabelEncoder()
lbl.fit(train[col].values)
train[col] = lbl.transform(train[col].values)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = dt_prepare_data_plus(data.copy())
train.head()
data.head()
score = cross_validation(dt, train, y)
print score.mean()
def dt_prepare_data_plus_log(data):
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# String problem
categoricals = [x for x in train.columns if train[x].dtype == 'object']
for col in categoricals:
lbl = LabelEncoder()
lbl.fit(train[col].values)
train[col] = lbl.transform(train[col].values)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = dt_prepare_data_plus_log(data.copy())
score = cross_validation(dt, train, y)
print score.mean()
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(random_state=0)
score = cross_validation(rfr, train, y)
print score.mean()
X_train, X_validation, y_train, y_validation = train_test_split(train, y, random_state = 3)
rfr.fit(X_train, y_train)
mes_predictions_exp = np.expm1(rfr.predict(X_validation))
mes_predictions_exp[0:5]
y_validation_exp = np.exp(y_validation)
y_validation_exp[0:5]
mean_absolute_error(y_validation_exp, mes_predictions_exp)
plt.scatter(mes_predictions_exp, y_validation_exp)
plt.plot([min(mes_predictions_exp),max(mes_predictions_exp)], [min(mes_predictions_exp),max(mes_predictions_exp)]
, c="red")
plt.xlabel('Mes predicitons')
plt.ylabel('Vrai valeurs')
def dt_prepare_data_plus_log_1(data):
#on enleve les id qui sont trop extreme
data = data.drop(data.index[[523,898, 1298]])
data['1stFlr_2ndFlr_Sf'] = np.log1p(data['1stFlrSF'] + data['2ndFlrSF'])
data.loc[data.MasVnrType == 'None', 'MasVnrArea'] = 0
data.loc[pd.isnull(data.BsmtFinType2), 'BsmtFinSF2'] = 0
features = [col for col in data.columns if col not in "SalePrice"]
train = data[features]
y = data.SalePrice
# Transforme log
y = np.log1p(y)
# String problem
categoricals = [x for x in train.columns if train[x].dtype == 'object']
for col in categoricals:
lbl = LabelEncoder()
lbl.fit(train[col].values)
train[col] = lbl.transform(train[col].values)
# Replace Nan value by mean of the column
train = train.fillna(train.mean())
print train.shape
return train, y
train, y = dt_prepare_data_plus_log_1(data.copy())
score = cross_validation(rfr, train, y)
print score.mean()
pd.DataFrame?
coef = pd.DataFrame({'col' : X_train.columns,'importance' : rfr.feature_importances_})
coef = coef.sort_values('importance', ascending=False)
top_tree_features = coef.col.head(25)
#plt.figure(figsize=(10, 5))
#coef.head(25).plot(kind='bar')
#plt.title('Feature Significance')
top_tree_features
coef_ridge = pd.DataFrame({'col' : X_train.columns,
'importance' : model_ridge.coef_})
coef_ridge[coef_ridge.col.isin(list(top_tree_features))].shape
#imp_coef = coef.sort_values().head(nb_important)
#imp_coef.plot(kind = "barh", figsize=(10, 8))
#plt.title("Coefficients in Model")
coef_ridge.tail()
rfr = RandomForestRegressor(n_estimators=100, random_state=0, n_jobs=-1)
rfr
score = cross_validation(rfr, train, y)
print score.mean()
RandomForestRegressor?
cv_rfr = []
n_estimators = [10, 50, 100, 200]
max_depths = [3, 5, 7]
for n_estimator in n_estimators:
for max_depth in max_depths:
print "Je lance n_estimator : " + str(n_estimator) + " et "+str(max_depth) + " max_depth."
score = cross_validation(RandomForestRegressor(n_estimators=n_estimator,
max_depth=max_depth,
random_state=0), train, y).mean()
cv_rfr.append({'n_estimator' : n_estimator,
'max_depths' : max_depth,
'score' : score})
cv_rfr_df = pd.DataFrame(cv_rfr)
cv_rfr_df
from sklearn.model_selection import GridSearchCV
param_grid = { "n_estimators" : [250, 300],
"max_depth" : [3, 5, 7, 9]}
#grid_search = GridSearchCV(rfr, param_grid, n_jobs=-1, cv=5)
grid_search = GridSearchCV(rfr,
param_grid,
n_jobs=-1,
cv=5,
scoring='neg_mean_absolute_error')
grid_search.fit(train, y)
#print grid_search.best_params_
grid_search.grid_scores_
print grid_search.best_params_
rfr = RandomForestRegressor(n_estimators=300, max_depth=9
, random_state=0)
X_train, X_validation, y_train, y_validation = train_test_split(train, y, random_state = 3)
rfr.fit(X_train, y_train)
mes_predictions_exp = np.expm1(rfr.predict(X_validation))
mes_predictions_exp[0:5]
y_validation_exp = np.exp(y_validation)
y_validation_exp[0:5]
mean_absolute_error(y_validation_exp, mes_predictions_exp)
plt.scatter(mes_predictions_exp, y_validation_exp)
plt.plot([min(mes_predictions_exp),max(mes_predictions_exp)], [min(mes_predictions_exp),max(mes_predictions_exp)]
, c="red")
plt.xlabel('Mes predicitons')
plt.ylabel('Vrai valeurs')
Explanation: On change chaque valeur en string en valeur numérique
End of explanation
model_ridge = Ridge(alpha=13.5, random_state=42)
mes_predictions_ridge = np.expm1(model_ridge.predict(X_validation_ridge))
mes_predictions_ridge[0:5]
mes_predictions_exp[0:5]
resultat = pd.DataFrame({'ridge' : mes_predictions_ridge,
'tree' : mes_predictions_exp,
'realite' : y_validation_exp})
resultat['moyenne'] = (resultat.ridge+ resultat.tree) / 2.0
resultat.head()
mean_absolute_error(resultat.realite, resultat.ridge)
mean_absolute_error(resultat.realite, resultat.tree)
mean_absolute_error(resultat.realite, resultat.moyenne)
Explanation: yeah mixer 2 algos !!!
End of explanation
coef = pd.Series(rfr.feature_importances_, index = X_train.columns).sort_values(ascending=False)
plt.figure(figsize=(10, 5))
coef.head(25).plot(kind='bar')
plt.title('Feature Significance')
Explanation: Importance des features dans un tree :
End of explanation |
6,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Using the high level transfer learning APIs, you can easily customize pretrained models for feature extraction or fine-tuning.
In this notebook, we will use a pre-trained Inception_V1 model. But we will operate on the pre-trained model to freeze first few layers, replace the classifier on the top, then fine tune the whole model. And we use the fine-tuned model to solve the dogs-vs-cats classification problem,
Preparation
1. Get the dogs-vs-cats datasets
Download the training dataset from https
Step1: manually set model_path and image_path for training
model_path = path to the pre-trained models. (E.g. path/to/model/bigdl_inception-v1_imagenet_0.4.0.model)
image_path = path to the folder of the training images. (E.g. path/to/data/dogs-vs-cats/demo/*/*)
Step2: Fine-tune a pre-trained model
We fine-tune a pre-trained model by removing the last few layers, freezing the first few layers, and adding some new layers.
Step3: Load a pre-trained model
We use the Net API to load a pre-trained model, including models saved by Analytics Zoo, BigDL, Torch, Caffe and Tensorflow. Please refer to Net API Guide.
Step4: Remove the last few layers
Here we print all the model layers and you can choose which layer(s) to remove.
When a model is loaded using Net, we can use the newGraph(output) api to define a Model with the output specified by the parameter.
Step5: The returning model's output layer is "pool5/drop_7x7_s1".
Freeze some layers
We freeze layers from input to pool4/3x3_s2 inclusive.
Step6: Add a few new layers
Step7: Train the model
The transfer learning can finish in a few minutes.
Step8: As we can see, the model from transfer learning can achieve over 95% accuracy on the validation set.
Visualize result
We randomly select some images to show, and print the prediction results here.
cat | Python Code:
import re
from bigdl.dllib.nn.criterion import CrossEntropyCriterion
from pyspark.ml import Pipeline
from pyspark.sql.functions import col, udf
from pyspark.sql.types import DoubleType, StringType
from bigdl.dllib.nncontext import *
from bigdl.dllib.feature.image import *
from bigdl.dllib.keras.layers import Dense, Input, Flatten
from bigdl.dllib.keras.models import *
from bigdl.dllib.net import *
from bigdl.dllib.nnframes import *
sc = init_nncontext("ImageTransferLearningExample")
Explanation: Transfer Learning
Using the high level transfer learning APIs, you can easily customize pretrained models for feature extraction or fine-tuning.
In this notebook, we will use a pre-trained Inception_V1 model. We will operate on the pre-trained model to freeze the first few layers, replace the classifier on top, and then fine-tune the whole model. We then use the fine-tuned model to solve the dogs-vs-cats classification problem.
Preparation
1. Get the dogs-vs-cats datasets
Download the training dataset from https://www.kaggle.com/c/dogs-vs-cats and extract it.
The following commands copy about 1100 images of cats and dogs into demo/cats and demo/dogs separately.
shell
mkdir -p demo/dogs
mkdir -p demo/cats
cp train/cat.7* demo/cats
cp train/dog.7* demo/dogs
2. Get the pre-trained Inception-V1 model
Download the pre-trained Inception-V1 model from Zoo
Alternatively, user may also download pre-trained caffe/Tensorflow/keras model.
End of explanation
model_path = "path/to/model/bigdl_inception-v1_imagenet_0.4.0.model"
image_path = "file://path/to/data/dogs-vs-cats/demo/*/*"
imageDF = NNImageReader.readImages(image_path, sc)
getName = udf(lambda row:
re.search(r'(cat|dog)\.([\d]*)\.jpg', row[0], re.IGNORECASE).group(0),
StringType())
getLabel = udf(lambda name: 1.0 if name.startswith('cat') else 2.0, DoubleType())
labelDF = imageDF.withColumn("name", getName(col("image"))) \
.withColumn("label", getLabel(col('name')))
(trainingDF, validationDF) = labelDF.randomSplit([0.9, 0.1])
labelDF.select("name","label").show(10)
Explanation: manually set model_path and image_path for training
model_path = path to the pre-trained models. (E.g. path/to/model/bigdl_inception-v1_imagenet_0.4.0.model)
image_path = path to the folder of the training images. (E.g. path/to/data/dogs-vs-cats/demo/*/*)
End of explanation
transformer = ChainedPreprocessing(
[RowToImageFeature(), ImageResize(256, 256), ImageCenterCrop(224, 224),
ImageChannelNormalize(123.0, 117.0, 104.0), ImageMatToTensor(), ImageFeatureToTensor()])
Explanation: Fine-tune a pre-trained model
We fine-tune a pre-trained model by removing the last few layers, freezing the first few layers, and adding some new layers.
End of explanation
full_model = Net.load_bigdl(model_path)
Explanation: Load a pre-trained model
We use the Net API to load a pre-trained model, including models saved by Analytics Zoo, BigDL, Torch, Caffe and Tensorflow. Please refer to Net API Guide.
End of explanation
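For reference, the same Net API also provides loaders for the other formats mentioned above. The calls below follow the Net API Guide, but their exact signatures should be treated as an assumption, and the file paths are placeholders only:
# Assumed loader signatures from the Net API Guide (paths are placeholders, not real files):
# caffe_model = Net.load_caffe("/path/to/deploy.prototxt", "/path/to/weights.caffemodel")
# torch_model = Net.load_torch("/path/to/model.t7")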
for layer in full_model.layers:
print (layer.name())
model = full_model.new_graph(["pool5/drop_7x7_s1"])
Explanation: Remove the last few layers
Here we print all the model layers and you can choose which layer(s) to remove.
When a model is loaded using Net, we can use the newGraph(output) api to define a Model with the output specified by the parameter.
End of explanation
model.freeze_up_to(["pool4/3x3_s2"])
Explanation: The returning model's output layer is "pool5/drop_7x7_s1".
Freeze some layers
We freeze layers from input to pool4/3x3_s2 inclusive.
End of explanation
inputNode = Input(name="input", shape=(3, 224, 224))
inception = model.to_keras()(inputNode)
flatten = Flatten()(inception)
logits = Dense(2)(flatten)
lrModel = Model(inputNode, logits)
classifier = NNClassifier(lrModel, CrossEntropyCriterion(), transformer) \
.setLearningRate(0.003).setBatchSize(4).setMaxEpoch(1).setFeaturesCol("image") \
.setCachingSample(False)
pipeline = Pipeline(stages=[classifier])
Explanation: Add a few new layers
End of explanation
catdogModel = pipeline.fit(trainingDF)
predictionDF = catdogModel.transform(validationDF).cache()
predictionDF.select("name","label","prediction").sort("label", ascending=False).show(10)
predictionDF.select("name","label","prediction").show(10)
correct = predictionDF.filter("label=prediction").count()
overall = predictionDF.count()
accuracy = correct * 1.0 / overall
print("Test Error = %g " % (1.0 - accuracy))
Explanation: Train the model
The transfer learning can finish in a few minutes.
End of explanation
samplecat=predictionDF.filter(predictionDF.prediction==1.0).limit(3).collect()
sampledog=predictionDF.filter(predictionDF.prediction==2.0).sort("label", ascending=False).limit(3).collect()
from IPython.display import Image, display
for cat in samplecat:
print ("prediction:"), cat.prediction
display(Image(cat.image.origin[5:]))
for dog in sampledog:
print ("prediction:"), dog.prediction
display(Image(dog.image.origin[5:]))
Explanation: As we can see, the model from transfer learning can achieve over 95% accuracy on the validation set.
Visualize result
We randomly select some images to show, and print the prediction results here.
cat: prediction = 1.0
dog: prediction = 2.0
End of explanation |
6,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assigning particles unique IDs and removing particles from the simulation
For some applications, it is useful to keep track of which particle is which, and this can get jumbled up when particles are added or removed from the simulation. It can therefore be useful for particles to have unique IDs associated with them.
Let's set up a simple simulation with 10 bodies, and give them IDs in the order we add the particles (if you don't set them explicitly, all particle IDs default to 0)
Step1: Now let's do a simple example where we do a short initial integration to isolate the particles that interest us for a longer simulation
Step2: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x > 0$ at the end of the preliminary integration. Let's first print out the particle ID and x position.
Step3: Next, let's use the remove() function to filter out particle. As an argument, we pass the corresponding index in the particles array.
Step4: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved (e.g., to help with output).
By running through the planets in reverse order above, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it).
If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0
Step5: We see that the particles array is no longer sorted by ID. Note that the default keepSorted=1 only keeps things sorted (i.e., if they were sorted by ID to start with). If you custom-assign IDs out of order as you add particles, the default will simply preserve the original order.
You might also have been surprised that the above sim.remove(2, keepSorted=0) succeeded, since there was no id=2 left in the simulation. That's because remove() takes the index in the particles array, so we removed the 3rd particle (with id=4). If you'd like to remove a particle by id, use the id keyword, e.g. | Python Code:
import rebound
import numpy as np
def setupSimulation(Nbodies):
sim = rebound.Simulation()
sim.integrator = "ias15" # IAS15 is the default integrator, so we don't need this line
sim.add(m=1.,id=0)
for i in range(1,Nbodies):
sim.add(m=1e-5,x=i,vy=i**(-0.5),id=i)
sim.move_to_com()
return sim
Nbodies=10
sim = setupSimulation(Nbodies)
print([sim.particles[i].id for i in range(sim.N)])
Explanation: Assigning particles unique IDs and removing particles from the simulation
For some applications, it is useful to keep track of which particle is which, and this can get jumbled up when particles are added or removed from the simulation. It can therefore be useful for particles to have unique IDs associated with them.
Let's set up a simple simulation with 10 bodies, and give them IDs in the order we add the particles (if you don't set them explicitly, all particle IDs default to 0):
End of explanation
Noutputs = 1000
xs = np.zeros((Nbodies, Noutputs))
ys = np.zeros((Nbodies, Noutputs))
times = np.linspace(0.,50*2.*np.pi, Noutputs, endpoint=False)
for i, time in enumerate(times):
sim.integrate(time)
xs[:,i] = [sim.particles[j].x for j in range(Nbodies)]
ys[:,i] = [sim.particles[j].y for j in range(Nbodies)]
%matplotlib inline
import matplotlib.pyplot as plt
fig,ax = plt.subplots(figsize=(15,5))
for i in range(Nbodies):
plt.plot(xs[i,:], ys[i,:])
ax.set_aspect('equal')
Explanation: Now let's do a simple example where we do a short initial integration to isolate the particles that interest us for a longer simulation:
End of explanation
print("ID\tx")
for i in range(Nbodies):
print("{0}\t{1}".format(i, xs[i,-1]))
Explanation: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x > 0$ at the end of the preliminary integration. Let's first print out the particle ID and x position.
End of explanation
for i in reversed(range(1,Nbodies)):
if xs[i,-1] < 0:
sim.remove(i)
print("Number of particles after cut = {0}".format(sim.N))
print("IDs of remaining particles = {0}".format([p.id for p in sim.particles]))
Explanation: Next, let's use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array.
End of explanation
sim.remove(2, keepSorted=0)
print("Number of particles after cut = {0}".format(sim.N))
print("IDs of remaining particles = {0}".format([p.id for p in sim.particles]))
Explanation: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved (e.g., to help with output).
By running through the planets in reverse order above, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it).
If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0:
End of explanation
sim.remove(id=9)
print("Number of particles after cut = {0}".format(sim.N))
print("IDs of remaining particles = {0}".format([p.id for p in sim.particles]))
Explanation: We see that the particles array is no longer sorted by ID. Note that the default keepSorted=1 only keeps things sorted (i.e., if they were sorted by ID to start with). If you custom-assign IDs out of order as you add particles, the default will simply preserve the original order.
You might also have been surprised that the above sim.remove(2, keepSorted=0) succeeded, since there was no id=2 left in the simulation. That's because remove() takes the index in the particles array, so we removed the 3rd particle (with id=4). If you'd like to remove a particle by id, use the id keyword, e.g.
End of explanation |
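To round off the example (a small sketch, not from the original notebook), the id keyword can also be used inside a loop, e.g. to drop every remaining particle whose id is above some cutoff:
# Remove all remaining particles with id > 5 using the id keyword.
# Build the id list first so we don't mutate sim.particles while iterating over it.
for pid in [p.id for p in sim.particles if p.id > 5]:
    sim.remove(id=pid)
print("IDs of remaining particles = {0}".format([p.id for p in sim.particles]))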
6,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Prediction with BQML and AutoML
Objectives
1. Learn how to use BQML to create a classification time-series model using CREATE MODEL.
2. Learn how to use BQML to create a linear regression time-series model.
3. Learn how to use AutoML Tables to build a time series model from data in BigQuery.
Set up environment variables and load necessary libraries
Step2: Create the dataset
Step3: Review the dataset
We created the data in the previous lab; if you haven't run that notebook, go back to 2_feature_engineering.ipynb to create it. We will use the resulting table, saved in BigQuery, for modeling.
Let's examine that table again to see that everything is as we expect. Then, we will build a model using BigQuery ML using this table.
Step4: Using BQML
Create classification model for direction
To create a model
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
We'll start with creating a classification model to predict the direction of each stock.
We'll take a random split using the symbol value. With about 500 different values, using ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 will give 30 distinct symbol values which corresponds to about 171,000 training examples. After taking 70% for training, we will be building a model on about 110,000 training examples.
Lab Task #1a
Step5: Get training statistics and examine training info
After creating our model, we can evaluate the performance using the ML.EVALUATE function. With this command, we can find the precision, recall, accuracy F1-score and AUC of our classification model.
Lab Task #1b
Step6: We can also examine the training statistics collected by Big Query. To view training results we use the ML.TRAINING_INFO function.
Lab Task #1c
Step7: Compare to simple benchmark
Another way to asses the performance of our model is to compare with a simple benchmark. We can do this by seeing what kind of accuracy we would get using the naive strategy of just predicted the majority class. For the training dataset, the majority class is 'STAY'. The following query we can see how this naive strategy would perform on the eval set.
Step8: So, the naive strategy of just guessing the majority class would have accuracy of 0.5509 on the eval dataset, just below our BQML model.
Create regression model for normalized change
We can also use BigQuery to train a regression model to predict the normalized change for each stock. To do this in BigQuery we need only change the OPTIONS when calling CREATE OR REPLACE MODEL. This will give us a more precise prediction rather than just predicting if the stock will go up, down, or stay the same. Thus, we can treat this problem as either a regression problem or a classification problem, depending on the business needs.
Lab Task #2a
Step9: Just as before we can examine the evaluation metrics for our regression model and examine the training statistics in Big Query | Python Code:
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT = {PROJECT}
%env REGION = "us-central1"
Explanation: Time Series Prediction with BQML and AutoML
Objectives
1. Learn how to use BQML to create a classification time-series model using CREATE MODEL.
2. Learn how to use BQML to create a linear regression time-series model.
3. Learn how to use AutoML Tables to build a time series model from data in BigQuery.
Set up environment variables and load necessary libraries
End of explanation
from google.cloud import bigquery
from IPython import get_ipython
bq = bigquery.Client(project=PROJECT)
def create_dataset():
dataset = bigquery.Dataset(bq.dataset("stock_market"))
try:
bq.create_dataset(dataset) # Will fail if dataset already exists.
print("Dataset created")
except:
print("Dataset already exists")
def create_features_table():
error = None
try:
bq.query("""
CREATE TABLE stock_market.eps_percent_change_sp500
AS
SELECT *
FROM `stock_market.eps_percent_change_sp500`
""").to_dataframe()
except Exception as e:
error = str(e)
if error is None:
print("Table created")
elif "Already Exists" in error:
print("Table already exists.")
else:
print(error)
raise Exception("Table was not created.")
create_dataset()
create_features_table()
Explanation: Create the dataset
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
stock_market.eps_percent_change_sp500
LIMIT
10
Explanation: Review the dataset
We created the data in the previous lab; if you haven't run that notebook, go back to 2_feature_engineering.ipynb to create it. We will use the resulting table, saved in BigQuery, for modeling.
Let's examine that table again to see that everything is as we expect. Then, we will build a model using BigQuery ML using this table.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
# TODO: Your code goes here
-- query to fetch training data
SELECT
# TODO: Your code goes here
FROM
`stock_market.eps_percent_change_sp500`
WHERE
# TODO: Your code goes here
Explanation: Using BQML
Create classification model for direction
To create a model
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
We'll start with creating a classification model to predict the direction of each stock.
We'll take a random split using the symbol value. With about 500 different values, using ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 will give 30 distinct symbol values which corresponds to about 171,000 training examples. After taking 70% for training, we will be building a model on about 110,000 training examples.
Lab Task #1a: Create model using BQML
Use BQML's CREATE OR REPLACE MODEL to train a classification model which predicts the direction of a stock using the features in the percent_change_sp500 table. Look at the documentation for creating a BQML model to get the right syntax. Use ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 to train on a subsample.
End of explanation
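Before training, it can help to sanity-check the hash-based split described above. The query below is only an illustrative sketch (not one of the lab tasks); it counts distinct symbols and rows in the subsample used for training:
%%bigquery --project $PROJECT
#standardSQL
-- Illustrative check of the FARM_FINGERPRINT-based subsample (not a lab task).
SELECT
  COUNT(DISTINCT symbol) AS n_symbols,
  COUNT(*) AS n_rows
FROM
  `stock_market.eps_percent_change_sp500`
WHERE
  ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1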
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
# TODO: Your code goes here.
Explanation: Get training statistics and examine training info
After creating our model, we can evaluate the performance using the ML.EVALUATE function. With this command, we can find the precision, recall, accuracy F1-score and AUC of our classification model.
Lab Task #1b: Evaluate your BQML model.
Use BQML's EVALUATE to evaluate the performance of your model on the validation set. Your query should be similar to this example.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
# TODO: Your code goes here
Explanation: We can also examine the training statistics collected by Big Query. To view training results we use the ML.TRAINING_INFO function.
Lab Task #1c: Examine the training information in BQML.
Use BQML's TRAINING_INFO to see statistics of the training job executed above.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
WITH
eval_data AS (
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85)
SELECT
direction,
(COUNT(direction)* 100 / (
SELECT
COUNT(*)
FROM
eval_data)) AS percentage
FROM
eval_data
GROUP BY
direction
Explanation: Compare to simple benchmark
Another way to asses the performance of our model is to compare with a simple benchmark. We can do this by seeing what kind of accuracy we would get using the naive strategy of just predicted the majority class. For the training dataset, the majority class is 'STAY'. The following query we can see how this naive strategy would perform on the eval set.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
# TODO: Your code goes here
-- query to fetch training data
SELECT
# TODO: Your code goes here
FROM
`stock_market.eps_percent_change_sp500`
WHERE
# TODO: Your code goes here
Explanation: So, the naive strategy of just guessing the majority class would have accuracy of 0.5509 on the eval dataset, just below our BQML model.
Create regression model for normalized change
We can also use BigQuery to train a regression model to predict the normalized change for each stock. To do this in BigQuery we need only change the OPTIONS when calling CREATE OR REPLACE MODEL. This will give us a more precise prediction rather than just predicting if the stock will go up, down, or stay the same. Thus, we can treat this problem as either a regression problem or a classification problem, depending on the business needs.
Lab Task #2a: Create a regression model in BQML.
Use BQML's CREATE OR REPLACE MODEL to train another model, this time a regression model, which predicts the normalized_change of a given stock based on the same features we used above.
End of explanation
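As a fragment-level hint (a sketch only, not the full lab solution): per the text above, the regression variant mostly changes the OPTIONS clause and the label column, with the destination model named `stock_market.price_model` as used by the evaluation query below.
-- OPTIONS(model_type='linear_reg', input_label_cols=['normalized_change'])
-- CREATE OR REPLACE MODEL `stock_market.price_model` OPTIONS(...) AS SELECT ...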
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.price_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.price_model`)
ORDER BY iteration
Explanation: Just as before we can examine the evaluation metrics for our regression model and examine the training statistics in Big Query
End of explanation |
6,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Librairies
Step1: read file content
Step2: Dots seem to follow a line, we could have done a correlation test to check if the two variabes are linked. Now we transform the data matrix into two numpy arrays.
Step3: now we will developp the two functions predict (apply theta to the X matrix) and gradient_descent1 (update theta)
Step4: Expected output (for alpha 0.01 and 1500 iterations)
Step5: the cost function will allow us to record the evolution of the cost during the gradient descent
Step6: expected output for [0, 0]
Step7: Expected output for alhpa 0.01 and 1500 iterations | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Libraries
End of explanation
data = pd.read_csv('ex1data1.txt', header=None, names=['population', 'profit'])
data.head()
data.plot.scatter('population', 'profit')
Explanation: read file content
End of explanation
X = np.array(data["population"])
y = np.array(data["profit"])
Explanation: The points seem to follow a line; we could have run a correlation test to check whether the two variables are linked. Now we transform the data matrix into two numpy arrays.
End of explanation
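The correlation test mentioned above takes one line (a quick sketch, not required by the exercise); it uses the X and y arrays defined just above:
# Pearson correlation between population and profit.
print(np.corrcoef(X, y)[0, 1])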
def predict(X, theta):
return (X * theta[1] + theta[0])
def gradient_descent1(X, y, theta, alpha, num_iters):
m = X.shape[0]
for i in range(0, num_iters):
theta0 = theta[0] - (alpha / m) * np.sum(predict(X, theta) - y)
theta1 = theta[1] - (alpha / m) * np.dot(predict(X, theta) - y, np.transpose(X))
theta = [theta0, theta1]
return theta
theta = np.zeros(2, dtype=float)
theta = gradient_descent1(X, y, theta, 0.01, 1500)
theta
Explanation: Now we will develop the two functions predict (apply theta to the X matrix) and gradient_descent1 (update theta).
End of explanation
def visualize(theta):
fig = plt.figure()
ax = plt.axes()
ax.set_xlim([4.5,22.5])
ax.set_ylim([-5, 25])
ax.scatter(X, y)
line_x = np.linspace(0,22.5, 20)
line_y = theta[0] + line_x * theta[1]
ax.plot(line_x, line_y)
plt.show()
visualize(theta)
Explanation: Expected output (for alpha 0.01 and 1500 iterations):[-3.6302914394043597, 1.166362350335582]
The visualize function plots our dataset with the regression line corresponding to theta.
End of explanation
def cost(X, y, theta):
loss = predict(X, theta) - y
cost = (1 / (2 * X.shape[0])) * np.dot(loss, np.transpose(loss))
return(cost)
cost(X, y, [0, 0])
Explanation: The cost function will allow us to record the evolution of the cost during gradient descent.
End of explanation
def gradient_descent(X, y, theta, alpha, num_iters):
m = X.shape[0]
J_history = []
for i in range(0, num_iters):
theta0 = theta[0] - (alpha / m) * np.sum(predict(X, theta) - y)
theta1 = theta[1] - (alpha / m) * np.dot(predict(X, theta) - y, np.transpose(X))
theta = [theta0, theta1]
J_history.append(cost(X, y, theta))
return theta, J_history
theta = np.zeros(2, dtype=float)
theta, J_history = gradient_descent(X, y, theta, 0.01, 1500)
theta
Explanation: Expected output for [0, 0]: 32.072733877455676
The full version of gradient descent now records the cost history.
End of explanation
fit = plt.figure()
ax = plt.axes()
ax.plot(J_history)
Explanation: Expected output for alpha 0.01 and 1500 iterations: [-3.6302914394043597, 1.166362350335582]
End of explanation |
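As a final sanity check (a sketch, not part of the original exercise), the closed-form least-squares fit can be compared with the theta found by gradient descent:
# Closed-form least-squares fit for comparison with the gradient-descent theta.
slope, intercept = np.polyfit(X, y, 1)
print([intercept, slope])  # should be close to [-3.63..., 1.166...] found above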
6,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Corpus callosum's shape signature for segmentation error detection in large datasets
Abstract
The Corpus Callosum (CC) is a subcortical white matter structure of great importance in clinical and research studies because its shape and volume are correlated with subject characteristics and neurodegenerative diseases. CC segmentation is an important step for any subsequent medical, clinical or research study. Currently, magnetic resonance imaging (MRI) is the main tool for evaluating the brain because it offers the best soft-tissue contrast. In particular, segmentation in the diffusion MRI modality is of great importance given the information it carries about brain microstructure and fiber composition.
In this work, a method for the detection of erroneous segmentations in large datasets is proposed, based on a shape signature. The shape signature is obtained from the segmentation by calculating the curvature along the contour using a spline formulation. A mean correct signature is used as a reference to compare new segmentations through the root mean square error. This method was applied to a 152-subject dataset for three different segmentation methods in diffusion
Step1: Introduction
The Corpus Callosum (CC) is the largest white matter structure in the central nervous system that connects both brain hemispheres and allows the communication between them. The CC has great importance in research studies due to the correlation between shape and volume with some subject's characteristics, such as
Step2: Shape signature for comparison
The signature is a shape descriptor that measures the rate of variation along the segmentation contour. As shown in the figure, the curvature $k$ at the pivot point $p$, with coordinates ($x_p$,$y_p$), is calculated using the equation below. This curvature depicts the angle between the segments $\overline{(x_{p-ls},y_{p-ls})(x_p,y_p)}$ and $\overline{(x_p,y_p)(x_{p+ls},y_{p+ls})}$. These segments are located at a distance $ls>0$, starting at the pivot point and finishing at the anterior and posterior points, respectively.
The signature is obtained by calculating the curvature along the whole segmentation contour.
\begin{equation} \label{eq
Step3: Autoencoder
Step4: Testing in new datasets
ROQS test
Step5: Pixel-based test | Python Code:
## Functions
import sys,os
import copy
path = os.path.abspath('../dev/')
if path not in sys.path:
sys.path.append(path)
import bib_mri as FW
import numpy as np
import scipy as scipy
import scipy.misc as misc
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy import genfromtxt
import platform
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
%matplotlib inline
def sign_extract(seg, resols): #Function for shape signature extraction
splines = FW.get_spline(seg,smoothness)
sign_vect = np.array([]).reshape(0,points) #Initializing temporal signature vector
for resol in resols:
sign_vect = np.vstack((sign_vect, FW.get_profile(splines, n_samples=points, radius=resol)))
return sign_vect
def sign_fit(sig_ref, sig_fit): #Function for signature fitting
dif_curv = []
for shift in range(points):
dif_curv.append(np.abs(np.sum((sig_ref - np.roll(sig_fit[0],shift))**2)))
return np.apply_along_axis(np.roll, 1, sig_fit, np.argmin(dif_curv))
print "Python version: ", platform.python_version()
print "Numpy version: ", np.version.version
print "Scipy version: ", scipy.__version__
print "Matplotlib version: ", mpl.__version__
Explanation: Corpus callosum's shape signature for segmentation error detection in large datasets
Abstract
The Corpus Callosum (CC) is a subcortical white matter structure of great importance in clinical and research studies because its shape and volume are correlated with subject characteristics and neurodegenerative diseases. CC segmentation is an important step for any subsequent medical, clinical or research study. Currently, magnetic resonance imaging (MRI) is the main tool for evaluating the brain because it offers the best soft-tissue contrast. In particular, segmentation in the diffusion MRI modality is of great importance given the information it carries about brain microstructure and fiber composition.
In this work, a method for the detection of erroneous segmentations in large datasets is proposed, based on a shape signature. The shape signature is obtained from the segmentation by calculating the curvature along the contour using a spline formulation. A mean correct signature is used as a reference to compare new segmentations through the root mean square error. This method was applied to a 152-subject dataset for three different segmentation methods in diffusion: Watershed, ROQS and pixel-based, with high accuracy in error detection. The method does not require a per-segmentation reference and can be applied to any MRI modality and to other imaging applications.
End of explanation
#Loading labeled segmentations
seg_label = genfromtxt('../../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')
list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations
list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels
ind_ex_err = list_masks[np.where(list_labels)[0]]
ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]
print "Mask List", list_masks
print "Label List", list_labels
print "Correct List", ind_ex_cor
print "Erroneous List", ind_ex_err
mask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_cor[10]))
mask_error = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_err[10]))
plt.figure()
plt.axis('off')
plt.imshow(mask_correct,'gray',interpolation='none')
plt.title("Correct segmentation example")
plt.show()
plt.figure()
plt.axis('off')
plt.imshow(mask_error,'gray',interpolation='none')
plt.title("Erroneous segmentation example")
plt.show()
Explanation: Introduction
The Corpus Callosum (CC) is the largest white matter structure in the central nervous system; it connects both brain hemispheres and allows communication between them. The CC has great importance in research studies due to the correlation of its shape and volume with subject characteristics such as gender, age, numeric and mathematical skills, and handedness. In addition, some neurodegenerative diseases like Alzheimer's, autism, schizophrenia and dyslexia could cause CC shape deformation.
CC segmentation is a necessary step for morphological and physiological feature extraction in order to analyze the structure in image-based clinical and research applications. Magnetic Resonance Imaging (MRI) is the most suitable imaging technique for CC segmentation due to its ability to provide contrast between brain tissues; however, CC segmentation is challenging because of the shape and intensity variability between subjects, the partial volume effect in diffusion MRI, the proximity of the fornix, and narrow areas of the CC. Among the known MRI modalities, diffusion MRI is of special interest for studying the CC, despite its low resolution and high complexity, since it provides useful information related to the organization of brain tissues and the magnetic field does not interfere with the diffusion process itself.
Several CC segmentation approaches using diffusion MRI can be found in the literature. Niogi et al. proposed a method based on thresholding, Freitas et al. and Rittner et al. proposed region methods based on the Watershed transform, Nazem-Zadeh et al. implemented a method based on level surfaces, Kong et al. presented a clustering algorithm for segmentation, Herrera et al. segmented the CC directly in diffusion-weighted imaging (DWI) using a model based on pixel classification, and Garcia et al. proposed a hybrid segmentation method based on active geodesic regions and level surfaces.
With the growth of data and the proliferation of automatic algorithms, segmentation over large databases has become affordable. Therefore, automatic error detection is important in order to facilitate and speed up the filtering of CC segmentation databases. Previous works presented proposals for content-based image retrieval (CBIR) using the shape signature of a planar object representation.
In this work, a method for automatic detection of segmentation errors in large datasets is proposed based on the CC shape signature. The signature offers a shape characterization of the CC, and therefore it is expected that a "typical correct signature" represents well any correct segmentation. The signature is extracted by measuring curvature along the segmentation contour. The method was implemented in three main stages: mean correct signature generation, signature configuration and method testing. The first stage takes 20 correct segmentations and generates one reference correct signature (the typical correct signature) per resolution, using the mean value at each point. The second stage takes 10 correct and 10 erroneous segmentations and adjusts the optimal resolution and threshold, based on the mean correct signature, that allow detection of erroneous segmentations. The third stage labels a new segmentation as correct or erroneous by comparing it with the mean signature at the optimal resolution and threshold.
<img src="../figures/workflow.png">
The comparison between signatures is done using the root mean square error (RMSE). The true label for each segmentation was assigned visually; a correct segmentation corresponds to a segmentation with at least 50% agreement with the structure. It is expected that the RMSE for correct segmentations is lower than the RMSE associated with erroneous segmentations when compared against a typical correct signature.
End of explanation
smoothness = 700 #Smoothness
degree = 5 #Spline degree
fit_res = 0.35
resols = np.arange(0.01,0.5,0.01) #Signature resolutions
resols = np.insert(resols,0,fit_res) #Insert resolution for signature fitting
points = 500 #Points of Spline reconstruction
prof_vec = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector
for ind, mask in enumerate(list_masks):
#Loading correct mask
mask_pn = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask))
refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction
prof_vec[ind] = refer_temp
if mask > 0: #Fitting curves using the first one as basis
prof_ref = prof_vec[0]
prof_vec[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting
ind_rel_cor = np.where(np.logical_not(list_labels))[0]
ind_rel_err = np.where(list_labels)[0]
print "Correct segmentations' vector: ", prof_vec[ind_rel_cor].shape
print "Erroneous segmentations' vector: ", prof_vec[ind_rel_err].shape
print(ind_rel_cor.shape)
print(ind_ex_cor.shape)
res_ex = 15
#for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor):
# plt.figure()
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# ax1.plot(prof_vec[ind_rel,res_ex,:].T)
# ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex]))
#
# mask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex))
# ax2.axis('off')
# ax2.imshow(mask_correct,'gray',interpolation='none')
#
# plt.show()
plt.figure()
plt.plot(prof_vec[ind_rel_cor,res_ex,:].T)
plt.title("Correct signatures for res: %f"%(resols[res_ex]))
plt.show()
plt.figure()
plt.plot(prof_vec[ind_rel_err,res_ex,:].T)
plt.title("Erroneous signatures for res: %f"%(resols[res_ex]))
plt.show()
Explanation: Shape signature for comparison
The signature is a shape descriptor that measures the rate of variation along the segmentation contour. As shown in the figure, the curvature $k$ at the pivot point $p$, with coordinates ($x_p$,$y_p$), is calculated using the equation below. This curvature depicts the angle between the segments $\overline{(x_{p-ls},y_{p-ls})(x_p,y_p)}$ and $\overline{(x_p,y_p)(x_{p+ls},y_{p+ls})}$. These segments are located at a distance $ls>0$, starting at the pivot point and finishing at the anterior and posterior points, respectively.
The signature is obtained by calculating the curvature along the whole segmentation contour.
\begin{equation} \label{eq:per1}
k(x_p,y_p) = \arctan\left(\frac{y_{p+ls}-y_p}{x_{p+ls}-x_p}\right)-\arctan\left(\frac{y_p-y_{p-ls}}{x_p-x_{p-ls}}\right)
\end{equation}
<img src="../figures/curvature.png">
Signature construction is performed from the segmentation contour of the CC. From the contour, a spline is obtained. The spline's purpose is twofold: to get a smooth representation of the contour and to facilitate calculation of
the curvature using its parametric representation. The signature is obtained by measuring the curvature along the spline. $ls$ is the parametric distance between the pivot point and both the posterior and anterior points, and it determines the signature resolution. For simplicity, $ls$ is measured as a percentage of the reconstructed spline points.
In order to achieve a quantitative comparison between two signatures, the root mean square error (RMSE) is introduced. RMSE measures the point-to-point distance between signatures $a$ and $b$ along all points $p$ of the signatures.
\begin{equation} \label{eq:per4}
RMSE = \sqrt{\frac{1}{P}\sum_{p=1}^{P}(k_{ap}-k_{bp})^2}
\end{equation}
Frequently, the signatures of different segmentations are not aligned along the x axis, because the initial point of the spline calculation starts at different relative positions. This makes it impossible to compare two signatures directly, and therefore a prior fitting process must be carried out. The fitting is done by shifting one of the signatures while the other is kept fixed. For each shift, the RMSE between the two signatures is measured; the shift giving the smallest error is the fitting point. Fitting was done at resolution $ls = 0.35$, since this resolution represents the CC's global shape and eases the fitting.
After fitting, RMSE between signatures can be measured in order to achieve final quantitative comparison.
Signature for segmentation error detection
For segmentation error detection, a typical correct signature is obtained by calculating the mean over a group of signatures from correct segmentations. Because this signature can be used at any resolution, $ls$ must be chosen to achieve segmentation error detection. The optimal resolution must return the greatest RMSE difference between correct and erroneous segmentations when compared with the typical correct signature.
At the optimal resolution, a threshold must be chosen to separate erroneous from correct segmentations. This threshold lies between the RMSE associated with correct ($RMSE_C$) and erroneous ($RMSE_E$) signatures and is given by the next equation, where N (in percentage) represents the proximity to the correct or erroneous RMSE. If the RMSE is calculated over a group of signatures, the mean value is used.
\begin{equation} \label{eq:eq3}
th = N*(\overline{RMSE_E}-\overline{RMSE_C})+\overline{RMSE_C}
\end{equation}
Experiments and results
In this work, the comparison of signatures through RMSE is used for segmentation error detection in large datasets. For this, a mean correct signature is calculated from 20 correct segmentation signatures. This mean correct signature represents a typical correct segmentation. For a new segmentation, its signature is extracted and compared with the mean signature.
For the experiments, DWI from 152 subjects at the University of Campinas were acquired on a Philips Achieva 3T scanner in the axial plane with a $1$x$1mm$ spatial resolution and $2mm$ slice thickness, along $32$ directions ($b-value=1000s/mm^2$, $TR=8.5s$, and $TE=61ms$). All data used in this experiment were acquired through a project approved by the research ethics committee of the School of Medicine at UNICAMP. From each acquired DWI volume, only the midsagittal slice was used.
Three segmentation methods were implemented to obtain binary masks over the 152-subject dataset: Watershed, ROQS and pixel-based. 40 Watershed segmentations were chosen as follows: 20 correct segmentations for mean correct signature generation, and 10 correct and 10 erroneous segmentations for the signature configuration stage. Watershed was chosen to generate and adjust the mean signature because of its higher error rate and the variability of its erroneous segmentation shapes; these characteristics help improve generalization. The method was tested on the remaining Watershed segmentations (108 masks) and on two additional segmentation methods: ROQS (152 masks) and pixel-based (152 masks).
Mean correct signature generation
In this work, segmentations based on the Watershed method were used for the implementation of the first and second stages. From the Watershed dataset, 20 correct segmentations were chosen, and a spline was obtained from each segmentation contour. The contour was obtained using mathematical morphology, applying a pixel-wise XOR logical operation between the original segmentation and the version of itself eroded by a structuring element b:
\begin{equation} \label{eq:per2}
G_E = XOR(S,S \ominus b)
\end{equation}
From the contour, the spline is calculated. The implementation is a B-spline (de Boor's basic spline). This formulation has two parameters: the degree, i.e. the polynomial degree of the spline, and the smoothness, which is the trade-off between proximity and smoothness in the fit of the spline. The degree was fixed at 5, allowing an adequate representation of the contour. The smoothness was fixed at 700; this value is based on the mean number of contour pixels passed to the spline calculation. The curvature was measured at 500 points along the spline to generate the signature for the 20 segmentations. Signatures were fitted to make comparison possible (see the signature figure). The fitting resolution was fixed at 0.35.
In order to get a representative correct signature, a mean signature is generated per resolution using the 20 correct signatures. The mean is calculated at each point.
Signature configuration
Because the mean signature was extracted for all resolutions, it is necessary to find the resolution at which the difference between the RMSE for correct signatures and the RMSE for erroneous signatures is maximal. So, 20 new segmentations were used to find this optimal resolution, divided into 10 correct and 10 erroneous segmentations. For each segmentation, the signature was extracted at all resolutions.
End of explanation
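The two equations above translate directly into a couple of helpers. This is only a sketch of how they could be coded; the names rmse and threshold are not used elsewhere in the notebook.
def rmse(sig_a, sig_b): #Root mean square error between two signatures (sketch)
    return np.sqrt(np.mean((sig_a - sig_b)**2))
def threshold(rmse_correct, rmse_erroneous, N): #Threshold between mean RMSEs, with N in [0, 1] (sketch)
    return N * (np.mean(rmse_erroneous) - np.mean(rmse_correct)) + np.mean(rmse_correct)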
def train(model,train_loader,loss_fn,optimizer,epochs=100,patience=5,criteria_stop="loss"):
hist_train_loss = hist_val_loss = hist_train_acc = hist_val_acc = np.array([])
best_epoch = patience_count = 0
print("Training starts along %i epoch"%epochs)
for e in range(epochs):
correct_train = correct_val = total_train = total_val = 0
cont_i = loss_t_e = loss_v_e = 0
for data_train in train_loader:
var_inputs = Variable(data_train)
predict, encode = model(var_inputs)
loss = loss_fn(predict, var_inputs.view(-1, 500))
loss_t_e += loss.data[0]
optimizer.zero_grad()
loss.backward()
optimizer.step()
cont_i += 1
#Stacking historical
hist_train_loss = np.hstack((hist_train_loss, loss_t_e/(cont_i*1.0)))
print('Epoch: ', e, 'train loss: ', hist_train_loss[-1])
if(e == epochs-1):
best_epoch = e
best_model = copy.deepcopy(model)
print("Training stopped")
patience_count += 1
return(best_model, hist_train_loss, hist_val_loss)
class autoencoder(nn.Module):
def __init__(self):
super(autoencoder, self).__init__()
self.fc1 = nn.Linear(500, 200)
self.fc21 = nn.Linear(200, 2)
self.fc3 = nn.Linear(2, 200)
self.fc4 = nn.Linear(200, 500)
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def encode(self, x):
h1 = self.relu(self.fc1(x))
return self.fc21(h1)
def decode(self, z):
h3 = self.relu(self.fc3(z))
return self.sigmoid(self.fc4(h3))
def forward(self, x):
z = self.encode(x.view(-1, 500))
return self.decode(z), z
class decoder(nn.Module):
def __init__(self):
super(decoder, self).__init__()
self.fc3 = nn.Linear(2, 200)
self.fc4 = nn.Linear(200, 500)
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def decode(self, z):
h3 = self.relu(self.fc3(z))
return self.sigmoid(self.fc4(h3))
def forward(self, x):
return self.decode(x.view(-1, 2))
net = autoencoder()
print(net)
res_chs = res_ex
trainloader = prof_vec[:,res_chs,:]
val_norm = np.amax(trainloader).astype(float)
print val_norm
trainloader = trainloader / val_norm
trainloader = torch.FloatTensor(trainloader)
print trainloader.size()
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters())
epochs = 20
patience = 5
max_batch = 64
criteria = "loss"
best_model, loss, loss_test = train(net, trainloader, loss_fn, optimizer, epochs = epochs,
patience = patience, criteria_stop = criteria)
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.plot(loss, label='Train')
plt.legend()
plt.show()
decode, encode = net(Variable(trainloader))
out_decod = decode.data.numpy()
out_encod = encode.data.numpy()
print(out_decod.shape, out_encod.shape, list_labels.shape)
plt.figure(figsize=(7, 6))
plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)
plt.show()
Explanation: Autoencoder
End of explanation
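As a side note (a sketch, not in the original notebook), the per-sample reconstruction error of the autoencoder can also serve as an error score, in the spirit of the RMSE comparison described earlier; it reuses out_decod and trainloader from the cells above.
#Per-signature reconstruction error; larger values suggest atypical (possibly erroneous) segmentations.
recon_error = np.sqrt(np.mean((out_decod - trainloader.numpy())**2, axis=1))
print(recon_error.shape)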
#Loading labeled segmentations
seg_label = genfromtxt('../../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8')
list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations
list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels
ind_ex_err = list_masks[np.where(list_labels)[0]]
ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]
prof_vec_roqs = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector
for ind, mask in enumerate(list_masks):
mask_pn = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask)) #Loading mask
refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction
prof_vec_roqs[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis
ind_rel_cor = np.where(np.logical_not(list_labels))[0]
ind_rel_err = np.where(list_labels)[0]
print "Correct segmentations' vector: ", prof_vec_roqs[ind_rel_cor].shape
print "Erroneous segmentations' vector: ", prof_vec_roqs[ind_rel_err].shape
#for ind_ex, ind_rel in zip(ind_ex_err, ind_rel_err):
# plt.figure()
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# ax1.plot(prof_vec_roqs[ind_rel,res_ex,:].T)
# ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex]))
#
# mask_correct = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(ind_ex))
# ax2.axis('off')
# ax2.imshow(mask_correct,'gray',interpolation='none')
#
# plt.show()
plt.figure()
plt.plot(prof_vec_roqs[ind_rel_cor,res_ex,:].T)
plt.title("Correct signatures for res: %f"%(resols[res_ex]))
plt.show()
plt.figure()
plt.plot(prof_vec_roqs[ind_rel_err,res_ex,:].T)
plt.title("Erroneous signatures for res: %f"%(resols[res_ex]))
plt.show()
trainloader = prof_vec_roqs[:,res_chs,:]
trainloader = trainloader / val_norm
trainloader = torch.FloatTensor(trainloader)
print trainloader.size()
decode, encode = net(Variable(trainloader))
out_decod = decode.data.numpy()
out_encod = encode.data.numpy()
print(out_decod.shape, out_encod.shape, list_labels.shape)
plt.figure(figsize=(7, 6))
plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)
plt.show()
Explanation: Testing in new datasets
ROQS test
End of explanation
#Loading labeled segmentations
seg_label = genfromtxt('../../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8')
list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations
list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels
ind_ex_err = list_masks[np.where(list_labels)[0]]
ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]
prof_vec_pixe = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector
for ind, mask in enumerate(list_masks):
mask_pn = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask)) #Loading mask
refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction
prof_vec_pixe[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis
ind_rel_cor = np.where(np.logical_not(list_labels))[0]
ind_rel_err = np.where(list_labels)[0]
print "Correct segmentations' vector: ", prof_vec_pixe[ind_rel_cor].shape
print "Erroneous segmentations' vector: ", prof_vec_pixe[ind_rel_err].shape
#for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor):
# plt.figure()
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# ax1.plot(prof_vec_pixe[ind_rel,res_ex,:].T)
# ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex]))
#
# mask_correct = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(ind_ex))
# ax2.axis('off')
# ax2.imshow(mask_correct,'gray',interpolation='none')
#
# plt.show()
plt.figure()
plt.plot(prof_vec_pixe[ind_rel_cor,res_ex,:].T)
plt.title("Correct signatures for res: %f"%(resols[res_ex]))
plt.show()
plt.figure()
plt.plot(prof_vec_pixe[ind_rel_err,res_ex,:].T)
plt.title("Erroneous signatures for res: %f"%(resols[res_ex]))
plt.show()
trainloader = prof_vec_pixe[:,res_chs,:]
trainloader = trainloader / val_norm
trainloader = torch.FloatTensor(trainloader)
print trainloader.size()
decode, encode = net(Variable(trainloader))
out_decod = decode.data.numpy()
out_encod = encode.data.numpy()
print(out_decod.shape, out_encod.shape, list_labels.shape)
plt.figure(figsize=(7, 6))
plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)
plt.show()
Explanation: Pixel-based test
End of explanation |
6,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
Step2: Forward pass
Step3: Forward pass
Step4: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
Step5: Train the network
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
Step6: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
Step7: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
Step8: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
Step9: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment
Step10: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%. | Python Code:
import os
os.chdir(os.getcwd() + '/..')
# Run some setup code for this notebook
import random
import numpy as np
import matplotlib.pyplot as plt
from utils.data_utils import load_CIFAR10
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
from classifiers.neural_net import TwoLayerNet
def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / np.maximum(1e-8, np.abs(x) + np.abs(y)))
# Create a small net and toy data to check implementations.
# set random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
    np.random.seed(0)
    return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
    np.random.seed(1)
    X = 10 * np.random.randn(num_inputs, input_size)
    y = np.array([0, 1, 2, 2, 1])
    return X, y
net = init_toy_model()
X, y = init_toy_data()
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
scores = net.loss(X)
print('scores: ')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small, get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
Explanation: Forward pass: compute scores
End of explanation
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# should be very small, get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
Explanation: Forward pass: compute loss
End of explanation
from utils.gradient_check import eval_numerical_gradient
loss, grads = net.loss(X, y, reg=0.05)
# these should all be less than 1e-8 or so
for param_name in grads:
    f = lambda W: net.loss(X, y, reg=0.05)[0]
    param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
    print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
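As a side note, the numeric gradient check above is just a centered finite difference. The sketch below is for intuition only and is not part of the original assignment; the helper name centered_numeric_gradient is hypothetical, and the course-provided eval_numerical_gradient presumably does essentially the same thing.
def centered_numeric_gradient(f, x, h=1e-5):
    # Estimate df/dx by perturbing one entry of x at a time.
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old_value = x[ix]
        x[ix] = old_value + h
        fxph = f(x)            # f(x + h)
        x[ix] = old_value - h
        fxmh = f(x)            # f(x - h)
        x[ix] = old_value      # restore the original value
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad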
net = init_toy_model()
stats = net.train(X, y, X, y, learning_rate=1e-1, reg=5e-6, num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Train the network
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
# Load the raw CIFAR-10 data
cifar10_dir = 'datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Split the data
num_training = 49000
num_validation = 1000
num_test = 1000
mask = range(num_training, num_training+num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Preprocessing: reshape the image data into rows
X_train = X_train.reshape(X_train.shape[0], -1)
X_val = X_val.reshape(X_val.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)
# Normalize the data: subtract the mean rows
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
print(X_train.shape, X_val.shape, X_test.shape)
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, num_iters=1000, batch_size=200, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
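The update rule described above can be sketched in a few lines. This is only a toy illustration with made-up numbers (the real update lives inside TwoLayerNet.train): a velocity term accumulates the gradient, and the learning rate is multiplied by a decay factor once per epoch.
w = np.array([1.0, -2.0])            # toy parameters
v = np.zeros_like(w)                 # velocity, one entry per parameter
lr, mu, decay = 1e-2, 0.9, 0.95      # learning rate, momentum, decay rate
for epoch in range(3):
    grad = 2 * w                     # gradient of a toy quadratic loss ||w||^2
    v = mu * v - lr * grad           # momentum update
    w = w + v
    lr *= decay                      # exponential learning-rate decay per epoch
    print(epoch, w, lr)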
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from utils.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
    W1 = net.params['W1']
    W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
    plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
    plt.gca().axis('off')
    plt.show()
show_net_weights(net)
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
input_size = 32 * 32 * 3
num_classes = 10
hidden_layer_size = [50]
learning_rates = [3e-4, 9e-4, 1e-3, 3e-3]
regularization_strengths = [7e-1, 8e-1, 9e-1, 1]
results = {}
best_model = None
best_val = -1
for hidden_size in hidden_layer_size:
    for lr in learning_rates:
        for reg in regularization_strengths:
            model = TwoLayerNet(input_size, hidden_size, num_classes, std=1e-3)
            stats = model.train(X_train, y_train, X_val, y_val,
                                learning_rate=lr, learning_rate_decay=0.95,
                                reg=reg, num_iters=5000, batch_size=200, verbose=True)
            train_acc = (model.predict(X_train) == y_train).mean()
            val_acc = (model.predict(X_val) == y_val).mean()
            print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))
            results[(hidden_size, lr, reg)] = (train_acc, val_acc)
            if val_acc > best_val:
                best_val = val_acc
                best_model = model
print()
print()
print('best val_acc: %f' % (best_val))
old_lr = -1
for hidden_size, lr, reg in sorted(results):
    if old_lr != lr:
        old_lr = lr
        print()
    train_acc, val_acc = results[(hidden_size, lr, reg)]
    print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))
for hidden_size, lr, reg in sorted(results):
    train_acc, val_acc = results[(hidden_size, lr, reg)]
    print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f' % (hidden_size, lr, reg, train_acc, val_acc))
# visualize the weights of the best network
show_net_weights(best_model)
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
test_acc = (best_model.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you an extra bonus point for every 1% of accuracy above 52%.
End of explanation |
6,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The multi-armed bandit problem
Step1: The Bandits
Here we define our bandits. For this example we are using a four-armed bandit. The pullBandit function generates a random number from a normal distribution with a mean of 0. The lower the bandit number, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit that will give that positive reward.
Step2: The Agent
The code below establishes our simple neural agent. It consists of a set of values for each of the bandits. Each value is an estimate of the value of the return from choosing the bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
Step3: Training the Agent
We will train our agent by taking actions in our environment, and receiving rewards. Using the rewards and actions, we can learn how to properly update our network in order to more often choose actions that will yield the highest rewards over time.
import tensorflow as tf
import numpy as np
Explanation: The multi-armed bandit problem
End of explanation
bandits = [0.2, 0, -0.2, -5] # Random order
num_bandits = len(bandits)
def pullBandit(bandit):
    # Get a random number
    result = np.random.randn(1)
    if result > bandit:
        # return a positive reward
        return 1
    else:
        return -1
Explanation: The Bandits
Here we define our bandits. For this example we are using a four-armed bandit. The pullBandit function generates a random number from a normal distribution with a mean of 0. The lower the bandit number, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit that will give that positive reward.
End of explanation
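Since pullBandit compares a standard normal draw against the bandit's threshold, the probability of a positive reward is P(Z > bandit). The quick check below is not part of the original notebook and assumes scipy is available:
from scipy.stats import norm
for b in bandits:
    # survival function sf(b) = P(Z > b) for a standard normal variable
    print('bandit threshold %5.1f -> P(reward = +1) = %.3f' % (b, norm.sf(b)))
# The bandit with threshold -5 pays off almost every time, which is why the agent should learn to prefer it.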
tf.reset_default_graph()
weights = tf.Variable(tf.ones([num_bandits]))
chosen_action = tf.argmax(weights, 0)
reward_holder = tf.placeholder(shape = [1], dtype = tf.float32)
action_holder = tf.placeholder(shape = [1], dtype = tf.int32)
responsible_weight = tf.slice(weights, action_holder, [1])
loss = -(tf.log(responsible_weight)* reward_holder)
optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.001)
update = optimizer.minimize(loss)
Explanation: The Agent
The code below establishes our simple neural agent. It consists of a set of values for each of the bandits. Each value is an estimate of the value of the return from choosing the bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
End of explanation
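A quick sanity check of that update, not part of the original notebook: the loss is -reward * log(w), so its derivative with respect to the chosen weight is -reward / w. With a positive reward the gradient is negative, and a gradient-descent step therefore increases the weight of the chosen action (the numbers below are illustrative only):
w, reward, lr = 1.0, 1.0, 0.001
grad = -reward / w              # d/dw of -reward * log(w)
print(w - lr * grad)            # slightly larger than 1.0, i.e. the chosen action is reinforced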
total_episodes = 1000 #Set total number of episodes to train agent on.
total_reward = np.zeros(num_bandits) #Set scoreboard for bandits to 0.
e = 0.1 #Set the chance of taking a random action.
init = tf.initialize_all_variables()
# Launch the tensorflow graph
with tf.Session() as sess:
    sess.run(init)
    i = 0
    while i < total_episodes:
        # Choose either a random action or one from our network.
        if np.random.rand(1) < e:
            action = np.random.randint(num_bandits)
        else:
            action = sess.run(chosen_action)
        reward = pullBandit(bandits[action])  # Get our reward from picking one of the bandits.
        # Update the network.
        _, resp, ww = sess.run([update, responsible_weight, weights], feed_dict={reward_holder: [reward], action_holder: [action]})
        # Update our running tally of scores.
        total_reward[action] += reward
        if i % 50 == 0:
            print("Running reward for the " + str(num_bandits) + " bandits: " + str(total_reward))
        i += 1
print("The agent thinks bandit " + str(np.argmax(ww) + 1) + " is the most promising....")
if np.argmax(ww) == np.argmax(-np.array(bandits)):
    print("...and it was right!")
else:
    print("...and it was wrong!")
Explanation: Training the Agent
We will train our agent by taking actions in our environment, and receiving rewards. Using the rewards and actions, we can learn how to properly update our network in order to more often choose actions that will yield the highest rewards over time.
End of explanation |
6,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Customising static orbit plots
The default styling for plots works pretty well however sometimes you may need to change things. The following will show you how to change the style of your plots and have different types of lines and dots
This is the default plot we will start with
Step1: Here we get hold of the lines list from the OrbitPlotter.plot method this is a list of lines. The first is the orbit line. The second is the current position marker. With the matplotlib lines objects we can start changing the style. First we make the line solid but thin line. Then we change the current position marker to a large hexagon.
More details of the style options for the markers can be found here
Step2: You can also change the style of the plot using the matplotlib axis which can be acquired from the OrbitPlotter()
See the following example that creates a grid, adds a title, and makes the background transparent. To make the changes clearer it goes back to the initial example.
from astropy.time import Time
import matplotlib.pyplot as plt
from poliastro.plotting import StaticOrbitPlotter
from poliastro.frames import Planes
from poliastro.bodies import Earth, Mars, Jupiter, Sun
from poliastro.twobody import Orbit
epoch = Time("2018-08-17 12:05:50", scale="tdb")
plotter = StaticOrbitPlotter(plane=Planes.EARTH_ECLIPTIC)
plotter.plot_body_orbit(Earth, epoch, label="Earth")
plotter.plot_body_orbit(Mars, epoch, label="Mars")
plotter.plot_body_orbit(Jupiter, epoch, label="Jupiter");
epoch = Time("2018-08-17 12:05:50", scale="tdb")
plotter = StaticOrbitPlotter(plane=Planes.EARTH_ECLIPTIC)
earth_plots_traj, earth_plots_pos = plotter.plot_body_orbit(Earth, epoch, label=Earth)
earth_plots_traj[0].set_linestyle("-") # solid line
earth_plots_traj[0].set_linewidth(0.5)
earth_plots_pos.set_marker("H") # Hexagon
earth_plots_pos.set_markersize(15)
mars_plots = plotter.plot_body_orbit(Mars, epoch, label=Mars)
jupiter_plots = plotter.plot_body_orbit(Jupiter, epoch, label=Jupiter)
Explanation: Customising static orbit plots
The default styling for plots works pretty well however sometimes you may need to change things. The following will show you how to change the style of your plots and have different types of lines and dots
This is the default plot we will start with:
End of explanation
epoch = Time("2018-08-17 12:05:50", scale="tdb")
plotter = StaticOrbitPlotter()
earth_plots_t, earth_plots_p = plotter.plot_body_orbit(Earth, epoch, label=Earth)
earth_plots_t[0].set_linestyle("-") # solid line
earth_plots_t[0].set_linewidth(0.5)
earth_plots_p.set_marker("H") # Hexagon
earth_plots_p.set_markersize(15)
mars_plots_t, mars_plots_p = plotter.plot_body_orbit(Mars, epoch, label=Mars)
mars_plots_t[0].set_dashes([0, 1, 0, 1, 1, 0])
mars_plots_t[0].set_linewidth(2)
mars_plots_p.set_marker("D") # Diamond
mars_plots_p.set_markersize(15)
mars_plots_p.set_fillstyle("none")
# make sure this is set if you use fillstyle 'none'
mars_plots_p.set_markeredgewidth(1)
jupiter_plots_t, jupiter_plots_p = plotter.plot_body_orbit(Jupiter, epoch, label=Jupiter)
jupiter_plots_t[0].set_linestyle("") # No line
jupiter_plots_p.set_marker("*") # star
jupiter_plots_p.set_markersize(15)
Explanation: Here we get hold of the lines list from the OrbitPlotter.plot method this is a list of lines. The first is the orbit line. The second is the current position marker. With the matplotlib lines objects we can start changing the style. First we make the line solid but thin line. Then we change the current position marker to a large hexagon.
More details of the style options for the markers can be found here: https://matplotlib.org/2.0.2/api/markers_api.html#module-matplotlib.markers
More details of the style options on lines can be found here: https://matplotlib.org/2.0.2/api/lines_api.html However make sure that you use the set methods rather than just changing the attributes as the methods will force a re-draw of the plot.
Next we will make some changes to the other two orbits.
End of explanation
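The same pattern works for any other Line2D setter, for example recolouring an orbit. The snippet below is a hypothetical variation that is not part of the original example; it assumes plot_body_orbit returns the (trajectory lines, position marker) pair exactly as in the cells above.
plotter_extra = StaticOrbitPlotter()
traj, pos = plotter_extra.plot_body_orbit(Earth, epoch, label="Earth")
traj[0].set_color("navy")   # recolour the trajectory line
pos.set_color("navy")       # keep the position marker in the matching colour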
epoch = Time("2018-08-17 12:05:50", scale="tdb")
fig, ax = plt.subplots()
ax.grid(True)
ax.set_title("Earth, Mars, and Jupiter")
ax.set_facecolor("None")
plotter = StaticOrbitPlotter(ax)
plotter.plot_body_orbit(Earth, epoch, label=Earth)
plotter.plot_body_orbit(Mars, epoch, label=Mars)
plotter.plot_body_orbit(Jupiter, epoch, label=Jupiter)
Explanation: You can also change the style of the plot using the matplotlib axis which can be acquired from the OrbitPlotter()
See the following example that creates a grid, adds a title, and makes the background transparent. To make the changes clearer it goes back to the initial example.
End of explanation |
6,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load Data
We will load the sociopatterns network data for this notebook. From the Konect website
Step1: Hubs
Step2: API Note
Step3: Approach 2
Step4: If you inspect the dictionary closely, you will find that node 51 is the one that has the highest degree centrality, just as we had measured by counting the number of neighbors.
There are other measures of centrality, namely betweenness centrality, flow centrality and load centrality. You can take a look at their definitions on the NetworkX API docs and their cited references. You can also define your own measures if those don't fit your needs, but that is an advanced topic that won't be dealt with here.
The NetworkX API docs that document the centrality measures are here
Step5: Exercise
Before we move on to paths in a network, see if you can use the Circos plot to visualize the network. Order and color the nodes according to the order keyword. (2 min.)
The CircosPlot API needs documentation written; for now, I am providing the following "on-the-spot" docs for you.
To instantiate and draw a CircosPlot
Step7: What can you deduce about the structure of the network, based on this visualization?
Place your own notes here
Step8: If you write an algorithm that runs breadth-first, the recursion pattern is likely to follow what we have done above. If you do a depth-first search (i.e. DFS), the recursion pattern is likely to look a bit different. Take it as a challenge exercise to figure out how a DFS looks like.
Meanwhile... thankfully, NetworkX has a function for us to use, titled has_path, so we don't have to implement this on our own.
Step9: NetworkX also has other shortest path algorithms implemented.
We can build upon these to build our own graph query functions. Let's see if we can trace the shortest path from one node to another.
nx.shortest_path(G, source, target) gives us a list of nodes that exist within one of the shortest paths between the two nodes. (Not all paths are guaranteed to be found.)
Step10: Incidentally, the node list is in order as well.
Exercise
Write a function that extracts the edges in the shortest path between two nodes and puts them into a new graph, and draws it to the screen. It should also return an error if there is no path between the two nodes. (5 min.)
Hint
Step11: Challenge Exercise (at home)
These exercises below are designed to let you become more familiar with manipulating and visualizing subsets of a graph's nodes.
Write a function that extracts only node, its neighbors, and the edges between that node and its neighbors as a new graph. Then, draw the new graph to screen.
Step13: Challenge Exercises (at home)
Let's try some other problems that build on the NetworkX API. Refer to the following for the relevant functions
Step14: Hubs Revisited
If a message has to be passed through the network in the shortest time possible, there may be "bottleneck" nodes through which information must always flow. Such a node has a high betweenness centrality. This is implemented as one of NetworkX's centrality algorithms. Check out the Wikipedia page for a further description.
http
Step15: Exercise
Plot betweenness centrality against degree centrality for the network data. (5 min.)
Step16: Think about it...
From the scatter plot, we can see that the dots don't all fall on the same line. Degree centrality and betweenness centrality don't necessarily correlate. Can you think of scenarios where this is true?
What would be the degree centrality and betweenness centrality of the middle connecting node in the barbell graph below? | Python Code:
# Load the sociopatterns network data.
G = cf.load_sociopatterns_network()
# How many nodes and edges are present?
len(G.nodes()), len(G.edges())
Explanation: Load Data
We will load the sociopatterns network data for this notebook. From the Konect website:
This network describes the face-to-face behavior of people during the exhibition INFECTIOUS: STAY AWAY in 2009 at the Science Gallery in Dublin. Nodes represent exhibition visitors; edges represent face-to-face contacts that were active for at least 20 seconds. Multiple edges between two nodes are possible and denote multiple contacts. The network contains the data from the day with the most interactions.
End of explanation
# Let's find out the number of neighbors that individual #7 has.
len(list(G.neighbors(7)))
Explanation: Hubs: How do we evaluate the importance of some individuals in a network?
Within a social network, there will be certain individuals which perform certain important functions. For example, there may be hyper-connected individuals who are connected to many, many more people. They would be of use in the spreading of information. Alternatively, if this were a disease contact network, identifying them would be useful in stopping the spread of diseases. How would one identify these people?
Approach 1: Neighbors
One way we could compute this is to find out the number of people an individual is connected to. NetworkX lets us do this by giving us a G.neighbors(node) function.
End of explanation
sorted([______________], key=lambda x: __________, reverse=True)
Explanation: API Note: As of NetworkX 2.0, G.neighbors(node) now returns a dict_keyiterator, which means we have to cast them as a list first in order to compute its length.
Exercise
Can you create a ranked list of the importance of each individual, based on the number of neighbors they have? (3 min.)
Hint: One suggested output would be a list of tuples, where the first element in each tuple is the node ID (an integer number), and the second element is the number of neighbors that it has.
Hint: Python's sorted(iterable, key=lambda x:...., reverse=True) function may be of help here.
End of explanation
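One possible way to fill in the sorted(...) template above (a suggested answer, not the only valid one):
sorted([(n, len(list(G.neighbors(n)))) for n in G.nodes()],
       key=lambda x: x[1], reverse=True)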
nx.degree_centrality(G)
# Uncomment the next line to show a truncated version.
# list(nx.degree_centrality(G).items())[0:5]
Explanation: Approach 2: Degree Centrality
The number of other nodes that one node is connected to is a measure of its centrality. NetworkX implements a degree centrality, which is defined as the number of neighbors that a node has normalized to the number of individuals it could be connected to in the entire graph. This is accessed by using nx.degree_centrality(G)
End of explanation
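Concretely, the degree centrality of a node is its degree divided by (n - 1), where n is the number of nodes in the graph. A quick consistency check for node 7 (illustrative only):
n = len(G)
print(nx.degree_centrality(G)[7], len(list(G.neighbors(7))) / (n - 1))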
# Possible Answers:
fig = plt.figure(0)
# Get a list of degree centrality scores for all of the nodes.
degree_centralities = list(____________)
x, y = ecdf(___________)
# Plot the histogram of degree centralities.
plt.scatter(____________)
# Set the plot title.
plt.title('Degree Centralities')
fig = plt.figure(1)
neighbors = [__________]
x, y = ecdf(_________)
plt.scatter(_________)
# plt.yscale('log')
plt.title('Number of Neighbors')
fig = plt.figure(2)
plt.scatter(_____________, ____________, alpha=0.1)
plt.xlabel('Degree Centralities')
plt.ylabel('Number of Neighbors')
Explanation: If you inspect the dictionary closely, you will find that node 51 is the one that has the highest degree centrality, just as we had measured by counting the number of neighbors.
There are other measures of centrality, namely betweenness centrality, flow centrality and load centrality. You can take a look at their definitions on the NetworkX API docs and their cited references. You can also define your own measures if those don't fit your needs, but that is an advanced topic that won't be dealt with here.
The NetworkX API docs that document the centrality measures are here: http://networkx.readthedocs.io/en/networkx-1.11/reference/algorithms.centrality.html?highlight=centrality#module-networkx.algorithms.centrality
Exercises
The following exercises are designed to get you familiar with the concept of "distribution of metrics" on a graph.
Can you create an ECDF of the distribution of degree centralities?
Can you create an ECDF of the distribution of number of neighbors?
Can you create a scatterplot of the degree centralities against number of neighbors?
If I have n nodes, then how many possible edges are there in total, assuming self-edges are allowed? What if self-edges are not allowed?
Exercise Time: 8 minutes.
Here is what an ECDF is (https://en.wikipedia.org/wiki/Empirical_distribution_function).
Hint: You may want to use:
ecdf(list_of_values)
to get the empirical CDF x- and y-values for plotting, and
plt.scatter(x_values, y_values)
Hint: You can access the dictionary .keys() and .values() and cast them as a list.
If you know the Matplotlib API, feel free to get fancy :).
End of explanation
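The ecdf() helper referenced above is assumed to be provided by the workshop's setup code; if it is not available in your environment, a minimal stand-in could look like this:
import numpy as np
def ecdf(data):
    x = np.sort(data)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y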
c = CircosPlot(__________)
c._______()
plt.savefig('images/sociopatterns.png', dpi=300)
Explanation: Exercise
Before we move on to paths in a network, see if you can use the Circos plot to visualize the network. Order and color the nodes according to the order keyword. (2 min.)
The CircosPlot API needs documentation written; for now, I am providing the following "on-the-spot" docs for you.
To instantiate and draw a CircosPlot:
python
c = CircosPlot(G, node_order='node_key', node_color='node_key')
c.draw()
plt.show() # or plt.savefig(...)
Notes:
'node_key' is a key in the node metadata dictionary that the CircosPlot constructor uses for determining the colour, grouping, and ordering of the nodes.
In the following exercise, you may want to use order, which is already encoded on each node in the graph.
End of explanation
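One possible way to fill in the template above, using the order attribute mentioned in the on-the-spot docs (a suggested answer; it assumes nxviz's CircosPlot is imported as in the workshop setup):
c = CircosPlot(G, node_order='order', node_color='order')
c.draw()
plt.savefig('images/sociopatterns.png', dpi=300)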
def path_exists(node1, node2, G):
    """
    This function checks whether a path exists between two nodes (node1, node2)
    in graph G.
    """
    visited_nodes = set()
    queue = [node1]
    # Fill in code below
    for node in queue:
        neighbors = G.neighbors(_______)
        if ______ in neighbors:
            print('Path exists between nodes {0} and {1}'.format(node1, node2))
            return True
        else:
            _________.add(______)
            _______.extend(________)
    print('Path does not exist between nodes {0} and {1}'.format(node1, node2))
    return False
# Test your answer below
def test_path_exists():
    assert path_exists(18, 10, G)
    assert path_exists(22, 51, G)
test_path_exists()
Explanation: What can you deduce about the structure of the network, based on this visualization?
Place your own notes here :)
Paths in a Network
Graph traversal is akin to walking along the graph, node by node, restricted by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure (e.g. connectivity, retrieving the exact relationships) of certain portions of the graph and for finding paths that connect two nodes in the network.
Using the synthetic social network, we will figure out how to answer the following questions:
How long will it take for a message to spread through this group of friends? (making some assumptions, of course)
How do we find the shortest path to get from individual A to individual B?
Shortest Path
Let's say we wanted to find the shortest path between two nodes. How would we approach this? One approach is what one would call a breadth-first search (http://en.wikipedia.org/wiki/Breadth-first_search). While not necessarily the fastest, it is the easiest to conceptualize.
The approach is essentially as such:
Begin with a queue of the starting node.
Add the neighbors of that node to the queue.
If destination node is present in the queue, end.
If destination node is not present, proceed.
For each node in the queue:
Add neighbors of the node to the queue. Check if destination node is present or not.
If destination node is present, end. <!--Credit: @cavaunpeu for finding bug in pseudocode.-->
If destination node is not present, continue.
Exercise
Try implementing this algorithm in a function called path_exists(node1, node2, G). (15 min.)
The function should take in two nodes, node1 and node2, and the graph G that they belong to, and return a Boolean that indicates whether a path exists between those two nodes or not. For convenience, also print out whether a path exists or not between the two nodes.
End of explanation
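For reference, here is one of many valid ways to fill in the blanks above. The name path_exists_reference is used so that it does not clash with your own attempt; it is a plain (unoptimised) breadth-first sketch of the steps listed above.
def path_exists_reference(node1, node2, G):
    visited_nodes = set()
    queue = [node1]
    for node in queue:
        neighbors = list(G.neighbors(node))
        if node2 in neighbors:
            print('Path exists between nodes {0} and {1}'.format(node1, node2))
            return True
        else:
            visited_nodes.add(node)
            # only enqueue neighbors we have not processed yet
            queue.extend([n for n in neighbors if n not in visited_nodes])
    print('Path does not exist between nodes {0} and {1}'.format(node1, node2))
    return False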
nx.has_path(G, 400, 1)
Explanation: If you write an algorithm that runs breadth-first, the recursion pattern is likely to follow what we have done above. If you do a depth-first search (i.e. DFS), the recursion pattern is likely to look a bit different. Take it as a challenge exercise to figure out how a DFS looks like.
Meanwhile... thankfully, NetworkX has a function for us to use, titled has_path, so we don't have to implement this on our own. :-) Check it out here.
End of explanation
nx.shortest_path(G, 4, 400)
Explanation: NetworkX also has other shortest path algorithms implemented.
We can build upon these to build our own graph query functions. Let's see if we can trace the shortest path from one node to another.
nx.shortest_path(G, source, target) gives us a list of nodes that exist within one of the shortest paths between the two nodes. (Not all paths are guaranteed to be found.)
End of explanation
# Possible Answer:
def extract_path_edges(G, source, target):
    # Check to make sure that a path does exist between source and target.
    if _______________________:
        ________ = nx._____________(__________)
        newG = G.subgraph(________)
        return newG
    else:
        raise Exception('Path does not exist between nodes {0} and {1}.'.format(source, target))
newG = extract_path_edges(G, 4, 400)
nx.draw(newG, with_labels=True)
Explanation: Incidentally, the node list is in order as well.
Exercise
Write a function that extracts the edges in the shortest path between two nodes and puts them into a new graph, and draws it to the screen. It should also return an error if there is no path between the two nodes. (5 min.)
Hint: You may want to use G.subgraph(iterable_of_nodes) to extract just the nodes and edges of interest from the graph G. You might want to use the following lines of code somewhere:
newG = G.subgraph(nodes_of_interest)
nx.draw(newG)
newG will be comprised of the nodes of interest and the edges that connect them.
End of explanation
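A possible fill for the blanks in the answer above (one option among several; the suffix _filled keeps it separate from your own version):
def extract_path_edges_filled(G, source, target):
    # Check to make sure that a path does exist between source and target.
    if nx.has_path(G, source, target):
        nodes = nx.shortest_path(G, source, target)
        newG = G.subgraph(nodes)
        return newG
    else:
        raise Exception('Path does not exist between nodes {0} and {1}.'.format(source, target))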
def extract_neighbor_edges(G, node):
    return newG
fig = plt.figure(0)
newG = extract_neighbor_edges(G, 19)
nx.draw(newG, with_labels=True)
def extract_neighbor_edges2(G, node):
    return newG
fig = plt.figure(1)
newG = extract_neighbor_edges2(G, 19)
nx.draw(newG, with_labels=True)
Explanation: Challenge Exercise (at home)
These exercises below are designed to let you become more familiar with manipulating and visualizing subsets of a graph's nodes.
Write a function that extracts only node, its neighbors, and the edges between that node and its neighbors as a new graph. Then, draw the new graph to screen.
End of explanation
# Possible answer to Question 1:
# All we need here is the length of the path.
def compute_transmission_time(G, source, target):
    """Fill in code below."""
    return __________
compute_transmission_time(G, 14, 4)
# Possible answer to Question 2:
# We need to know the length of every single shortest path between every pair of nodes.
# If we don't put a source and target into the nx.shortest_path_length(G) function call, then
# we get a dictionary of dictionaries, where all source-->target-->lengths are shown.
lengths = []
times = []
## Fill in code below ##
plt.figure(0)
plt.bar(Counter(lengths).keys(), Counter(lengths).values())
plt.figure(1)
plt.bar(Counter(times).keys(), Counter(times).values())
Explanation: Challenge Exercises (at home)
Let's try some other problems that build on the NetworkX API. Refer to the following for the relevant functions:
http://networkx.readthedocs.io/en/networkx-1.11/reference/algorithms.shortest_paths.html
If we want a message to go from one person to another person, and we assume that the message takes 1 day for the initial step and 1 additional day per step in the transmission chain (i.e. the first step takes 1 day, the second step takes 2 days etc.), how long will the message take to spread from any two given individuals? Write a function to compute this.
What is the distribution of message spread times from person to person? What about chain lengths?
End of explanation
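A possible sketch for Question 1 (the helper name is hypothetical): if the shortest path has L steps and step k takes k days, then the total time is 1 + 2 + ... + L = L * (L + 1) / 2 days.
def compute_transmission_time_filled(G, source, target):
    L = nx.shortest_path_length(G, source, target)   # number of steps in the shortest path
    return L * (L + 1) // 2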
btws = nx.betweenness_centrality(G, normalized=False)
plt.bar(list(btws.keys()), list(btws.values()))
Explanation: Hubs Revisited
If a message has to be passed through the network in the shortest time possible, there may be "bottleneck" nodes through which information must always flow. Such a node has a high betweenness centrality. This is implemented as one of NetworkX's centrality algorithms. Check out the Wikipedia page for a further description.
http://en.wikipedia.org/wiki/Betweenness_centrality
End of explanation
plt.scatter(__________, ____________)
plt.xlabel('degree')
plt.ylabel('betweeness')
plt.title('centrality scatterplot')
Explanation: Exercise
Plot betweenness centrality against degree centrality for the network data. (5 min.)
End of explanation
nx.draw(nx.barbell_graph(5, 1))
Explanation: Think about it...
From the scatter plot, we can see that the dots don't all fall on the same line. Degree centrality and betweenness centrality don't necessarily correlate. Can you think of scenarios where this is true?
What would be the degree centrality and betweenness centrality of the middle connecting node in the barbell graph below?
End of explanation |
6,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<figure>
<IMG SRC="https
Step1: Load the head observations
The first step in time series analysis is to load a time series of head observations. The time series needs to be stored as a pandas.Series object where the index is the date (and time, if desired). pandas provides many options to load time series data, depending on the format of the file that contains the time series. In this example, measured heads are stored in the csv file head_nb1.csv.
The heads are read from a csv file with the read_csv function of pandas and are then squeezed to create a pandas Series object. To check if you have the correct data type, use the type command as shown below.
Step2: The variable ho is now a pandas Series object. To see the first five lines, type ho.head().
Step3: The series can be plotted as follows
Step4: Load the stresses
The head variation shown above is believed to be caused by two stresses
Step5: Recharge
As a first simple model, the recharge is approximated as the measured rainfall minus the measured potential evaporation.
Step6: First time series model
Once the time series are read from the data files, a time series model can be constructed by going through the following three steps
Step7: The solve function has a number of default options that can be specified with keyword arguments. One of these options is that by default a fit report is printed to the screen. The fit report includes a summary of the fitting procedure, the optimal values obtained by the fitting routine, and some basic statistics. The model contains five parameters | Python Code:
import pandas as pd
import pastas as ps
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <figure>
<IMG SRC="https://raw.githubusercontent.com/pastas/pastas/master/doc/_static/TUD_logo.png" WIDTH=250 ALIGN="right">
</figure>
Time Series Analysis with Pastas
Developed by Mark Bakker
Required files to run this notebook (all available from the data subdirectory):
* Head files: head_nb1.csv, B58C0698001_1.csv, B50H0026001_1.csv, B22C0090001_1.csv, headwell.csv
* Pricipitation files: rain_nb1.csv, neerslaggeg_HEIBLOEM-L_967.txt, neerslaggeg_ESBEEK_831.txt, neerslaggeg_VILSTEREN_342.txt, rainwell.csv
* Evaporation files: evap_nb1.csv, etmgeg_380.txt, etmgeg_260.txt, evapwell.csv
* Well files: well1.csv, well2.csv
* Figure: b58c0698_dino.png
Pastas
Pastas is a computer program for hydrological time series analysis and is available from https://github.com/pastas/pastas (install the development version!). Pastas makes heavy use of pandas timeseries. An introduction to pandas timeseries can be found, for example, here. The Pastas documentation is available here.
End of explanation
ho = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
print('The data type of the oseries is:', type(ho))
Explanation: Load the head observations
The first step in time series analysis is to load a time series of head observations. The time series needs to be stored as a pandas.Series object where the index is the date (and time, if desired). pandas provides many options to load time series data, depending on the format of the file that contains the time series. In this example, measured heads are stored in the csv file head_nb1.csv.
The heads are read from a csv file with the read_csv function of pandas and are then squeezed to create a pandas Series object. To check if you have the correct data type, use the type command as shown below.
End of explanation
ho.head()
Explanation: The variable ho is now a pandas Series object. To see the first five lines, type ho.head().
End of explanation
ho.plot(style='.', figsize=(16, 4))
plt.ylabel('Head [m]');
plt.xlabel('Time [years]');
Explanation: The series can be plotted as follows
End of explanation
rain = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
print('The data type of the rain series is:', type(rain))
evap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)
print('The data type of the evap series is', type(evap))
plt.figure(figsize=(16, 4))
rain.plot(label='rain')
evap.plot(label='evap')
plt.xlabel('Time [years]')
plt.ylabel('Rainfall/Evaporation (m/d)')
plt.legend(loc='best');
Explanation: Load the stresses
The head variation shown above is believed to be caused by two stresses: rainfall and evaporation. Measured rainfall is stored in the file rain_nb1.csv and measured potential evaporation is stored in the file evap_nb1.csv.
The rainfall and potential evaporation are loaded and plotted.
End of explanation
recharge = rain - evap
plt.figure(figsize=(16, 4))
recharge.plot()
plt.xlabel('Time [years]')
plt.ylabel('Recharge (m/d)');
Explanation: Recharge
As a first simple model, the recharge is approximated as the measured rainfall minus the measured potential evaporation.
End of explanation
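An optional sanity check that is not part of the original example: aggregating the daily recharge to yearly totals with plain pandas makes wet and dry years easier to spot.
recharge_yearly = recharge.resample('A').sum()   # 'A' = calendar-year resampling
print(recharge_yearly.head())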
ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.solve(tmin='1985', tmax='2010')
Explanation: First time series model
Once the time series are read from the data files, a time series model can be constructed by going through the following three steps:
Create a Model object by passing it the observed head series. Store your model in a variable so that you can use it later on.
Add the stresses that are expected to cause the observed head variation to the model. In this example, this is only the recharge series. For each stress, a StressModel object needs to be created. Each StressModel object needs three input arguments: the time series of the stress, the response function that is used to simulate the effect of the stress, and a name. In addition, it is recommended to specify the kind of series, which is used to perform a number of checks on the series and fix problems when needed. This checking and fixing of problems (for example, what to substitute for a missing value) depends on the kind of series. In this case, the time series of the stress is stored in the variable recharge, the Gamma function is used to simulate the response, the series will be called 'recharge', and the kind is prec which stands for precipitation. One of the other keyword arguments of the StressModel class is up, which means that a positive stress results in an increase (up) of the head. The default value is True, which we use in this case as a positive recharge will result in the heads going up. Each StressModel object needs to be stored in a variable, after which it can be added to the model.
When everything is added, the model can be solved. The default option is to minimize the sum of the squares of the errors between the observed and modeled heads.
End of explanation
ml.plot();
ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.solve(tmin='1985', tmax='2010', solver=ps.LeastSquares)
ml = ps.Model(ho)
sm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')
ml.add_stressmodel(sm1)
ml.set_vary('recharge_n', 0)
ml.solve(tmin='1985', tmax='2010', solver=ps.LeastSquares)
ml.plot();
Explanation: The solve function has a number of default options that can be specified with keyword arguments. One of these options is that by default a fit report is printed to the screen. The fit report includes a summary of the fitting procedure, the optimal values obtained by the fitting routine, and some basic statistics. The model contains five parameters: the parameters $A$, $n$, and $a$ of the Gamma function used as the response function for the recharge, the parameter $d$, which is a constant base level, and the parameter $\alpha$ of the noise model, which will be explained a little later on in this notebook.
The results of the model are plotted below.
End of explanation |
6,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This Jupyter notebook should be used in conjunction with pypeoutgoing.ipynb.
Run through the following cells...
Step1: Then run the following cell and send some values from pypeoutgoing.ipynb running in another window. They will be sent over the "pype". Watch them printed below once they are received
Step2: Once you have finished experimenting, you can close the pype | Python Code:
import os, sys
sys.path.append(os.path.abspath('../../main/python'))
import thalesians.tsa.pypes as pypes
pype = pypes.Pype(pypes.Direction.INCOMING, name='EXAMPLE', port=5758); pype
Explanation: This Jupyter notebook should be used in conjunction with pypeoutgoing.ipynb.
Run through the following cells...
End of explanation
for x in pype: print(x)
Explanation: Then run the following cell and send some values from pypeoutgoing.ipynb running in another window. They will be sent over the "pype". Watch them printed below once they are received:
End of explanation
pype.close()
Explanation: Once you have finished experimenting, you can close the pype:
End of explanation |
6,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Template for test
Step1: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using K acetylation.
Training data is from CUCKOO group and benchmarks are from dbptm.
Step2: Chemical Vector | Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
Explanation: Template for test
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
    print("y", i)
    y = Predictor()
    y.load_data(file="Data/Training/k_acetylation.csv")
    y.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=0)
    y.supervised_training("mlp_adam")
    y.benchmark("Data/Benchmarks/acet.csv", "K")
    del y
    print("x", i)
    x = Predictor()
    x.load_data(file="Data/Training/k_acetylation.csv")
    x.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=1)
    x.supervised_training("mlp_adam")
    x.benchmark("Data/Benchmarks/acet.csv", "K")
    del x
Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using K acetylation.
Training data is from CUCKOO group and benchmarks are from dbptm.
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
    print("y", i)
    y = Predictor()
    y.load_data(file="Data/Training/k_acetylation.csv")
    y.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=0)
    y.supervised_training("mlp_adam")
    y.benchmark("Data/Benchmarks/acet.csv", "K")
    del y
    print("x", i)
    x = Predictor()
    x.load_data(file="Data/Training/k_acetylation.csv")
    x.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=1)
    x.supervised_training("mlp_adam")
    x.benchmark("Data/Benchmarks/acet.csv", "K")
    del x
Explanation: Chemical Vector
End of explanation |
6,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manifold learning
One of the weaknesses of PCA is that it cannot detect non-linear features. A family of algorithms that avoids this problem are the manifold learning algorithms. A dataset that is often used in this context is the S-curve
Step1: This is actually a 2D dataset (the unrolled S), but it has been embedded in a 3D space in such a way that PCA is not able to recover the original dataset
Step2: As you can see, since it is a linear method, PCA has found two directions of maximum variability, but it has lost a lot of the variance in the data by projecting the S directly onto a hyperplane. The manifold learning algorithms, available in the sklearn.manifold package, aim to discover the manifold that contains the data (in this case, a two-dimensional manifold). Let us apply, for example, the Isomap method
Step3: Manifold learning for the digits dataset
We can apply this kind of algorithm to high-dimensional datasets, such as the handwritten digits dataset
Step4: If we visualize the dataset using a linear technique such as PCA, we already saw that we obtain some information about the structure of the data
Step5: However, we can use non-linear techniques, which in this case will lead us to a better visualization. Let us apply the t-SNE manifold learning method
from sklearn.datasets import make_s_curve
X, y = make_s_curve(n_samples=1000)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], X[:, 2], c=y)
ax.view_init(10, -60);
Explanation: Manifold learning
One of the weaknesses of PCA is that it cannot detect non-linear features. A family of algorithms that avoids this problem are the manifold learning algorithms. A dataset that is often used in this context is the S-curve:
End of explanation
from sklearn.decomposition import PCA
X_pca = PCA(n_components=2).fit_transform(X)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y);
Explanation: This is actually a 2D dataset (the unrolled S), but it has been embedded in a 3D space in such a way that PCA is not able to recover the original dataset
End of explanation
from sklearn.manifold import Isomap
iso = Isomap(n_neighbors=15, n_components=2)
X_iso = iso.fit_transform(X)
plt.scatter(X_iso[:, 0], X_iso[:, 1], c=y);
Explanation: As you can see, since it is a linear method, PCA has found two directions of maximum variability, but it has lost a lot of the variance in the data by projecting the S directly onto a hyperplane. The manifold learning algorithms, available in the sklearn.manifold package, aim to discover the manifold that contains the data (in this case, a two-dimensional manifold). Let us apply, for example, the Isomap method:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
fig, axes = plt.subplots(2, 5, figsize=(10, 5),
                         subplot_kw={'xticks': (), 'yticks': ()})
for ax, img in zip(axes.ravel(), digits.images):
    ax.imshow(img, interpolation="none", cmap="gray")
Explanation: Manifold learning for the digits dataset
We can apply this kind of algorithm to high-dimensional datasets, such as the handwritten digits dataset:
End of explanation
# Build a PCA model
pca = PCA(n_components=2)
pca.fit(digits.data)
# Transform the digits onto the first two principal components
digits_pca = pca.transform(digits.data)
colors = ["#476A2A", "#7851B8", "#BD3430", "#4A2D4E", "#875525",
          "#A83683", "#4E655E", "#853541", "#3A3120", "#535D8E"]
plt.figure(figsize=(10, 10))
plt.xlim(digits_pca[:, 0].min(), digits_pca[:, 0].max() + 1)
plt.ylim(digits_pca[:, 1].min(), digits_pca[:, 1].max() + 1)
for i in range(len(digits.data)):
    # Plot the digits as text instead of using scatter
    plt.text(digits_pca[i, 0], digits_pca[i, 1], str(digits.target[i]),
             color=colors[digits.target[i]],
             fontdict={'weight': 'bold', 'size': 9})
plt.xlabel("first principal component")
plt.ylabel("second principal component");
Explanation: If we visualize the dataset using a linear technique such as PCA, we already saw that we obtain some information about the structure of the data:
End of explanation
from sklearn.manifold import TSNE
tsne = TSNE(random_state=42)
# we use fit_transform instead of fit:
digits_tsne = tsne.fit_transform(digits.data)
plt.figure(figsize=(10, 10))
plt.xlim(digits_tsne[:, 0].min(), digits_tsne[:, 0].max() + 1)
plt.ylim(digits_tsne[:, 1].min(), digits_tsne[:, 1].max() + 1)
for i in range(len(digits.data)):
    # actually plot the digits as text instead of using scatter
    plt.text(digits_tsne[i, 0], digits_tsne[i, 1], str(digits.target[i]),
             color=colors[digits.target[i]],
             fontdict={'weight': 'bold', 'size': 9})
Explanation: However, we can use non-linear techniques, which in this case will lead us to a better visualization. Let us apply the t-SNE manifold learning method:
End of explanation |
6,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: <a href="https
Step3: Step by Step Code Order
#1. How to find the order of differencing (d) in ARIMA model
p is the order of the AR term
q is the order of the MA term
d is the number of differencing steps required to make the time series stationary (the I term)
Step4: A p-value less than 0.05 (typically ≤ 0.05) is statistically significant. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability of observing data like ours if the null hypothesis were true (i.e. if the results were due to chance alone). In that case we reject the null hypothesis and accept the alternative.
Our time series' ADF p-value is not significant. Why does that matter? For the ADF test the null hypothesis is that the series is non-stationary (it has a unit root), so a large p-value means we cannot reject non-stationarity and the series still needs differencing.
Example
Step5: For the above series, the time series reaches stationarity with two orders of differencing. But to begin with we use one order of differencing, as a conservative choice. Let me explain that
Step6: #3. How to find the order of the MA term (q)
Step7: 4. How to build the ARIMA Model
Step8: Notice here that the coefficient of the MA2 term is close to zero (-0.0010) and the p-value in the ‘P>|z|’ column is highly insignificant (0.9). Ideally it should be well below 0.05 for the respective term to be significant.
5. Plot residual errors
Let’s plot the residuals to ensure there are no patterns (that is, look for constant mean and variance).
Step9: 6. Plot Predict Actual vs Fitted
When you set dynamic=False, in-sample lagged values are used for prediction.
That is, the model uses everything up to the previous observation to make the next prediction. This can make the fitted forecast and the actuals look artificially good.
Step10: 7. Now Create Training and Test Validation
We can see that ARIMA is adequately forecasting the seasonal pattern in the series. In terms of model performance we look at the RMSE (root mean squared error) and the MFE (mean forecast error), and this model is also best in terms of the lowest BIC.
Step11: 8. Some scores and performance
The 20 forecast observations depend on the train/test split: fc, se, conf = fitted.forecast(20, alpha=0.05) # 95% conf
#sign:max: MAXBOX8: 03/02/2021 18:34:41
# optimal moving average OMA for market index signals ARIMA study- Max Kleiner
# v2 shell argument forecast days - 4 lines compare - ^GDAXI for DAX
# pip install pandas-datareader
# C:\maXbox\mX46210\DataScience\princeton\AB_NYC_2019.csv AB_NYC_2019.csv
#https://medium.com/abzuai/the-qlattice-a-new-machine-learning-model-you-didnt-know-you-needed-c2e037878cd
#https://www.kaggle.com/dgomonov/data-exploration-on-nyc-airbnb 41
#https://www.kaggle.com/duygut/airbnb-nyc-price-prediction
#https://www.machinelearningplus.com/time-series/arima-model-time-series-forecasting-python/
import numpy as np
import matplotlib.pyplot as plt
import sys
import numpy as np, pandas as pd
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.stattools import adfuller, acf
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.figsize':(9,7), 'figure.dpi':120})
# Import data
wwwus = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/wwwusage.csv', names=['value'], header=0)
import pandas as pd
# Accuracy metrics
def forecast_accuracy(forecast, actual):
    mape = np.mean(np.abs(forecast - actual)/np.abs(actual))  # MAPE
    me = np.mean(forecast - actual)  # ME
    mae = np.mean(np.abs(forecast - actual))  # MAE
    mpe = np.mean((forecast - actual)/actual)  # MPE
    rmse = np.mean((forecast - actual)**2)**.5  # RMSE
    corr = np.corrcoef(forecast, actual)[0,1]  # corr
    mins = np.amin(np.hstack([forecast[:,None],
                              actual[:,None]]), axis=1)
    maxs = np.amax(np.hstack([forecast[:,None],
                              actual[:,None]]), axis=1)
    minmax = 1 - np.mean(mins/maxs)  # minmax
    acf1 = acf(forecast - actual)[1]  # ACF1 of the forecast errors (was acf(fc-test), which relied on globals)
    return({'mape':mape, 'me':me, 'mae': mae,
            'mpe': mpe, 'rmse':rmse, 'acf1':acf1,
            'corr':corr, 'minmax':minmax})
#wwwus = pd.read_csv(r'C:\maXbox\mX46210\DataScience\princeton\1022dataset.txt', \
# names=['value'], header=0)
print(wwwus.head(10).T) #Transposed for column overview
#1. How to find the order of differencing (d) in ARIMA model
result = adfuller(wwwus.value.dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
#
# Original Series
fig, axes = plt.subplots(3, 2, sharex=True)
axes[0, 0].plot(wwwus.value); axes[0, 0].set_title('Orig Series')
plot_acf(wwwus.value, ax=axes[0, 1], lags=60)
# 1st Differencing
axes[1, 0].plot(wwwus.value.diff()); axes[1, 0].set_title('1st Order Differencing')
plot_acf(wwwus.value.diff().dropna(), ax=axes[1, 1], lags=60)
# 2nd Differencing
axes[2, 0].plot(wwwus.value.diff().diff()); axes[2, 0].set_title('2nd Order Differencing')
plot_acf(wwwus.value.diff().diff().dropna(), ax=axes[2, 1], lags=60)
plt.show()
#2. How to find the order of the AR term (p)
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,5))
plot_pacf(wwwus.value.diff().dropna(), ax=axes[1], lags=100)
plt.show()
#3. How to find the order of the MA term (q)
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,1.2))
plot_acf(wwwus.value.diff().dropna(), ax=axes[1] , lags=60)
plt.show()
#
#4. How to build the ARIMA Model
model = ARIMA(wwwus.value, order=(1,1,2))
model_fit = model.fit(disp=0)
print('first fit ',model_fit.summary())
# Plot residual errors
residuals = pd.DataFrame(model_fit.resid)
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, ax = plt.subplots(1,2)
residuals.plot(title="Residuals", ax=ax[0])
residuals.plot(kind='kde', title='Density', ax=ax[1])
plt.show()
#5. Plot Predict Actual vs Fitted
# When you set dynamic=False in-sample lagged values are used for prediction.
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
model_fit.plot_predict(dynamic=False)
plt.show()
#That is, the model gets trained up until the previous value to make next prediction. This can make a fitted forecast and actuals look artificially good.
# Now Create Training and Test
train = wwwus.value[:80]
test = wwwus.value[80:]
#model = ARIMA(train, order=(3, 2, 1))
model = ARIMA(train, order=(2, 2, 3))
fitted = model.fit(disp=-1)
print('second fit ',fitted.summary())
# Forecast
fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf
# Make as pandas series
fc_series = pd.Series(fc, index=test.index)
lower_series = pd.Series(conf[:,0], index=test.index)
upper_series = pd.Series(conf[:,1], index=test.index)
# Plot
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, label='training')
plt.plot(test, label='actual')
plt.plot(fc_series, label='forecast')
plt.fill_between(lower_series.index, lower_series, upper_series,
color='k', alpha=.15)
plt.title('maXbox4 Forecast vs Actuals ARIMA')
plt.legend(loc='upper left', fontsize=8)
plt.show()
print(forecast_accuracy(fc, test.values))
print('Around 5% MAPE implies a model is about 95% accurate in predicting next 20 observations.')
Explanation: <a href="https://colab.research.google.com/github/maxkleiner/maXbox4/blob/master/ARIMA_Predictor2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
How to find the order of (p,d,q) in ARIMA timeseries model
A time series is a sequence where a metric is recorded over regular time intervals.
Inspired by
https://www.machinelearningplus.com/time-series/arima-model-time-series-forecasting-python/
End of explanation
#1. How to find the order of differencing (d) in ARIMA model
result = adfuller(wwwus.value.dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
Explanation: Step by Step Code Order
#1. How to find the order of differencing (d) in ARIMA model
p is the order of the AR term
q is the order of the MA term
d is the number of differencing required to make the time series stationary as I term
End of explanation
# Original Series
fig, axes = plt.subplots(3, 2, sharex=True)
axes[0, 0].plot(wwwus.value); axes[0, 0].set_title('Orig Series')
plot_acf(wwwus.value, ax=axes[0, 1], lags=60)
# 1st Differencing
axes[1, 0].plot(wwwus.value.diff()); axes[1, 0].set_title('1st Order Differencing')
plot_acf(wwwus.value.diff().dropna(), ax=axes[1, 1], lags=60)
# 2nd Differencing
axes[2, 0].plot(wwwus.value.diff().diff()); axes[2, 0].set_title('2nd Order Differencing')
plot_acf(wwwus.value.diff().diff().dropna(), ax=axes[2, 1], lags=60)
plt.show()
Explanation: A p-value less than 0.05 (typically ≤ 0.05) is statistically significant. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability that the null hypothesis is correct (and that the results arose by chance). Therefore, we reject the null hypothesis and accept the alternative hypothesis (for the ADF test this means the series is stationary).
Our time series' test is not significant, why?
Example:
ADF Statistic: -2.464240
p-value: 0.124419
0-Hypothesis non stationary
0.12 > 0.05 -> not significant, therefore we cannot reject the null hypothesis, so our time series is non-stationary and we have to difference it to make it stationary.
The purpose of differencing it is to make the time series stationary.
End of explanation
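# Added illustration (not in the original script): one way to pick d is to repeat the
# ADF test on successive differences until the p-value drops below 0.05.
series = wwwus.value
for d in range(0, 3):
    p_value = adfuller(series.dropna())[1]
    print('d=%d  ADF p-value=%.4f' % (d, p_value))
    if p_value < 0.05:
        break
    series = series.diff()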
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,5))
plot_pacf(wwwus.value.diff().dropna(), ax=axes[1], lags=100)
plt.show()
Explanation: For the above series, the time series reaches stationarity with two orders of differencing. But we use for the beginning 1 order as a conservative part. Let me explain that:
D>2 is not allowed in statsmodels.tsa.arima_model!
Maybe d>2 is not allowed means our best bet is to start simple, check if integrating once grants stationarity. If so, we can fit a simple ARIMA model and examine the ACF of the residual values to get a better feel about what orders of differencing to use. Also a drawback, if we integrate more than two times (d>2), we lose n observations, one for each integration. And one of the most common errors in ARIMA modeling is to "overdifference" the series and end up adding extra AR or MA terms to undo the forecast damage, so the author (I assume) decides to raise this exception.
#2. How to find the order of the AR term (p)
End of explanation
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(wwwus.value.diff()); axes[0].set_title('1st Differencing')
axes[1].set(ylim=(0,1.2))
plot_acf(wwwus.value.diff().dropna(), ax=axes[1] , lags=90)
plt.show()
Explanation: #3. How to find the order of the MA term (q)
End of explanation
model = ARIMA(wwwus.value, order=(1,1,2))
model_fit = model.fit(disp=0)
print('first fit ',model_fit.summary())
Explanation: 4. How to build the ARIMA Model
End of explanation
residuals = pd.DataFrame(model_fit.resid)
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
fig, ax = plt.subplots(1,2)
residuals.plot(title="Residuals", ax=ax[0])
residuals.plot(kind='kde', title='Density', ax=ax[1])
plt.show()
Explanation: Notice here the coefficient of the MA2 term is close to zero (-0.0010 ) and the P-Value in ‘P>|z|’ column is highly insignificant (0.9). It should ideally be less than 0.05 for the respective X to be significant << 0.05.
5. Plot residual errors
Let’s plot the residuals to ensure there are no patterns (that is, look for constant mean and variance).
End of explanation
plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
model_fit.plot_predict(dynamic=False)
plt.show()
Explanation: 6. Plot Predict Actual vs Fitted
When you set dynamic=False in-sample lagged values are used for prediction.
That is, the model gets trained up to the previous value in order to make the next prediction. This can make a fitted forecast and actuals look artificially good.
End of explanation
train = wwwus.value[:80]
test = wwwus.value[80:]
#model = ARIMA(train, order=(3, 2, 1))
model = ARIMA(train, order=(2, 2, 3))
fitted = model.fit(disp=-1)
print('second fit ',fitted.summary())
# Forecast
fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf
# Make as pandas series
fc_series = pd.Series(fc, index=test.index)
lower_series = pd.Series(conf[:,0], index=test.index)
upper_series = pd.Series(conf[:,1], index=test.index)
# Plot
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, label='training')
plt.plot(test, label='actual')
plt.plot(fc_series, label='forecast')
plt.fill_between(lower_series.index, lower_series, upper_series,
color='k', alpha=.15)
plt.title('maXbox4 Forecast vs Actuals ARIMA')
plt.legend(loc='upper left', fontsize=8)
plt.show()
Explanation: 7. Now Create Training and Test Validation
We can see that ARIMA is adequately forecasting the seasonal pattern in the series. In terms of model performance, this fit is also the best with respect to the RMSE (root mean squared error) and MFE (mean forecast error), and it has the lowest BIC.
End of explanation
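# Added illustration (not in the original script): the order used above can also be
# cross-checked by fitting a few candidate (p,d,q) orders and comparing their BIC.
for candidate_order in [(1, 1, 2), (2, 2, 3), (3, 2, 1)]:
    try:
        candidate_fit = ARIMA(train, order=candidate_order).fit(disp=-1)
        print(candidate_order, 'BIC:', candidate_fit.bic)
    except Exception as err:
        print(candidate_order, 'failed:', err)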
print(forecast_accuracy(fc, test.values))
print('Around 5% MAPE implies a model is about 95% accurate in predicting next 20 observations.')
Explanation: 8. Some scores and performance
The 20 observations depends on the train/test set fc,se,conf = fitted.forecast(20, alpha=0.05) # 95% conf
End of explanation |
6,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: In order to get you familiar with graph ideas,
I have deliberately chosen to steer away from
the more pedantic matters
of loading graph data to and from disk.
That said, the following scenario will eventually happen,
where a graph dataset lands on your lap,
and you'll need to load it in memory
and start analyzing it.
Thus, we're going to go through graph I/O,
specifically the APIs on how to convert
graph data that comes to you
into that magical NetworkX object G.
Let's get going!
Graph Data as Tables
Let's recall what we've learned in the introductory chapters.
Graphs can be represented using two sets
Step2: Firstly, we need to unzip the dataset
Step3: Now, let's load in both tables.
First is the stations table
Step4: Now, let's load in the trips table.
Step5: Graph Model
Given the data, if we wished to use a graph as a data model
for the number of trips between stations,
then naturally, nodes would be the stations,
and edges would be trips between them.
This graph would be directed,
as one could have more trips from station A to B
and less in the reverse.
With this definition,
we can begin graph construction!
Create NetworkX graph from pandas edgelist
NetworkX provides an extremely convenient way
to load data from a pandas DataFrame
Step6: Inspect the graph
Once the graph is in memory,
we can inspect it to get out summary graph statistics.
Step7: You'll notice that the edge metadata have been added correctly
Step8: However, the node metadata is not present
Step9: Annotate node metadata
We have rich station data on hand,
such as the longitude and latitude of each station,
and it would be a pity to discard it,
especially when we can potentially use it as part of the analysis
or for visualization purposes.
Let's see how we can add this information in.
Firstly, recall what the stations dataframe looked like
Step10: The id column gives us the node ID in the graph,
so if we set id to be the index,
if we then also loop over each row,
we can treat the rest of the columns as dictionary keys
and values as dictionary values,
and add the information into the graph.
Let's see this in action.
Step11: Now, our node metadata should be populated.
Step13: In nxviz, a GeoPlot object is available
that allows you to quickly visualize
a graph that has geographic data.
However, being matplotlib-based,
it is going to be quickly overwhelmed
by the sheer number of edges.
As such, we are going to first filter the edges.
Exercise
Step14: Visualize using GeoPlot
nxviz provides a GeoPlot object
that lets you quickly visualize geospatial graph data.
A note on geospatial visualizations
Step15: Does that look familiar to you? Looks quite a bit like Chicago, I'd say
Step16: And just to show that it can be loaded back into memory
Step18: Exercise
Step19: Other text formats
CSV files and pandas DataFrames
give us a convenient way to store graph data,
and if possible, do insist with your data collaborators
that they provide you with graph data that are in this format.
If they don't, however, no sweat!
After all, Python is super versatile.
In this ebook, we have loaded data in
from non-CSV sources,
sometimes by parsing text files raw,
sometimes by treating special characters as delimiters in a CSV-like file,
and sometimes by resorting to parsing JSON.
You can see other examples of how we load data
by browsing through the source file of load_data.py
and studying how we construct graph objects.
Solutions
The solutions to this chapter's exercises are below | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo(id="3sJnTpeFXZ4", width="100%")
Explanation: Introduction
End of explanation
from pyprojroot import here
Explanation: In order to get you familiar with graph ideas,
I have deliberately chosen to steer away from
the more pedantic matters
of loading graph data to and from disk.
That said, the following scenario will eventually happen,
where a graph dataset lands on your lap,
and you'll need to load it in memory
and start analyzing it.
Thus, we're going to go through graph I/O,
specifically the APIs on how to convert
graph data that comes to you
into that magical NetworkX object G.
Let's get going!
Graph Data as Tables
Let's recall what we've learned in the introductory chapters.
Graphs can be represented using two sets:
Node set
Edge set
Node set as tables
Let's say we had a graph with 3 nodes in it: A, B, C.
We could represent it in plain text, computer-readable format:
csv
A
B
C
Suppose the nodes also had metadata.
Then, we could tag on metadata as well:
csv
A, circle, 5
B, circle, 7
C, square, 9
Does this look familiar to you?
Yes, node sets can be stored in CSV format,
with one of the columns being node ID,
and the rest of the columns being metadata.
Edge set as tables
If, between the nodes, we had 4 edges (this is a directed graph),
we can also represent those edges in plain text, computer-readable format:
csv
A, C
B, C
A, B
C, A
And let's say we also had other metadata,
we can represent it in the same CSV format:
csv
A, C, red
B, C, orange
A, B, yellow
C, A, green
If you've been in the data world for a while,
this should not look foreign to you.
Yes, edge sets can be stored in CSV format too!
Two of the columns represent the nodes involved in an edge,
and the rest of the columns represent the metadata.
Combined Representation
In fact, one might also choose to combine
the node set and edge set tables together in a merged format:
n1, n2, colour, shape1, num1, shape2, num2
A, C, red, circle, 5, square, 9
B, C, orange, circle, 7, square, 9
A, B, yellow, circle, 5, circle, 7
C, A, green, square, 9, circle, 5
In this chapter, the datasets that we will be looking at
are going to be formatted in both ways.
Let's get going.
Dataset
We will be working with the Divvy bike sharing dataset.
Divvy is a bike sharing service in Chicago.
Since 2013, Divvy has released their bike sharing dataset to the public.
The 2013 dataset is comprised of two files:
- Divvy_Stations_2013.csv, containing the stations in the system, and
- DivvyTrips_2013.csv, containing the trips.
Let's dig into the data!
End of explanation
import zipfile
import os
from nams.load_data import datasets
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir(datasets):
print('Unzipping the divvy_2013.zip file in the datasets folder.')
with zipfile.ZipFile(datasets / "divvy_2013.zip","r") as zip_ref:
zip_ref.extractall(datasets)
Explanation: Firstly, we need to unzip the dataset:
End of explanation
import pandas as pd
stations = pd.read_csv(datasets / 'divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], encoding='utf-8')
stations.head()
stations.describe()
Explanation: Now, let's load in both tables.
First is the stations table:
End of explanation
trips = pd.read_csv(datasets / 'divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'])
trips.head()
import janitor
trips_summary = (
trips
.groupby(["from_station_id", "to_station_id"])
.count()
.reset_index()
.select_columns(
[
"from_station_id",
"to_station_id",
"trip_id"
]
)
.rename_column("trip_id", "num_trips")
)
trips_summary.head()
Explanation: Now, let's load in the trips table.
End of explanation
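# Added note: trips_summary above relies on pyjanitor; if that library is not installed,
# an equivalent pandas-only construction (a sketch producing the same columns) is:
trips_summary_alt = (
    trips.groupby(["from_station_id", "to_station_id"], as_index=False)["trip_id"]
    .count()
    .rename(columns={"trip_id": "num_trips"})
)
trips_summary_alt.head()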
import networkx as nx
G = nx.from_pandas_edgelist(
df=trips_summary,
source="from_station_id",
target="to_station_id",
edge_attr=["num_trips"],
create_using=nx.DiGraph
)
Explanation: Graph Model
Given the data, if we wished to use a graph as a data model
for the number of trips between stations,
then naturally, nodes would be the stations,
and edges would be trips between them.
This graph would be directed,
as one could have more trips from station A to B
and less in the reverse.
With this definition,
we can begin graph construction!
Create NetworkX graph from pandas edgelist
NetworkX provides an extremely convenient way
to load data from a pandas DataFrame:
End of explanation
print(nx.info(G))
Explanation: Inspect the graph
Once the graph is in memory,
we can inspect it to get out summary graph statistics.
End of explanation
list(G.edges(data=True))[0:5]
Explanation: You'll notice that the edge metadata have been added correctly: we have recorded in there the number of trips between stations.
End of explanation
list(G.nodes(data=True))[0:5]
Explanation: However, the node metadata is not present:
End of explanation
stations.head()
Explanation: Annotate node metadata
We have rich station data on hand,
such as the longitude and latitude of each station,
and it would be a pity to discard it,
especially when we can potentially use it as part of the analysis
or for visualization purposes.
Let's see how we can add this information in.
Firstly, recall what the stations dataframe looked like:
End of explanation
for node, metadata in stations.set_index("id").iterrows():
for key, val in metadata.items():
G.nodes[node][key] = val
Explanation: The id column gives us the node ID in the graph,
so if we set id to be the index,
if we then also loop over each row,
we can treat the rest of the columns as dictionary keys
and values as dictionary values,
and add the information into the graph.
Let's see this in action.
End of explanation
list(G.nodes(data=True))[0:5]
Explanation: Now, our node metadata should be populated.
End of explanation
def filter_graph(G, minimum_num_trips):
    """Filter the graph such that
    only edges that have minimum_num_trips or more
    are present.
    """
G_filtered = G.____()
for _, _, _ in G._____(data=____):
if d[___________] < ___:
G_________.___________(_, _)
return G_filtered
from nams.solutions.io import filter_graph
G_filtered = filter_graph(G, 50)
Explanation: In nxviz, a GeoPlot object is available
that allows you to quickly visualize
a graph that has geographic data.
However, being matplotlib-based,
it is going to be quickly overwhelmed
by the sheer number of edges.
As such, we are going to first filter the edges.
Exercise: Filter graph edges
Leveraging what you know about how to manipulate graphs,
now try filtering edges.
Hint: NetworkX graph objects can be deep-copied using G.copy():
python
G_copy = G.copy()
Hint: NetworkX graph objects also let you remove edges:
python
G.remove_edge(node1, node2) # does not return anything
End of explanation
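# Added sketch: one way the blanks in the exercise above could be filled in, assuming the
# edge attribute is named "num_trips" as constructed earlier. The official nams solution
# imported above may differ in details.
def filter_graph_sketch(G, minimum_num_trips):
    G_filtered = G.copy()
    for u, v, d in G.edges(data=True):
        if d["num_trips"] < minimum_num_trips:
            G_filtered.remove_edge(u, v)
    return G_filtered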
import nxviz as nv
c = nv.geo(G_filtered, node_color_by="dpcapacity")
Explanation: Visualize using GeoPlot
nxviz provides a GeoPlot object
that lets you quickly visualize geospatial graph data.
A note on geospatial visualizations:
As the creator of nxviz,
I would recommend using proper geospatial packages
to build custom geospatial graph viz,
such as pysal.
That said, nxviz can probably do what you need
for a quick-and-dirty view of the data.
End of explanation
nx.write_gpickle(G, "/tmp/divvy.pkl")
Explanation: Does that look familiar to you? Looks quite a bit like Chicago, I'd say :)
Jesting aside, this visualization does help illustrate
that the majority of trips occur between stations that are
near the city center.
Pickling Graphs
Since NetworkX graphs are Python objects,
the canonical way to save them is by pickling them.
You can do this using:
python
nx.write_gpickle(G, file_path)
Here's an example in action:
End of explanation
G_loaded = nx.read_gpickle("/tmp/divvy.pkl")
Explanation: And just to show that it can be loaded back into memory:
End of explanation
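# Added note: nx.write_gpickle / nx.read_gpickle were removed in NetworkX 3.0;
# with recent versions the standard pickle module does the same job (a sketch):
import pickle
with open("/tmp/divvy.pkl", "wb") as f:
    pickle.dump(G, f)
with open("/tmp/divvy.pkl", "rb") as f:
    G_from_pickle = pickle.load(f)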
def test_graph_integrity(G):
    """Test integrity of raw Divvy graph."""
# Your solution here
pass
from nams.solutions.io import test_graph_integrity
test_graph_integrity(G)
Explanation: Exercise: checking graph integrity
If you get a graph dataset as a pickle,
you should always check it against reference properties
to make sure of its data integrity.
Write a function that tests that the graph
has the correct number of nodes and edges inside it.
End of explanation
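# Added sketch: a minimal integrity test with placeholder reference counts,
# since the true node/edge counts depend on the exact dataset version.
def test_graph_integrity_sketch(G, expected_nodes, expected_edges):
    assert len(G.nodes()) == expected_nodes
    assert len(G.edges()) == expected_edges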
from nams.solutions import io
import inspect
print(inspect.getsource(io))
Explanation: Other text formats
CSV files and pandas DataFrames
give us a convenient way to store graph data,
and if possible, do insist with your data collaborators
that they provide you with graph data that are in this format.
If they don't, however, no sweat!
After all, Python is super versatile.
In this ebook, we have loaded data in
from non-CSV sources,
sometimes by parsing text files raw,
sometimes by treating special characters as delimiters in a CSV-like file,
and sometimes by resorting to parsing JSON.
You can see other examples of how we load data
by browsing through the source file of load_data.py
and studying how we construct graph objects.
Solutions
The solutions to this chapter's exercises are below
End of explanation |
6,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Crime
Boilerplate
Step1: Read the Data
Step2: ## Transforming the Original Data Using a Mapped Dictionary
Step3: Linear Regression through Ordinary Least Squares
Step4: So only in District Dorchester, Mattapan, Roxbury, and HTU have a statistical significance as far as shooting is concerned
Step5: Residual Testing
Step6: Linear Regression through Ordinary Least Squares based on Main Crime Code
Step7: Shows that class_04XX and class_01XX are quite significant in shooting | Python Code:
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.formula.api import logit
import pylab as pl
import seaborn as sns
mpl.style.use('fivethirtyeight')
%matplotlib inline
Explanation: Crime
Boilerplate
End of explanation
df=pd.read_csv('Sample_Crime_Incident_Reports_Cleaned01.csv',low_memory=False).dropna()
df[:5]
df.describe()
Explanation: Read the Data
End of explanation
#creating a remapped shooting variable
df['Shoot_Status']=df['Shooting'].map({'No':0,'Yes':1}).astype(int)
df[:5]
main_crimecode_dummies=pd.get_dummies(df['MAIN_CRIMECODE'], prefix='class').iloc[:, 1:]
list(main_crimecode_dummies.columns.values)
reporting_district_dummies=pd.get_dummies(df['REPTDISTRICT'], prefix='class').iloc[:, 1:]
list(reporting_district_dummies.columns.values)
data=df.join([main_crimecode_dummies,reporting_district_dummies])
data[:5]
Explanation: ## Transforming the Original Data Using a Mapped Dictionary
End of explanation
model = ols(data=data, formula='Shoot_Status~class_Dorchester+class_Downtown+class_Downtown5+class_EastBoston+class_HydePark+class_JamaicaPlain+class_Mattapan+class_Roxbury+class_SouthBoston+class_SouthEnd+class_WestRoxbury')
result = model.fit()
result.summary()
residuals=result.resid
sns.distplot(residuals)
# Checking the residuals
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
sns.distplot(residuals, ax=axes[0]);
sm.qqplot(residuals, fit=True, line='s', ax=axes[1]);
Explanation: Linear Regression through Ordinary Least Squares
End of explanation
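# Added sketch: Shoot_Status is a 0/1 outcome, so a logistic regression may be better
# specified than OLS; the logit helper imported above uses the same formula interface.
logit_model = logit(data=data, formula='Shoot_Status~class_Dorchester+class_Mattapan+class_Roxbury')
logit_result = logit_model.fit()
logit_result.summary()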
model = ols(data=data, formula='Shoot_Status~class_Dorchester+class_Mattapan+class_Roxbury')
result = model.fit()
result.summary()
Explanation: So only the districts Dorchester, Mattapan, Roxbury, and HTU show statistical significance as far as shooting is concerned
End of explanation
residuals=result.resid
sns.distplot(residuals)
Explanation: Residual Testing
End of explanation
model = ols(data=data, formula='Shoot_Status~class_01xx +class_03xx +class_04xx +class_05CB +class_05RB +class_06MV +class_06xx +class_07RV+class_07xx +class_08xx +class_09xx +class_10xx +class_11xx +class_12xx +class_13xx + class_14xx +class_15xx +class_16xx +class_18xx +class_20xx +class_21xx +class_22xx +class_24xx +class_32GUN +class_Argue +class_Arrest +class_BENoProp +class_Ballist +class_Bomb + class_BurgTools +class_Explos +class_FIRE +class_Gather +class_Harass +class_Harbor +class_Hazardous +class_InvPer +class_InvProp +class_InvVeh +class_LICViol +class_Labor +class_Landlord +class_MVAcc+class_Manslaug +class_MedAssist +class_OTHER +class_PRISON +class_PersLoc +class_PersMiss +class_PhoneCalls +class_Plates +class_PropDam +class_PropFound +class_PropLost +class_PubDrink +class_Restrain +class_Runaway +class_SearchWarr +class_Service +class_SexReg +class_SkipFare +class_TOWED +class_TRESPASS +class_VAL')
result = model.fit()
result.summary()
Explanation: Linear Regression through Ordinary Least Squares based on Main Crime Code
End of explanation
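# Added sketch: the long formula above can also be assembled from the dummy column names,
# assuming they are all valid patsy terms (as they are here).
crime_terms = ' + '.join(main_crimecode_dummies.columns)
model_auto = ols(data=data, formula='Shoot_Status~' + crime_terms)
model_auto.fit().summary()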
residuals=result.resid
sns.distplot(residuals)
# Checking the residuals
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))
sns.distplot(residuals, ax=axes[0]);
sm.qqplot(residuals, fit=True, line='s', ax=axes[1]);
Explanation: Shows that class_04XX and class_01XX are quite significant in shooting
End of explanation |
6,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Registration Initialization
Step1: Loading Data
Note
Step2: Register using centered transform initializer (assumes orientation is similar)
Step3: Visually evaluate the results using a linked cursor approach, a mouse click in one image will create the "corresponding" point in the other image. Don't be fooled by clicking on the "ribs" (symmetry is the bane of registration).
Step4: Register using sampling of the parameter space
As we want to account for significant orientation differences due to erroneous patient position (HFS...) we evaluate the similarity measure at locations corresponding to the various orientation differences. This can be done in two ways which will be illustrated below
Step5: Why loop if you can process in parallel
As the metric evaluations are independent of each other, we can easily replace looping with parallelization.
Step6: Visually evaluate the registration results
Step7: Exhaustive optimizer
The exhaustive optimizer evaluates the similarity metric on a grid in parameter space centered on the parameters of the initial transform. This grid is defined using three elements
Step8: Run the registration and visually evaluate the results
Step9: Exhaustive optimizer - an exploration-exploitation view
In the example above we used the exhaustive optimizer to obtain an initial value for our transformation parameter values. This approach has two limitations
Step10: Register using manual initialization
When all else fails, a human in the loop will almost always be able to robustly initialize the registration.
In the example below we identify corresponding points to compute an initial rigid transformation. You will need to click on corresponding points in each of the images, going back and forth between them. The interface will "force" you to create point pairs (you will not be able to add multiple points in one image).
Note
Step11: Run the registration and visually evaluate the results
Step12: Register using manual initialization - slice to volume
In some cases, initialization may be more critical than others. For example, when registering a 2D slice to a 3D image. This requires careful initialization because the potential for converging to a local minimum is much larger than when we register two corresponding volumes. Note that SimpleITK does not support 2D/3D registration. A slice to volume registration is a 3D/3D registration. The slice is a "very thin" 3D image, one of the axes has a size of one.
In the next cells we explore the basic slice to volume registration scenario.
Step13: Find an initial translation for the image by scrolling through the moving image volume.
Step14: Now that we have our initial transformation, mapping the slice to volume, we can run the final registration step. As we utilize multi-resolution registration, it includes image smoothing when creating the pyramid. This will cause the SmoothingRecursiveGaussianImageFilter to throw an exception as it expects images to have at least four pixels in each dimension and we only have one pixel in the z axis. In the code below we remedy this by expanding our image in the z-direction using the ExpandImageFilter. This filter utilizes interpolation to increase the image's pixel count while maintaining its spatial extent. This means that when we increase the number of pixels the spacing between them will decrease. As a reminder, an image's physical region extends 0.5*spacing beyond the first and last pixel locations. | Python Code:
import SimpleITK as sitk
# If the environment variable SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT is set, this will override the ReadImage
# function so that it also resamples the image to a smaller size (testing environment is memory constrained).
%run setup_for_testing
import os
import numpy as np
from ipywidgets import interact, fixed
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
%matplotlib notebook
import gui
# This is the registration configuration which we use in all cases. The only parameter that we vary
# is the initial_transform.
def multires_registration(fixed_image, moving_image, initial_transform):
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(
learningRate=1.0,
numberOfIterations=100,
estimateLearningRate=registration_method.Once,
)
registration_method.SetOptimizerScalesFromPhysicalShift()
registration_method.SetInitialTransform(initial_transform, inPlace=False)
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
final_transform = registration_method.Execute(fixed_image, moving_image)
print(f"Final metric value: {registration_method.GetMetricValue()}")
print(
f"Optimizer's stopping condition, {registration_method.GetOptimizerStopConditionDescription()}"
)
return (final_transform, registration_method.GetMetricValue())
Explanation: Registration Initialization: We Have to Start Somewhere
Initialization is a critical aspect of most registration algorithms, given that most algorithms are formulated as an iterative optimization problem. It affects both the runtime and convergence to the correct minimum.
Rule of thumb: use as much prior information (external to the image content) as you can to initialize your registration.
Common initializations strategies when no prior information is available:
1. Do nothing (hope springs eternal) - initialize using the identity transformation.
2. CenteredTransformInitializer (GEOMETRY or MOMENTS) - translation based initialization, align the geometric centers of the images or the intensity based centers of mass of the image contents.
3. Use a sampling of the parameter space (useful mostly for low dimensional parameter spaces).
4. Manual initialization - allow an operator to control transformation parameter settings directly using a GUI with visual feedback or identify multiple corresponding points in the two images and compute an initial rigid or affine transformation.
In many cases we perform initialization in an automatic manner by making assumptions with regard to the contents of the image and the imaging protocol. For instance, if we expect that images were acquired with the patient in a known orientation we can align the geometric centers of the two volumes or the center of mass of the image contents if the anatomy is not centered in the image (this is what we previously did in this example).
When the orientation is not known, or is known but incorrect, this approach will not yield a reasonable initial estimate for the registration.
When working with clinical images, the DICOM tags define the orientation and position of the anatomy in the volume. The tags of interest are:
<ul>
<li> (0020|0032) Image Position (Patient) : coordinates of the the first transmitted voxel. </li>
<li>(0020|0037) Image Orientation (Patient): directions of first row and column in 3D space. </li>
<li>(0018|5100) Patient Position: Patient placement on the table
<ul>
<li> Head First Prone (HFP)</li>
<li> Head First Supine (HFS)</li>
<li> Head First Decubitus Right (HFDR)</li>
<li> Head First Decubitus Left (HFDL)</li>
<li> Feet First Prone (FFP)</li>
<li> Feet First Supine (FFS)</li>
<li> Feet First Decubitus Right (FFDR)</li>
<li> Feet First Decubitus Left (FFDL)</li>
</ul>
</li>
</ul>
The patient position is manually entered by the CT/MR operator and thus can be erroneous (HFP instead of FFP will result in a $180^o$ orientation error). In this notebook we use data acquired using an abdominal phantom which made it hard to identify the "head" and "feet" side, resulting in an incorrect value entered by the technician.
End of explanation
data_directory = os.path.dirname(fdata("CIRS057A_MR_CT_DICOM/readme.txt"))
fixed_series_ID = "1.2.840.113619.2.290.3.3233817346.783.1399004564.515"
moving_series_ID = "1.3.12.2.1107.5.2.18.41548.30000014030519285935000000933"
reader = sitk.ImageSeriesReader()
fixed_image = sitk.ReadImage(
reader.GetGDCMSeriesFileNames(data_directory, fixed_series_ID), sitk.sitkFloat32
)
moving_image = sitk.ReadImage(
reader.GetGDCMSeriesFileNames(data_directory, moving_series_ID), sitk.sitkFloat32
)
# To provide a reasonable display we need to window/level the images. By default we could have used the intensity
# ranges found in the images [SimpleITK's StatisticsImageFilter], but these are not the best values for viewing.
# Try using the full intensity range in the GUI to see that it is not a good choice for visualization.
ct_window_level = [932, 180]
mr_window_level = [286, 143]
gui.MultiImageDisplay(
image_list=[fixed_image, moving_image],
title_list=["fixed image", "moving image"],
figure_size=(8, 4),
window_level_list=[ct_window_level, mr_window_level],
intensity_slider_range_percentile=[0, 100],
);
Explanation: Loading Data
Note: While the images are of the same phantom, they were acquired at different times and the fiducial markers visible on the phantom are not in the same locations.
Scroll through the data to gain an understanding of the spatial relationship along the viewing (z) axis.
End of explanation
initial_transform = sitk.CenteredTransformInitializer(
fixed_image,
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY,
)
final_transform, _ = multires_registration(fixed_image, moving_image, initial_transform)
Explanation: Register using centered transform initializer (assumes orientation is similar)
End of explanation
gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
figure_size=(8, 4),
known_transformation=final_transform,
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
Explanation: Visually evaluate the results using a linked cursor approach, a mouse click in one image will create the "corresponding" point in the other image. Don't be fooled by clicking on the "ribs" (symmetry is the bane of registration).
End of explanation
# Dictionary with all the orientations we will try. We omit the identity (x=0, y=0, z=0) as we always use it. This
# set of rotations is arbitrary. For a complete grid coverage we would naively have 64 entries
# (0, pi/2, pi, 1.5pi for each angle), but we know better, there are only 24 unique rotation matrices defined by
# these parameter value combinations.
all_orientations = {
"x=0, y=0, z=180": (0.0, 0.0, np.pi),
"x=0, y=180, z=0": (0.0, np.pi, 0.0),
"x=0, y=180, z=180": (0.0, np.pi, np.pi),
}
# Registration framework setup.
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
# Evaluate the similarity metric using the rotation parameter space sampling, translation remains the same for all.
initial_transform = sitk.Euler3DTransform(
sitk.CenteredTransformInitializer(
fixed_image,
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY,
)
)
registration_method.SetInitialTransform(initial_transform, inPlace=False)
best_orientation = (0.0, 0.0, 0.0)
best_similarity_value = registration_method.MetricEvaluate(fixed_image, moving_image)
# Iterate over all other rotation parameter settings.
for key, orientation in all_orientations.items():
initial_transform.SetRotation(*orientation)
registration_method.SetInitialTransform(initial_transform)
current_similarity_value = registration_method.MetricEvaluate(
fixed_image, moving_image
)
if current_similarity_value < best_similarity_value:
best_similarity_value = current_similarity_value
best_orientation = orientation
print("best orientation is: " + str(best_orientation))
Explanation: Register using sampling of the parameter space
As we want to account for significant orientation differences due to erroneous patient position (HFS...) we evaluate the similarity measure at locations corresponding to the various orientation differences. This can be done in two ways which will be illustrated below:
<ul>
<li>Use the ImageRegistrationMethod.MetricEvaluate() method.</li>
<li>Use the Exhaustive optimizer.
</ul>
The former approach is more computationally intensive as it constructs and configures a metric object each time it is invoked. It is therefore more appropriate for use if the set of parameter values we want to evaluate are not on a rectilinear grid in the parameter space. The latter approach is appropriate if the set of parameter values are on a rectilinear grid, in which case the approach is more computationally efficient.
In both cases we use the CenteredTransformInitializer to obtain the initial translation.
MetricEvaluate
To use the MetricEvaluate method we create a ImageRegistrationMethod, set its metric and interpolator. We then iterate over all parameter settings, set the initial transform and evaluate the metric. The minimal similarity measure value corresponds to the best parameter settings.
End of explanation
from multiprocessing.pool import ThreadPool
from functools import partial
# This function evaluates the metric value in a thread safe manner
def evaluate_metric(current_rotation, tx, f_image, m_image):
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
current_transform = sitk.Euler3DTransform(tx)
current_transform.SetRotation(*current_rotation)
registration_method.SetInitialTransform(current_transform)
res = registration_method.MetricEvaluate(f_image, m_image)
return res
p = ThreadPool(len(all_orientations) + 1)
orientations_list = [(0, 0, 0)] + list(all_orientations.values())
all_metric_values = p.map(
partial(
evaluate_metric, tx=initial_transform, f_image=fixed_image, m_image=moving_image
),
orientations_list,
)
best_orientation = orientations_list[np.argmin(all_metric_values)]
print("best orientation is: " + str(best_orientation))
initial_transform.SetRotation(*best_orientation)
final_transform, _ = multires_registration(fixed_image, moving_image, initial_transform)
Explanation: Why loop if you can process in parallel
As the metric evaluations are independent of each other, we can easily replace looping with parallelization.
End of explanation
gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
figure_size=(8, 4),
known_transformation=final_transform,
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
Explanation: Visually evaluate the registration results:
End of explanation
initial_transform = sitk.CenteredTransformInitializer(
fixed_image,
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY,
)
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
# The order of parameters for the Euler3DTransform is [angle_x, angle_y, angle_z, t_x, t_y, t_z]. The parameter
# sampling grid is centered on the initial_transform parameter values, that are all zero for the rotations. Given
# the number of steps and their length and optimizer scales we have:
# angle_x = 0
# angle_y = -pi, 0, pi
# angle_z = -pi, 0, pi
registration_method.SetOptimizerAsExhaustive(
numberOfSteps=[0, 1, 1, 0, 0, 0], stepLength=np.pi
)
registration_method.SetOptimizerScales([1, 1, 1, 1, 1, 1])
# Perform the registration in-place so that the initial_transform is modified.
registration_method.SetInitialTransform(initial_transform, inPlace=True)
registration_method.Execute(fixed_image, moving_image)
print("best initial transformation is: " + str(initial_transform.GetParameters()))
Explanation: Exhaustive optimizer
The exhaustive optimizer evaluates the similarity metric on a grid in parameter space centered on the parameters of the initial transform. This grid is defined using three elements:
1. numberOfSteps.
2. stepLength.
3. optimizer scales.
The similarity metric is evaluated on the resulting parameter grid:
initial_parameters ± numberOfSteps × stepLength × optimizerScales
Example:
1. numberOfSteps=[1,0,2,0,0,0]
2. stepLength = np.pi
3. optimizer scales = [1,1,0.5,1,1,1]
Will perform 15 metric evaluations ($\displaystyle\prod_i (2*numberOfSteps[i] + 1)$).
The parameter values for the second parameter and the last three parameters are the initial parameter values. The parameter values for the first parameter are $v_{init}-\pi, v_{init}, v_{init}+\pi$ and the parameter values for the third parameter are $v_{init}-\pi, v_{init}-\pi/2, v_{init}, v_{init}+\pi/2, v_{init}+\pi$.
The transformation corresponding to the lowest similarity metric is returned.
Using this approach we have superfluous evaluations, due to the symmetry of the grid in parameter space. On the other hand this method is often faster than evaluating the metric using the MetricEvaluate method (due to the setup and tear down time).
End of explanation
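# Added check (illustration only): the number of metric evaluations implied by the grid
# is the product of 2*numberOfSteps + 1 over all parameters.
number_of_steps = [0, 1, 1, 0, 0, 0]
print(np.prod([2 * s + 1 for s in number_of_steps]))  # 9 evaluations for the grid above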
final_transform, _ = multires_registration(fixed_image, moving_image, initial_transform)
gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
figure_size=(8, 4),
known_transformation=final_transform,
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
Explanation: Run the registration and visually evaluate the results:
End of explanation
#
# Exploration step.
#
def start_observer():
global metricvalue_parameters_list
metricvalue_parameters_list = []
def iteration_observer(registration_method):
metricvalue_parameters_list.append(
(
registration_method.GetMetricValue(),
registration_method.GetOptimizerPosition(),
)
)
initial_transform = sitk.CenteredTransformInitializer(
fixed_image,
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY,
)
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
# The order of parameters for the Euler3DTransform is [angle_x, angle_y, angle_z, t_x, t_y, t_z]. The parameter
# sampling grid is centered on the initial_transform parameter values, that are all zero for the rotations. Given
# the number of steps and their length and optimizer scales we have:
# angle_x = 0
# angle_y = -pi, 0, pi
# angle_z = -pi, 0, pi
registration_method.SetOptimizerAsExhaustive(
numberOfSteps=[0, 1, 1, 0, 0, 0], stepLength=np.pi
)
registration_method.SetOptimizerScales([1, 1, 1, 1, 1, 1])
# We don't really care if transformation is modified in place or not, we will select the k
# best transformations from the parameters_metricvalue_list.
registration_method.SetInitialTransform(initial_transform, inPlace=True)
registration_method.AddCommand(
sitk.sitkIterationEvent, lambda: iteration_observer(registration_method)
)
registration_method.AddCommand(sitk.sitkStartEvent, start_observer)
_ = registration_method.Execute(fixed_image, moving_image)
#
# Exploitation step.
#
# Sort our list from most to least promising solutions (low to high metric values).
metricvalue_parameters_list.sort(key=lambda x: x[0])
# We exploit the k_most_promising parameter value settings.
k_most_promising = min(3, len(metricvalue_parameters_list))
final_results = []
for metricvalue, parameters in metricvalue_parameters_list[0:k_most_promising]:
initial_transform.SetParameters(parameters)
final_results.append(
multires_registration(fixed_image, moving_image, initial_transform)
)
final_transform, _ = min(final_results, key=lambda x: x[1])
gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
figure_size=(8, 4),
known_transformation=final_transform,
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
Explanation: Exhaustive optimizer - an exploration-exploitation view
In the example above we used the exhaustive optimizer to obtain an initial value for our transformation parameter values. This approach has two limitations:
1. It assumes we only want to sample the parameter space using a continuous regular grid. This is not appropriate if we want to explore multiple discontinuous regions of the parameter space (e.g. tx = [0.5, 1.0, 1.5] and tx = [ 12.0, 12.5, 13.0] and...).
2. It assumes that the parameter values corresponding to the best metric value from our sample will enable convergence to the desired optimum. To quote George Gershwin - "It Ain't Necessarily So".
If we consider the exhaustive optimizer in the context of the exploration-exploitation heuristic framework, we first search the parameter space for promising solution(s) and then refine the solution(s), we can readily address these limitations:
Explore multiple discontinuous regions using a single or multiple instances of the ExhaustiveOptimizer or use the MetricEvaluate approach if the samples do not define a regular grid.
Obtain all of the parameter space samples and exploit (run final registration) for each of the k most promising solutions.
Below we implement the latter: (a) We explore the parameter space using the exhaustive optimizer's callback mechanism to obtain all of the parameter values and their corresponding metric values. (b) We exploit the k_most_promising parameter values to obtain the final transformation.
NOTE: This is a heuristic and only increases the probability of convergence to the desired optimum. In some cases this approach may be detrimental when the parameter values we seek correspond to a local optimum and not a global one.
End of explanation
point_acquisition_interface = gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
figure_size=(8, 4),
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
# Get the manually specified points and compute the transformation.
fixed_image_points, moving_image_points = point_acquisition_interface.get_points()
# FOR TESTING: previously localized points
fixed_image_points = [
(24.062587103074605, 14.594981536981521, -58.75),
(6.178716135332678, 53.93949766601378, -58.75),
(74.14383149714774, -69.04462737237648, -76.25),
(109.74899278747029, -14.905272533666817, -76.25),
]
moving_image_points = [
(4.358707846364581, 60.46357110706131, -71.53120422363281),
(24.09010295252645, 98.21840981673873, -71.53120422363281),
(-52.11888008581127, -26.57984635768439, -58.53120422363281),
(-87.46150681392184, 28.73904765153219, -58.53120422363281),
]
fixed_image_points_flat = [c for p in fixed_image_points for c in p]
moving_image_points_flat = [c for p in moving_image_points for c in p]
init_transform = sitk.VersorRigid3DTransform(
sitk.LandmarkBasedTransformInitializer(
sitk.VersorRigid3DTransform(), fixed_image_points_flat, moving_image_points_flat
)
)
# Convert from Versor to Euler, as Versor does not always work well with the optimization.
# Internally the optimization sets new parameter values without any constraints, and the versor
# normalizes its vector component if it is greater than 1-epsilon.
initial_transform = sitk.Euler3DTransform()
initial_transform.SetCenter(init_transform.GetCenter())
initial_transform.SetMatrix(init_transform.GetMatrix())
initial_transform.SetTranslation(init_transform.GetTranslation())
print("manual initial transformation is: " + str(initial_transform.GetParameters()))
Explanation: Register using manual initialization
When all else fails, a human in the loop will almost always be able to robustly initialize the registration.
In the example below we identify corresponding points to compute an initial rigid transformation. You will need to click on corresponding points in each of the images, going back and forth between them. The interface will "force" you to create point pairs (you will not be able to add multiple points in one image).
Note:
1. There is no correspondence between the fiducial markers on the phantom.
2. After localizing points in the GUI below, comment out the hard-coded point data, two cells below, which is there FOR TESTING.
End of explanation
final_transform, _ = multires_registration(fixed_image, moving_image, initial_transform)
gui.RegistrationPointDataAquisition(
fixed_image,
moving_image,
figure_size=(8, 4),
known_transformation=final_transform,
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
Explanation: Run the registration and visually evaluate the results:
End of explanation
fixed_image_slice = fixed_image[
:, :, fixed_image.GetDepth() // 2 : fixed_image.GetDepth() // 2 + 1
]
# We know our image is rotated by 180 degrees around the y axis, and the rotation center
# should be the center of the image.
fixed_image_center = fixed_image_slice.TransformContinuousIndexToPhysicalPoint(
[(sz - 1) / 2 for sz in fixed_image_slice.GetSize()]
)
initial_transform = sitk.Euler3DTransform(fixed_image_center, 0, 3.141592653589793, 0)
Explanation: Register using manual initialization - slice to volume
In some cases, initialization may be more critical than others. For example, when registering a 2D slice to a 3D image. This requires careful initialization because the potential for converging to a local minimum is much larger than when we register two corresponding volumes. Note that SimpleITK does not support 2D/3D registration. A slice to volume registration is a 3D/3D registration. The slice is a "very thin" 3D image, one of the axes has a size of one.
In the next cells we explore the basic slice to volume registration scenario.
End of explanation
slice_index_interface = gui.MultiImageDisplay(
[fixed_image_slice, moving_image],
title_list=["fixed image", "moving image"],
figure_size=(8, 4),
window_level_list=[ct_window_level, mr_window_level],
intensity_slider_range_percentile=[0, 100],
);
# Align the center of the fixed image slice to the center of the user selected moving image slice
selected_slice_index = slice_index_interface.slider_list[1].value
# FOR TESTING: next line sets a value for the selected slice, comment out when you actually want to work.
selected_slice_index = moving_image.GetDepth() // 2
moving_image_slice_center = moving_image.TransformContinuousIndexToPhysicalPoint(
[
(moving_image.GetWidth() - 1) / 2,
(moving_image.GetHeight() - 1) / 2,
selected_slice_index,
]
)
initial_transform.SetTranslation(
[m - f for m, f in zip(moving_image_slice_center, fixed_image_center)]
)
gui.RegistrationPointDataAquisition(
fixed_image_slice,
moving_image,
figure_size=(8, 4),
known_transformation=initial_transform,
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
Explanation: Find an initial translation for the image by scrolling through the moving image volume.
End of explanation
expanded_fixed_image_slice = sitk.Expand(fixed_image_slice, [1, 1, 4], sitk.sitkLinear)
# Don't trust us, confirm that the expanded_fixed_image_slice and fixed_image_slice occupy the same
# region in physical space.
final_transform, _ = multires_registration(
expanded_fixed_image_slice, moving_image, initial_transform
)
gui.RegistrationPointDataAquisition(
fixed_image_slice,
moving_image,
figure_size=(8, 4),
known_transformation=final_transform,
fixed_window_level=ct_window_level,
moving_window_level=mr_window_level,
);
Explanation: Now that we have our initial transformation, mapping the slice to volume, we can run the final registration step. As we utilize multi-resolution registration, it includes image smoothing when creating the pyramid. This will cause the SmoothingRecursiveGaussianImageFilter to throw an exception as it expects images to have at least four pixels in each dimension and we only have one pixel in the z axis. In the code below we remedy this by expanding our image in the z-direction using the ExpandImageFilter. This filter utilizes interpolation to increase the image's pixel count while maintaining its spatial extent. This means that when we increase the number of pixels the spacing between them will decrease. As a reminder, an image's physical region extends 0.5*spacing beyond the first and last pixel locations.
End of explanation |
6,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algo - Computing areas and other calculations
This is the story of one loop, then another, and finally a pair of loops, or even a triple.
Step1: Problem statement
Exercise 1
Step2: 1.1 Using the constant pi
1.2 Without using pi or any other function
So only additions, multiplications and divisions. Loops are allowed too.
Exercise 2
Step3: 1.2. computing the area of a circle without pi or any other function
One possible approach is probabilistic
Step4: 2. random sort
Step5: And if i > j, nothing happens, which is a pity.
Step6: The result is not necessarily better, but it is faster to obtain since one test is skipped.
And what if we stop when five consecutive random draws lead to no swap in the array?
Step7: 3. petits calculs parfaits pour une machine | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Algo - Computing areas and other calculations
This is the story of one loop, then another, and finally a pair of loops, or even a triple.
End of explanation
def surface_cerle(r):
# ...
return 0.
Explanation: Problem statement
Exercise 1: computing the area of a circle
We want to write a function that computes the area of a circle of radius r.
End of explanation
from math import pi
def surface_cercle(r):
return r ** 2 * pi
surface_cercle(5)
Explanation: 1.1 Using the constant pi
1.2 Without using pi or any other function
So only additions, multiplications and divisions. Loops are allowed too.
Exercise 2: random sort
We implement the following sort (is it really a sort, by the way?):
In an array T, we draw two random elements i < j; if T[i] > T[j], we swap them.
We stop after n draws without any swap.
Exercise 3: small computations perfectly suited to a machine
We assume the previous array has size n=10, and the previous algorithm stops after n draws without swaps. How should n be chosen so that the array ends up sorted in 90% of the cases...
Answers
1.1. computing the area of a circle with pi
End of explanation
import numpy
def estimation_pi(n=10000):
    rnd = numpy.random.rand(n, 2)  # n random points in the unit square
norme = rnd[:, 0] ** 2 + rnd[:, 1] ** 2
dedans = norme <= 1
dedans_entier = dedans.astype(numpy.int64)
return dedans_entier.sum() / dedans.shape[0] * 4
pi = estimation_pi()
pi
def surface_cercle_pi(r, pi):
return r ** 2 * pi
surface_cercle_pi(5, pi)
Explanation: 1.2. computing the area of a circle without pi or any other function
One possible approach is probabilistic: we build an estimator of $\pi$ by drawing points uniformly at random in a square of side 1. If the point $P_i$ falls inside the quarter circle inscribed in the square, we count 1, otherwise 0. Hence:
$$\frac{1}{n} \sum_{i=1}^n \mathbb{1}_{\Vert P_i \Vert^2 \leqslant 1} \rightarrow \frac{\pi}{4}$$
This ratio converges to the probability that the point $P_i$ falls inside the quarter circle, which equals the ratio of the two areas: $\frac{\pi r^2 / 4}{r^2} = \frac{\pi}{4}$ with $r=1$.
End of explanation
def tri_alea(T, n=1000):
T = T.copy()
for i in range(0, n):
i, j = numpy.random.randint(0, len(T), 2)
if i < j and T[i] > T[j]:
T[i], T[j] = T[j], T[i]
return T
tableau = [1, 3, 4, 5, 3, 2, 7, 11, 10, 9, 8, 0]
tri_alea(tableau)
Explanation: 2. Random sort
End of explanation
def tri_alea2(T, n=1000):
T = T.copy()
for i in range(0, n):
i = numpy.random.randint(0, len(T) - 1)
j = numpy.random.randint(i + 1, len(T))
if T[i] > T[j]:
T[i], T[j] = T[j], T[i]
return T
tableau = [1, 3, 4, 5, 3, 2, 7, 11, 10, 9, 8, 0]
tri_alea2(tableau)
Explanation: And if i > j, nothing happens, which is a pity: the draw is simply wasted.
End of explanation
def tri_alea3(T, c=100):
T = T.copy()
compteur = 0
while compteur < c:
i = numpy.random.randint(0, len(T) - 1)
j = numpy.random.randint(i + 1, len(T))
if T[i] > T[j]:
T[i], T[j] = T[j], T[i]
compteur = 0
else:
compteur += 1
return T
tableau = [1, 3, 4, 5, 3, 2, 7, 11, 10, 9, 8, 0]
tri_alea3(tableau)
Explanation: The result is not necessarily better, but it is obtained faster since one test fewer is performed.
What if we stop once five consecutive random draws (more generally c of them) produce no swap in the array?
End of explanation
def est_trie(T):
for i in range(1, len(T)):
if T[i] < T[i-1]:
return False
return True
def eval_c(n, c, N=100):
compteur = 0
for i in range(N):
T = numpy.random.randint(0, 20, n)
T2 = tri_alea3(T, c=c)
if est_trie(T2):
compteur += 1
return compteur * 1. / N
eval_c(10, 100)
from tqdm import tqdm  # to display a progress bar
cs = []
ecs = []
for c in tqdm(range(1, 251, 25)):
cs.append(c)
ecs.append(eval_c(10, c=c))
ecs[-5:]
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(cs, ecs)
plt.plot([0, max(cs)], [0.9, 0.9], '--');
Explanation: 3. Small calculations, perfect for a machine
End of explanation |
6,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: ODE to joy
Jens Hahn - 27/05/2016
Continuous deterministic modelling with differential equations
Numerical integration
Every numerical procedure for solving an ODE is based on a discretisation of the system and on the difference quotient. It is easy to understand: just read $\frac{\text{d}\vec{x}}{\text{d}t}$ as $\frac{\Delta \vec{x}}{\Delta t}$. You can then multiply both sides of the equation by $\Delta t$ and obtain an equation describing the change of your variables during a certain time interval $\Delta t$
Step3: To test the accuracy, we run the simulation with 2 different time grids, one with a step size of 0.01 and one with step size 0.001
Step5: Heun's method
If you want to increase the accuracy of your method, you could use the trapezoidal rule you know from the approximation of integrals. The second point is of course missing, but here you could use Euler's method!
As we will see, this method is a huge improvement compared to Euler's method!
$$\Phi (t,x,h) = x + \frac{h}{2}\Bigl(f(t,x)+f\bigl(t+h,\underbrace{x+h\cdot f(t,x)}_{Euler's\ method}\bigr)\Bigr)$$
Runge - Kutta method
The idea of Runge and Kutta was quite straightforward
Step6: Let's simulate the same system also with odeint, the standard ODE solver of the scipy Python package.
Step7: And now we compare the results. I marked the amplitude and position of the maxima with red dotted lines.
Step8: As you can see, Heun's method already achieves remarkable accuracy, even though it is a very simple method. Let's compare the results of odeint and Heun's method directly | Python Code:
import numpy as np
# Lotka Volterra model
# initialise parameters
k1 = 1.5
k2 = 1.
k3 = 3.
k4 = 1.
def my_dxdt(s,t):
"""Return the values of the derivatives of the Lotka-Volterra model."""
return [k1*s[0] - k2*s[0]*s[1], - k3*s[1]+k4*s[0]*s[1]]
def my_euler_solver(dxdt, s0, timegrid):
"""Implementation of a simple Euler method (constant step size)."""
# first species values are s0
s = s0
# do timesteps
for j, time in enumerate(timegrid):
# first time step, just save initial values
if j == 0:
result = [[value] for value in s0]
continue
# next time step, calculate values and save them
for i, species in enumerate(s):
hi = (timegrid[j] - timegrid[j-1])
species = species + dxdt(s,time)[i] * hi
result[i].append(species)
# update species with new values
s[0] = result[0][-1]
s[1] = result[1][-1]
return result
Explanation: ODE to joy
Jens Hahn - 27/05/2016
Continuous deterministic modelling with differential equations
Numerical integration
Every numerical procedure for solving an ODE is based on a discretisation of the system and on the difference quotient. It is easy to understand: just read $\frac{\text{d}\vec{x}}{\text{d}t}$ as $\frac{\Delta \vec{x}}{\Delta t}$. You can then multiply both sides of the equation by $\Delta t$ and obtain an equation describing the change of your variables during a certain time interval $\Delta t$:
$$ \Delta \vec{x} = \vec{f}(\vec{x}, t)\times \Delta t$$
Next step, the discretisation:
$$\vec{x}_{i+1} - \vec{x}_i = \vec{f}(\vec{x}_i, t)\times \Delta t$$
Next thing is putting the $\vec{x}_i$ on the other side and naming $\Delta t$ to $h$:
$$\vec{x}_{i+1} = \vec{x}_i + \vec{f}(\vec{x}_i, t)\times h$$
Of course, the smaller you choose the time interval $h$, the more accurate your result will be in comparison to the analytical solution.
So it's clear, we choose a tiny one, right? Not exactly: the smaller your time interval, the longer the simulation will take. We therefore need a compromise, and here the available solver libraries help us by constantly monitoring the numerical solution and adapting the "step size" $h$ automatically.
Euler's method
The Euler's method is the simplest way to solve ODEs numerically. It can be written in a short formula.
$h$ is again the time-step difference: $h_i = t_{i+1} - t_i$
Then, the solution looks like that:
$$\Phi (t,x,h) = x + h\cdot f(t,x)$$
Unfortunately, this method is highly dependent on the size of $h_i$: the smaller the step, the more accurate the solution.
<img src="Euler.png">
Another way to understand this is to look at the Riemann sum. You probably know it already: evaluate $f(x)$ and multiply it by the step size. So this is not a new idea to you.
<img src="Riemann.gif">
Let's test the method with our well-known predator-prey model (Lotka-Volterra):
End of explanation
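The remark above, that ODE libraries adapt the step size automatically, can be illustrated with a small step-doubling controller. This is only a sketch of the idea, not part of the original notebook; the function name, the tolerance and the halving/doubling rule are choices made here:
def euler_step_doubling(dxdt, s, t, h, tol=1e-4):
    # One full Euler step of size h.
    full = [si + h * di for si, di in zip(s, dxdt(s, t))]
    # Two half steps of size h/2 covering the same interval.
    half = [si + h / 2 * di for si, di in zip(s, dxdt(s, t))]
    half = [si + h / 2 * di for si, di in zip(half, dxdt(half, t + h / 2))]
    # The difference between the two results estimates the local error.
    err = max(abs(a - b) for a, b in zip(full, half))
    if err > tol:
        new_h = h / 2        # too inaccurate: suggest a smaller step
    elif err < tol / 10:
        new_h = 2 * h        # very accurate: a larger step is acceptable
    else:
        new_h = h
    return half, new_h
With the Lotka-Volterra model below this could be called as euler_step_doubling(my_dxdt, [5, 10], 0.0, 0.01), feeding the returned step size into the next call.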
import matplotlib.pyplot as plt
%matplotlib inline
# timegrids
timegrid_e3 = np.linspace(0,20,2000)
timegrid_e4 = np.linspace(0,20,20000)
# get solutions
s0=[5,10]
my_euler_result_e3 = my_euler_solver(my_dxdt, s0, timegrid_e3)
s0=[5,10]
my_euler_result_e4 = my_euler_solver(my_dxdt, s0, timegrid_e4)
Explanation: To test the accuracy, we run the simulation with 2 different time grids, one with a step size of 0.01 and one with step size 0.001
End of explanation
def my_heun_solver(dxdt, s0, timegrid):
"""Implementation of the Heun method (constant step size)."""
# first species values are s0
s = s0
# do timesteps
for j, time in enumerate(timegrid):
# first time step, just save initial values
if j == 0:
result = [[value] for value in s0]
continue
# next time step, calculate values and save them
for i, species in enumerate(s):
hi = (timegrid[j] - timegrid[j-1])
species = species + (hi/2)*(dxdt(s,time)[i]+dxdt([s[k]+hi*dxdt(s,time)[k] for k in range(len(s))], time+hi)[i])
result[i].append(species)
# update species with new values
s[0] = result[0][-1]
s[1] = result[1][-1]
return result
import matplotlib.pyplot as plt
%matplotlib inline
# timegrids
timegrid_e3 = np.linspace(0,20,2000)
timegrid_e4 = np.linspace(0,20,20000)
# get solutions
s0=[5,10]
my_heun_result_e3 = my_heun_solver(my_dxdt, s0, timegrid_e3)
s0=[5,10]
my_heun_result_e4 = my_heun_solver(my_dxdt, s0, timegrid_e4)
Explanation: Heun's method
If you want to increase the accuracy of your method, you could use the trapezoidal rule you know from the approximation of integrals. The second point is of course missing, but here you could use Euler's method!
As we will see, this method is a huge improvement compared to Euler's method!
$$\Phi (t,x,h) = x + \frac{h}{2}\Bigl(f(t,x)+f\bigl(t+h,\underbrace{x+h\cdot f(t,x)}_{Euler's\ method}\bigr)\Bigr)$$
Runge - Kutta method
The idea of Runge and Kutta was quite straightforward: why not apply Heun's idea recursively? To get the second point you do not use Euler's method but the trapezoidal rule again... and again... and again. This family of methods is still widely used and works very well for most ODE systems!
End of explanation
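The notebook describes Runge-Kutta but does not implement it. For completeness, here is a sketch of the classical fourth-order Runge-Kutta step written in the same calling convention as my_euler_solver and my_heun_solver above (this is an addition, not the author's code):
def my_rk4_solver(dxdt, s0, timegrid):
    """Classical 4th-order Runge-Kutta method (constant step size)."""
    result = [[value] for value in s0]
    s = list(s0)
    for j in range(1, len(timegrid)):
        t = timegrid[j - 1]
        h = timegrid[j] - timegrid[j - 1]
        # Four slope evaluations per step.
        k1 = dxdt(s, t)
        k2 = dxdt([si + h / 2 * ki for si, ki in zip(s, k1)], t + h / 2)
        k3 = dxdt([si + h / 2 * ki for si, ki in zip(s, k2)], t + h / 2)
        k4 = dxdt([si + h * ki for si, ki in zip(s, k3)], t + h)
        # Weighted average of the slopes.
        s = [si + h / 6 * (a + 2 * b + 2 * c + d)
             for si, a, b, c, d in zip(s, k1, k2, k3, k4)]
        for i, value in enumerate(s):
            result[i].append(value)
    return result
A possible usage, mirroring the cells above: my_rk4_result_e3 = my_rk4_solver(my_dxdt, [5, 10], timegrid_e3).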
import scipy.integrate
timegrid = np.linspace(0,20,2000)
s0 = [5,10]
result = scipy.integrate.odeint(my_dxdt, s0, timegrid)
Explanation: Let's simulate the same system also with odeint, the standard ODE solver of the scipy Python package.
End of explanation
plt.figure(1)
plt.plot(timegrid_e3, my_euler_result_e3[0], label="X 0.01")
plt.plot(timegrid_e3, my_euler_result_e3[1], label="Y 0.01")
plt.plot(timegrid_e4, my_euler_result_e4[0], label="X 0.001")
plt.plot(timegrid_e4, my_euler_result_e4[1], label="Y 0.001")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Euler method')
plt.figure(2)
plt.plot(timegrid_e3, my_heun_result_e3[0], label="X 0.01")
plt.plot(timegrid_e3, my_heun_result_e3[1], label="Y 0.01")
plt.plot(timegrid_e4, my_heun_result_e4[0], label="X 0.001")
plt.plot(timegrid_e4, my_heun_result_e4[1], label="Y 0.001")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Heun method')
plt.figure(3)
plt.plot(timegrid, result.T[0], label='X')
plt.plot(timegrid, result.T[1], label='Y')
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('odeint')
Explanation: And now we compare the results. I marked the amplitude and position of the maxima with red dotted lines.
End of explanation
plt.plot(timegrid, result.T[0], label='X odeint')
plt.plot(timegrid, result.T[1], label='Y odeint')
plt.legend(loc=2)
plt.plot(timegrid_e4, my_heun_result_e4[0], label="X Heun")
plt.plot(timegrid_e4, my_heun_result_e4[1], label="Y Heun")
plt.plot([0,20], [13.67, 13.67], 'r--')
plt.plot([4.32,4.32], [0,14], 'r--')
plt.plot([8.9,8.9], [0,14], 'r--')
plt.plot([13.46,13.46], [0,14], 'r--')
plt.plot([18.06,18.06], [0,14], 'r--')
plt.legend(loc=2)
plt.title('Comparison odeint & Heun method')
Explanation: As you can see, Heun's method already achieves remarkable accuracy, even though it is a very simple method. Let's compare the results of odeint and Heun's method directly:
End of explanation |
6,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter Notebook is an excellent tool for
Step1: As you saw, a code cell can contain several lines of Python code (or another supported language), including function and class definitions. Often a cell has a single line, just to perform a calculation
Step2: The notebook is dynamic because every cell is editable. Try clicking on the cell above and changing the exponent from 100 to 1000. After editing, press [SHIFT]+[RETURN] to run the code.
Warning
Step3: It is also easy to produce plots, using the magic command %matplotlib inline and functions from the matplotlib library
Step4: Who uses the Jupyter Notebook
The Jupyter Notebook and its predecessor, the iPython Notebook, are already widely used in | Python Code:
def fatorial(n):
return 1 if n < 2 else n * fatorial(n-1)
fatorial(42)
Explanation: Jupyter Notebook is an excellent tool for:
Interactive data exploration;
Data analysis (i.e. analytics);
Learning Python, R, Julia and dozens of other supported languages
The name Jupyter is a blend of Julia, Python and R, the first three languages supported after the iPython Notebook project became language-agnostic.
Jupyter is an application that lets you view and edit interactive documents called notebooks, just as Excel is an application for viewing and editing interactive documents called spreadsheets.
A notebook is made up of cells containing text, code and images. You are reading a text cell right now. Next comes a code cell:
End of explanation
2**100
Explanation: As you saw, a code cell can contain several lines of Python code (or another supported language), including function and class definitions. Often a cell has a single line, just to perform a calculation:
End of explanation
from IPython.display import IFrame
IFrame('http://jupyter.org/', width='100%', height=400)
Explanation: The notebook is dynamic because every cell is editable. Try clicking on the cell above and changing the exponent from 100 to 1000. After editing, press [SHIFT]+[RETURN] to run the code.
Warning: if you are viewing this notebook online, you probably do not have permission to edit it. To edit it you need to install the Jupyter Notebook on your computer and run this document locally.
Besides being an application, Jupyter is a platform that includes very powerful libraries. For example, a single function call can embed a web page in a notebook:
End of explanation
%matplotlib inline
# run the magic command above only once, to configure inline plot display
import matplotlib.pyplot as plt
plt.plot([1, 2, 4, 8, 16])
plt.ylabel('liters of ice cream')
plt.show()
Explanation: It is also easy to produce plots, using the magic command %matplotlib inline and functions from the matplotlib library:
End of explanation
IFrame('http://forpythonquants.com/', width='100%', height=400)
Explanation: Who uses the Jupyter Notebook
The Jupyter Notebook and its predecessor, the iPython Notebook, are already widely used in:
Analysis of data from scientific experiments (scientific computing)
Quantitative analysis of financial data (quantitative analysis)
In quantitative finance, the first conference dedicated to the use of Jupyter/iPython notebooks took place in 2014 in New York, with later editions held there and in London:
To learn more
Main sites:
Jupyter Notebook
iPython, which provides the kernels for using Python 2 or Python 3 in Jupyter
pandas
matplotlib
NumPy and SciPy
End of explanation |