Unnamed: 0 | text_prompt | code_prompt
---|---|---
4,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Levy Stable models of Stochastic Volatility
This tutorial demonstrates inference using the Levy Stable distribution through a motivating example of a non-Gaussian stochastic volatility model.
Inference with stable distributions is tricky because the density Stable.log_prob() is not defined. In this tutorial we demonstrate two approaches to inference
Step1: Of interest are the log returns, i.e. the log ratio of price on two subsequent days.
Step2: Fitting a single distribution to log returns <a class="anchor" id="fitting"></a>
Log returns appear to be heavy-tailed. First let's fit a single distribution to the returns. To fit the distribution, we'll use a likelihood-free statistical inference algorithm, EnergyDistance, which matches fractional moments of observed data and can handle data with heavy tails.
Step3: This is a poor fit, but that was to be expected since we are mixing all time steps together
Step4: We use two reparameterizers
Step5: It appears the log returns exhibit very little skew, but exhibit a stability parameter slightly but significantly less than 2. This contrasts with the usual Normal model corresponding to a Stable distribution with skew=0 and stability=2. We can now visualize the estimated volatility | Python Code:
import math
import os
import torch
import pyro
import pyro.distributions as dist
from matplotlib import pyplot
from torch.distributions import constraints
from pyro import poutine
from pyro.contrib.examples.finance import load_snp500
from pyro.infer import EnergyDistance, Predictive, SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.infer.reparam import DiscreteCosineReparam, StableReparam
from pyro.optim import ClippedAdam
from pyro.ops.tensor_utils import convolve
%matplotlib inline
assert pyro.__version__.startswith('1.7.0')
smoke_test = ('CI' in os.environ)
df = load_snp500()
dates = df.Date.to_numpy()
x = torch.tensor(df["Close"]).float()
x.shape
pyplot.figure(figsize=(9, 3))
pyplot.plot(x)
pyplot.yscale('log')
pyplot.ylabel("index")
pyplot.xlabel("trading day")
pyplot.title("S&P 500 from {} to {}".format(dates[0], dates[-1]));
Explanation: Levy Stable models of Stochastic Volatility
This tutorial demonstrates inference using the Levy Stable distribution through a motivating example of a non-Gaussian stochastic volatility model.
Inference with stable distributions is tricky because the density Stable.log_prob() is not defined. In this tutorial we demonstrate two approaches to inference: (i) using the poutine.reparam effect to transform models into a tractable form, and (ii) using the likelihood-free loss EnergyDistance with SVI.
Summary
Stable.log_prob() is undefined.
Stable inference requires either reparameterization or a likelihood-free loss.
Reparameterization:
The poutine.reparam() handler can transform models using various strategies.
The StableReparam strategy can be used for Stable distributions in SVI or HMC.
The LatentStableReparam strategy is a little cheaper, but cannot be used for likelihoods.
The DiscreteCosineReparam strategy improves geometry in batched latent time series models.
Likelihood-free loss with SVI:
The EnergyDistance loss allows stable distributions in the guide and in model likelihoods.
Table of contents
Daily S&P data
Fitting a single distribution to log returns using EnergyDistance
Modeling stochastic volatility using poutine.reparam
Daily S&P 500 data <a class="anchor" id="data"></a>
The following daily closing prices for the S&P 500 were loaded from Yahoo finance.
End of explanation
pyplot.figure(figsize=(9, 3))
r = (x[1:] / x[:-1]).log()
pyplot.plot(r, "k", lw=0.1)
pyplot.title("daily log returns")
pyplot.xlabel("trading day");
pyplot.figure(figsize=(9, 3))
pyplot.hist(r.numpy(), bins=200)
pyplot.yscale('log')
pyplot.ylabel("count")
pyplot.xlabel("daily log returns")
pyplot.title("Empirical distribution. mean={:0.3g}, std={:0.3g}".format(r.mean(), r.std()));
Explanation: Of interest are the log returns, i.e. the log ratio of price on two subsequent days.
End of explanation
def model():
stability = pyro.param("stability", torch.tensor(1.9),
constraint=constraints.interval(0, 2))
skew = 0.
scale = pyro.param("scale", torch.tensor(0.1), constraint=constraints.positive)
loc = pyro.param("loc", torch.tensor(0.))
with pyro.plate("data", len(r)):
return pyro.sample("r", dist.Stable(stability, skew, scale, loc), obs=r)
%%time
pyro.clear_param_store()
pyro.set_rng_seed(1234567890)
num_steps = 1 if smoke_test else 201
optim = ClippedAdam({"lr": 0.1, "lrd": 0.1 ** (1 / num_steps)})
svi = SVI(model, lambda: None, optim, EnergyDistance())
losses = []
for step in range(num_steps):
loss = svi.step()
losses.append(loss)
if step % 20 == 0:
print("step {} loss = {}".format(step, loss))
print("-" * 20)
pyplot.figure(figsize=(9, 3))
pyplot.plot(losses)
pyplot.yscale("log")
pyplot.ylabel("loss")
pyplot.xlabel("SVI step")
for name, value in sorted(pyro.get_param_store().items()):
if value.numel() == 1:
print("{} = {:0.4g}".format(name, value.squeeze().item()))
samples = poutine.uncondition(model)().detach()
pyplot.figure(figsize=(9, 3))
pyplot.hist(samples.numpy(), bins=200)
pyplot.yscale("log")
pyplot.xlabel("daily log returns")
pyplot.ylabel("count")
pyplot.title("Posterior predictive distribution");
Explanation: Fitting a single distribution to log returns <a class="anchor" id="fitting"></a>
Log returns appear to be heavy-tailed. First let's fit a single distribution to the returns. To fit the distribution, we'll use a likelihood-free statistical inference algorithm, EnergyDistance, which matches fractional moments of observed data and can handle data with heavy tails.
End of explanation
def model(data):
# Note we avoid plates because we'll later reparameterize along the time axis using
# DiscreteCosineReparam, breaking independence. This requires .unsqueeze()ing scalars.
h_0 = pyro.sample("h_0", dist.Normal(0, 1)).unsqueeze(-1)
sigma = pyro.sample("sigma", dist.LogNormal(0, 1)).unsqueeze(-1)
v = pyro.sample("v", dist.Normal(0, 1).expand(data.shape).to_event(1))
log_h = pyro.deterministic("log_h", h_0 + sigma * v.cumsum(dim=-1))
sqrt_h = log_h.mul(0.5).exp().clamp(min=1e-8, max=1e8)
# Observed log returns, assumed to be a Stable distribution scaled by sqrt(h).
r_loc = pyro.sample("r_loc", dist.Normal(0, 1e-2)).unsqueeze(-1)
r_skew = pyro.sample("r_skew", dist.Uniform(-1, 1)).unsqueeze(-1)
r_stability = pyro.sample("r_stability", dist.Uniform(0, 2)).unsqueeze(-1)
pyro.sample("r", dist.Stable(r_stability, r_skew, sqrt_h, r_loc * sqrt_h).to_event(1),
obs=data)
Explanation: This is a poor fit, but that was to be expected since we are mixing all time steps together: we would expect this to be a scale-mixture of distributions (Normal, or Stable), but are modeling it as a single distribution (Stable in this case).
Modeling stochastic volatility <a class="anchor" id="modeling"></a>
We'll next fit a stochastic volatility model.
Let's begin with a constant volatility model where log price $p$ follows Brownian motion
$$
\log p_t = \log p_{t-1} + w_t \sqrt h
$$
where $w_t$ is a sequence of standard white noise. We can rewrite this model in terms of the log returns $r_t=\log(p_t\,/\,p_{t-1})$:
$$
r_t = w_t \sqrt h
$$
Now to account for volatility clustering we can generalize to a stochastic volatility model where volatility $h$ depends on time $t$. Among the simplest such models is one where $h_t$ follows geometric Brownian motion
$$
\log h_t = \log h_{t-1} + \sigma v_t
$$
where again $v_t$ is a sequence of standard white noise. The entire model thus consists of a geometric Brownian motion $h_t$ that determines the diffusion rate of another geometric Brownian motion $p_t$:
$$
\log h_t = \log h_{t-1} + \sigma v_t \\
\log p_t = \log p_{t-1} + w_t \sqrt{h_t}
$$
Usually $v_t$ and $w_t$ are both Gaussian. We will generalize to a Stable distribution for $w_t$, learning three parameters (stability, skew, and location), but still scaling by $\sqrt{h_t}$.
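To build intuition, here is a small, purely illustrative simulation of this generative process in the Gaussian case (our own sketch with made-up variable names; it is not used anywhere else in the tutorial):
# Illustrative simulation only (Gaussian noise); not part of the Pyro model.
T, sigma = 1000, 0.1
v = torch.randn(T)
w = torch.randn(T)
log_h = torch.cumsum(sigma * v, dim=0)   # log h_t = log h_{t-1} + sigma * v_t, with log h_0 = 0
r = w * log_h.mul(0.5).exp()             # r_t = w_t * sqrt(h_t)
log_p = torch.cumsum(r, dim=0)           # log p_t = log p_{t-1} + r_t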
Our Pyro model will sample the increments $v_t$ and record the computation of $\log h_t$ via pyro.deterministic. Note that there are many ways of implementing this model in Pyro, and geometry can vary depending on implementation. The following version seems to have good geometry, when combined with reparameterizers.
End of explanation
reparam_model = poutine.reparam(model, {"v": DiscreteCosineReparam(),
"r": StableReparam()})
%%time
pyro.clear_param_store()
pyro.set_rng_seed(1234567890)
num_steps = 1 if smoke_test else 1001
optim = ClippedAdam({"lr": 0.05, "betas": (0.9, 0.99), "lrd": 0.1 ** (1 / num_steps)})
guide = AutoDiagonalNormal(reparam_model)
svi = SVI(reparam_model, guide, optim, Trace_ELBO())
losses = []
for step in range(num_steps):
loss = svi.step(r) / len(r)
losses.append(loss)
if step % 50 == 0:
median = guide.median()
print("step {} loss = {:0.6g}".format(step, loss))
print("-" * 20)
for name, (lb, ub) in sorted(guide.quantiles([0.325, 0.675]).items()):
if lb.numel() == 1:
lb = lb.squeeze().item()
ub = ub.squeeze().item()
print("{} = {:0.4g} ± {:0.4g}".format(name, (lb + ub) / 2, (ub - lb) / 2))
pyplot.figure(figsize=(9, 3))
pyplot.plot(losses)
pyplot.ylabel("loss")
pyplot.xlabel("SVI step")
pyplot.xlim(0, len(losses))
pyplot.ylim(min(losses), 20)
Explanation: We use two reparameterizers: StableReparam to handle the Stable likelihood (since Stable.log_prob() is undefined), and DiscreteCosineReparam to improve geometry of the latent Gaussian process for v. We'll then use reparam_model for both inference and prediction.
End of explanation
fig, axes = pyplot.subplots(2, figsize=(9, 5), sharex=True)
pyplot.subplots_adjust(hspace=0)
axes[1].plot(r, "k", lw=0.2)
axes[1].set_ylabel("log returns")
axes[1].set_xlim(0, len(r))
# We will pull out median log returns using the autoguide's .median() and poutines.
with torch.no_grad():
pred = Predictive(reparam_model, guide=guide, num_samples=20, parallel=True)(r)
log_h = pred["log_h"]
axes[0].plot(log_h.median(0).values, lw=1)
axes[0].fill_between(torch.arange(len(log_h[0])),
log_h.kthvalue(2, dim=0).values,
log_h.kthvalue(18, dim=0).values,
color='red', alpha=0.5)
axes[0].set_ylabel("log volatility")
stability = pred["r_stability"].median(0).values.item()
axes[0].set_title("Estimated index of stability = {:0.4g}".format(stability))
axes[1].set_xlabel("trading day");
Explanation: It appears the log returns exhibit very little skew, but exhibit a stability parameter slightly but significantly less than 2. This contrasts with the usual Normal model corresponding to a Stable distribution with skew=0 and stability=2. We can now visualize the estimated volatility:
End of explanation |
4,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Getting started with TensorFlow </h1>
In this notebook, you play around with the TensorFlow Python API.
Step1: <h2> Adding two tensors </h2>
First, let's try doing this using numpy, the Python numeric package. numpy code is immediately evaluated.
Step2: The equivalent code in TensorFlow consists of two steps
Step3: c is an Op ("Add") that returns a tensor of shape (3,) and holds int32. The shape is inferred from the computation graph.
Try the following in the cell above
Step4: <h2> Using a feed_dict </h2>
Same graph, but without hardcoding inputs at build stage
Step5: <h2> Heron's Formula in TensorFlow </h2>
The area of a triangle whose three sides are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$, where $s=\frac{a+b+c}{2}$
Look up the available operations at https
Step6: <h2> Placeholder and feed_dict </h2>
More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through which inputs will be passed in at run-time.
Step7: tf.eager
tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm because the lazy evaluation paradigm is what allows for multi-device support and distribution.
<p>
One thing you could do is to develop using tf.eager and then comment out the eager execution and add in the session management code.
<b>You may need to restart the session to try this out.</b> | Python Code:
import tensorflow as tf
import numpy as np
print(tf.__version__)
Explanation: <h1> Getting started with TensorFlow </h1>
In this notebook, you play around with the TensorFlow Python API.
End of explanation
a = np.array([5, 3, 8])
b = np.array([3, -1, 2])
c = np.add(a, b)
print(c)
Explanation: <h2> Adding two tensors </h2>
First, let's try doing this using numpy, the Python numeric package. numpy code is immediately evaluated.
End of explanation
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
print(c)
Explanation: The equivalent code in TensorFlow consists of two steps:
<p>
<h3> Step 1: Build the graph </h3>
End of explanation
with tf.Session() as sess:
result = sess.run(c)
print(result)
Explanation: c is an Op ("Add") that returns a tensor of shape (3,) and holds int32. The shape is inferred from the computation graph.
Try the following in the cell above:
<ol>
<li> Change the 5 to 5.0, and similarly the other five numbers. What happens when you run this cell? </li>
<li> Add an extra number to a, but leave b at the original (3,) shape. What happens when you run this cell? </li>
<li> Change the code back to a version that works </li>
</ol>
<p/>
<h3> Step 2: Run the graph
End of explanation
a = tf.placeholder(dtype=tf.int32, shape=(None,)) # batchsize x scalar
b = tf.placeholder(dtype=tf.int32, shape=(None,))
c = tf.add(a, b)
with tf.Session() as sess:
result = sess.run(c, feed_dict={
a: [3, 4, 5],
b: [-1, 2, 3]
})
print(result)
Explanation: <h2> Using a feed_dict </h2>
Same graph, but without hardcoding inputs at build stage
End of explanation
def compute_area(sides):
# slice the input to get the sides
a = sides[:,0] # 5.0, 2.3
b = sides[:,1] # 3.0, 4.1
c = sides[:,2] # 7.1, 4.8
# Heron's formula
s = (a + b + c) * 0.5 # (a + b) is a short-cut to tf.add(a, b)
areasq = s * (s - a) * (s - b) * (s - c) # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
return tf.sqrt(areasq)
with tf.Session() as sess:
# pass in two triangles
area = compute_area(tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]))
result = sess.run(area)
print(result)
Explanation: <h2> Heron's Formula in TensorFlow </h2>
The area of a triangle whose three sides are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$, where $s=\frac{a+b+c}{2}$
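As a quick sanity check (a worked example we are adding; it is not in the original notebook): for the first triangle used below, with sides $(5.0, 3.0, 7.1)$, $s = 7.55$ and the area is $\sqrt{7.55 \cdot 2.55 \cdot 4.55 \cdot 0.45} \approx 6.28$, which should match the first value TensorFlow prints.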
Look up the available operations at https://www.tensorflow.org/api_docs/python/tf
End of explanation
with tf.Session() as sess:
sides = tf.placeholder(tf.float32, shape=(None, 3)) # batchsize number of triangles, 3 sides
area = compute_area(sides)
result = sess.run(area, feed_dict = {
sides: [
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]
})
print(result)
Explanation: <h2> Placeholder and feed_dict </h2>
More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through which inputs will be passed in at run-time.
End of explanation
import tensorflow as tf
tf.enable_eager_execution()
def compute_area(sides):
# slice the input to get the sides
a = sides[:,0] # 5.0, 2.3
b = sides[:,1] # 3.0, 4.1
c = sides[:,2] # 7.1, 4.8
# Heron's formula
s = (a + b + c) * 0.5 # (a + b) is a short-cut to tf.add(a, b)
areasq = s * (s - a) * (s - b) * (s - c) # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
return tf.sqrt(areasq)
area = compute_area(tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]))
print(area)
Explanation: tf.eager
tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm because the lazy evaluation paradigm is what allows for multi-device support and distribution.
<p>
One thing you could do is to develop using tf.eager and then comment out the eager execution and add in the session management code.
<b>You may need to restart the session to try this out.</b>
End of explanation |
4,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automating the Analysis HIV Immunogen Antigenic Characteristics
(Alt Title
Step1: (1) Demo Script | Python Code:
#This is the MSD .txt output file I'll be working with:
data = open('data.txt')
data.read()
Explanation: Automating the Analysis HIV Immunogen Antigenic Characteristics
(Alt Title: I'm getting lazy)
Michael Chambers
What I Do: A lotta quality control for HIV Immunogens
Objective: Automate the data analysis for these immunogens
My Goals:
Convert MSD output file to .csv
Break up the .csv into 8x12 arrays
Average column duplicates and create graph
Import raw data into Prism for further analysis
What I made:
A script that accomplishes ALMOST all of the above
A rudimentary module to easily manipulate the data from each plate
DEMO TIME!
End of explanation
#Import msd_module
%matplotlib inline
import msd_module
#Check Docstring
msd_module?
project1 = msd_module.msd_96()
project1.create_df('data.txt')
project1.df
project1.split_plates()
project1.dilution
project1.create_dilution(5,3,8)
project1.dilution
project1.split_plates()
Explanation: (1) Demo Script: msd_script.py
(2) Demo Module: msd_module.py
End of explanation |
4,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iowa State Liquor Sale Projection
The goal for this task was to provide a projection of liquor sales in the state of Iowa based on a dataset of sales of liquor from distributors to individual stores.
The data provided was the item that was sold, the date it was sold, the location of the store, the quantity of what was sold, the price per item from distributor to store, and then the retail price of the item.
First, I need to import all necessary libraries.
Step1: The next step is to read in the data and do any necessary cleaning. This data required a large amount of cleaning and imputing missing data.
Also of note was that this was all completed on a dataset that was 10 percent of the original data - 30,000 records instead of 300,000. The projections will later be run on the full data, but the model will be built on the smaller set.
Step2: There were several records with missing counties but with the city included. Many of these cities had other records in the dataframe with the county listed, so the code below maps those values to the missing values.
Step3: The code below reads in county level population data for use in analysis.
Step4: There are over 70 categories of liquor in the dataset. The code below consolidates these.
Step5: The next step is to review the data. The preliminary data exploration shows that there is a wide distribution of sale dollars, meaning this feature should probably not be used. There is also an uptick in sales in the 4th quarter of each year. Certain counties also have much higher volume of sales than others.
While there are a number of features in this dataset, and there was much effort put into organizing the county information and categories of liquors, these may not be useful in projecting sales for the remainder of 2016.
Step6: The next step is to remove all columns that are not needed and make transformations to get metrics that are more useful.
Step7: The next step is to finalize the features needed to create a model to predict average total store profit by county for a time period, given the average number of bottles sold of each type of liquor by each store in that county during that time period. These counts of bottles sold are normalized by dividing the counts by the number of people in the county.
Step8: Now that those features are finalized above, we can start to build a model.
Step9: The model score is less than ideal, but we will continue to predict 2016 sales using it and attempt to determine the best county to open a store based on the profit that can be expected for an average store in the county, after accounting for the number of people living there. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
import statsmodels.api as sm
from sklearn.cross_validation import train_test_split
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.grid_search import GridSearchCV
import warnings
warnings.filterwarnings('ignore')
Explanation: Iowa State Liquor Sale Projection
The goal for this task was to provide a projection of liquor sales in the state of Iowa based on a dataset of sales of liquor from distributors to individual stores.
The data provided was the item that was sold, the date it was sold, the location of the store, the quantity of what was sold, the price per item from distributor to store, and then the retail price of the item.
First, I need to import all necessary libraries.
End of explanation
## Load the data into a DataFrame
df_10 = pd.read_csv('/Users/jcarr/Documents/GA/Data/Iowa_Liquor_sales_sample_10pct.csv')
## Transform the data
df_10["Date"] = pd.to_datetime(df_10["Date"])
df_10['quarter'] = df_10.Date.apply(lambda x: x.quarter)
df_10['month'] = df_10.Date.apply(lambda x: x.month)
df_10['year'] = df_10.Date.apply(lambda x: x.year)
df_10.rename(columns=lambda x: x.replace(' ','_'), inplace=True)
df_10.rename(columns=lambda x: x.replace('(',''), inplace=True)
df_10.rename(columns=lambda x: x.replace(')',''), inplace=True)
# Get dollar amounts into usable floats
df_10['State_Bottle_Cost'] = df_10['State_Bottle_Cost'].replace( '[\$,]','', regex=True).astype(float)
df_10['State_Bottle_Retail'] = df_10['State_Bottle_Retail'].replace( '[\$,)]','', regex=True).astype(float)
df_10['Sale_Dollars'] = df_10['Sale_Dollars'].replace( '[\$,)]','', regex=True).astype(float)
Explanation: The next step is to read in the data and do any necessary cleaning. This data required a large amount of cleaning and imputing missing data.
Also of note was that this was all completed on a dataset that was 10 percent of the original data - 30,000 records instead of 300,000. The projections will later be run on the full data, but the model will be built on the smaller set.
End of explanation
cnty_not_null = df_10[['City','County']][df_10.County.notnull()].drop_duplicates()
cnty_dict = dict(zip(cnty_not_null.City, cnty_not_null.County))
# Using dictionary created w/ distinct county/city combinations, maps null values.
df_10['County'] = df_10['City'].map(cnty_dict)
# If necessary, assigning counties for cities that can't be mapped
df_10.loc[df_10.City == 'TABOR', 'County'] = 'Mills'
df_10.loc[df_10.City == 'SEYMOUR', 'County'] = 'Wayne'
df_10.loc[df_10.City == 'RUNNELLS', 'County'] = 'Polk'
# Making counties and cities uppercase to merge w/ pop data dataframes below
df_10['County'] = df_10['County'].map(lambda x: x.upper())
df_10['City'] = df_10['City'].map(lambda x: x.upper())
Explanation: There were several records with missing counties but with the city included. Many of these cities had other records in the dataframe with the county listed, so the code below maps those values to the missing values.
End of explanation
county = pd.read_csv('/Users/jcarr/Downloads/pop.csv')
county["Year"] = pd.to_datetime(county["Year"])
county['County'] = county['County'].map(lambda x: x.upper())
county_2015 = county.loc[county['Year'].dt.year == 2015]
county_data = county_2015.loc[county_2015.City.str.contains('Balance')]
Explanation: The code below reads in county level population data for use in analysis.
End of explanation
# Several records did not have a corresponding Category_Name
df_10.loc[df_10.Item_Description.str.contains('Hennessy|Cognac|VSOP'), 'Category_Name'] = 'BRANDIES'
df_10.loc[df_10.Item_Description.str.contains('Vodka'), 'Category_Name'] = 'VODKA'
df_10.loc[df_10.Item_Description.str.contains('Rum'), 'Category_Name'] = 'RUM'
df_10.loc[df_10.Item_Description.str.contains('Amaretto|Liqueur|Grand Marnier'), 'Category_Name'] = 'LIQUEUERS'
df_10.loc[df_10.Item_Description.str.contains('Reposado|Tequila|Anejo'), 'Category_Name'] = 'TEQUILA'
df_10.loc[df_10.Item_Description.str.contains('Whisk|Rye'), 'Category_Name'] = 'WHISKIES'
# Creating dictionary of categories for mapping to below
cat_list = list(df_10.Category_Name.unique())
categories = {}
for i in cat_list:
if 'VODKA' in str(i):
categories[i] = 'VODKA'
if 'BRAND' in str(i):
categories[i] = 'BRANDIES'
if 'WHISK' in str(i) or 'SINGLE MALT SCOTCH' == str(i) or 'SCOTCH WHISKIES' == str(i)\
or 'BOURBON' in str(i) or 'RYE' in str(i):
categories[i] = 'WHISKIES'
if 'RUM' in str(i):
categories[i] = 'RUM'
if 'AMERICAN DRY GINS' == str(i) or 'IMPORTED DRY GINS' == str(i) or 'FLAVORED GINS' == str(i):
categories[i] = 'GINS'
if 'SCHNAPPS' in str(i) or 'LIQUEUER' in str(i) or 'LIQUEUR' in str(i) or 'CREME' in str(i) or 'COCKTAILS' in str(i)\
or 'AMARETTO' in str(i) or 'ANISETTE' in str(i) or 'TRIPLE SEC' in str(i) or 'AMERICAN SLOE GINS' == str(i):
categories[i] = 'LIQUEUERS'
if str(i) == 'AMERICAN ALCOHOL':
categories[i] = 'AMERICAN ALCOHOL'
if 'TEQUILA' in str(i):
categories[i] = 'TEQUILA'
if 'SPECIALTY' in str(i) or str(i) == 'HIGH PROOF BEER - AMERICAN' or i is None:
categories[i] = 'OTHER'
# Binning categories based on dictionary created above
df_10['Cat_New'] = df_10['Category_Name'].map(categories)
# Assign null categories to OTHER
df_10.loc[df_10.Cat_New.isnull(), 'Cat_New'] = 'OTHER'
Explanation: There are over 70 categories of liquor in the dataset. The code below consolidates these.
End of explanation
sns.distplot(df_10.groupby('Store_Number').Sale_Dollars.sum())
plt.show()
df_10.groupby(['quarter','year']).Cat_New.value_counts().unstack(level=0).plot(kind='bar', legend=True)
plt.xlabel('Category')
plt.ylabel('Total Transactions by Category and Quarter/Year')
plt.show()
df_10.groupby('Cat_New').Bottles_Sold.sum().sort_values(ascending = False).plot(kind='bar')
plt.title('Total Bottles Sold by Category')
plt.xlabel('Category')
plt.ylabel('Bottles')
plt.show()
df_10.groupby('Cat_New').Sale_Dollars.sum().sort_values(ascending = False).plot(kind='bar')
plt.title('Total Dollars Sold by Category')
plt.xlabel('Category')
plt.ylabel('Sale Dollars')
plt.show()
df_10.groupby('County').Sale_Dollars.sum().plot(kind="hist")
plt.xlabel('Sale Dollars By County')
plt.show()
df_10.groupby('County').Bottles_Sold.sum().plot(kind="hist")
plt.xlabel('Bottles Sold By County')
plt.show()
Explanation: The next step is to review the data. The preliminary data exploration shows that there is a wide distribution of sale dollars, meaning this feature should probably not be used. There is also an uptick in sales in the 4th quarter of each year. Certain counties also have much higher volume of sales than others.
While there are a number of features in this dataset, and there was much effort put into organizing the county information and categories of liquors, these may not be useful in projecting sales for the remainder of 2016.
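For instance, one quick way to quantify the fourth-quarter uptick (an optional check we are adding here; it is not in the original analysis) is to total sales by quarter:
# Optional check (not in the original notebook): total sale dollars by quarter.
print(df_10.groupby('quarter').Sale_Dollars.sum())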
End of explanation
# Getting column of profit per bottle per transaction
# NOTE: the cell that originally built `new_df` (df_10 with unneeded columns dropped) is not shown here,
# so we assume it starts from the cleaned df_10.
new_df = df_10.copy()
new_df['Bottle_Profit'] = new_df.State_Bottle_Retail - new_df.State_Bottle_Cost
## Counts of stores by county
store_count_county = pd.DataFrame(new_df.groupby('County').Store_Number.nunique()).reset_index()
store_count_county.rename(columns= {'Store_Number':'Stores'}, inplace = True)
## Merging county pop data
county_df = pd.merge(new_df, county_data, on = 'County', how = 'left')
county_df.drop(['City','FIPS','Primary County Coordinates','Year'], axis=1, inplace = True)
## Merge df with all data, counts of stores by county
county_df2 = pd.merge(county_df, store_count_county, how = 'left', on = 'County')
## People per store by county (note: despite its name, 'Stores_Capita' holds population / number of stores)
county_df2['Stores_Capita'] = county_df2.Estimate / county_df2.Stores
## Uses profit from retail - cost to calculate profit per entire sale
county_df2['Sale_Profit'] = county_df2.Bottles_Sold * county_df2.Bottle_Profit
Explanation: The next step is to remove all columns that are not needed and make transformations to get metrics that are more useful.
End of explanation
store_profit_q1_15 = county_df2[(county_df2.quarter == 1) & (county_df2.year == 2015)].groupby(['County','Store_Number']).Sale_Profit.sum().reset_index()
Q1_15_y = store_profit_q1_15.groupby('County').Sale_Profit.mean().reset_index()
store_profit_q234_15 = county_df2[(county_df2.quarter != 1) & (county_df2.year == 2015)].groupby(['County','Store_Number']).Sale_Profit.sum().reset_index()
Q234_15_y = store_profit_q234_15.groupby('County').Sale_Profit.mean().reset_index()
test_cnty = county_df2[['Store_Number','quarter','year','County','Cat_New','Bottles_Sold','Estimate','Stores','Stores_Capita','Sale_Profit']]
cnty_val_X_train = test_cnty.loc[(test_cnty.quarter == 1) & (test_cnty.year == 2015)]
cnty_val_X_train = pd.pivot_table(cnty_val_X_train, values='Bottles_Sold', index=['Store_Number','County','Stores_Capita'],
columns=['Cat_New'], aggfunc=np.sum).reset_index()
cnty_bottle_typ_X_train = cnty_val_X_train.groupby(['County','Stores_Capita']).mean().reset_index().drop('Store_Number'
, axis = 1).fillna(0)
pop_norm_bottle_typ_X_train = cnty_bottle_typ_X_train[['AMERICAN ALCOHOL','BRANDIES','GINS','LIQUEUERS','OTHER',
'RUM','TEQUILA','VODKA','WHISKIES']].divide(cnty_bottle_typ_X_train['Stores_Capita'], axis = 'index')
cnty_val_X_test = test_cnty.loc[(test_cnty.quarter != 1) & (test_cnty.year == 2015)]
cnty_val_X_test = pd.pivot_table(cnty_val_X_test, values='Bottles_Sold', index=['Store_Number','County','Stores_Capita'],
columns=['Cat_New'], aggfunc=np.sum).reset_index()
cnty_bottle_typ_X_test = cnty_val_X_test.groupby(['County','Stores_Capita']).mean().reset_index().drop('Store_Number'
, axis = 1).fillna(0)
pop_norm_bottle_typ_X_test = cnty_bottle_typ_X_test[['AMERICAN ALCOHOL','BRANDIES','GINS','LIQUEUERS','OTHER',
'RUM','TEQUILA','VODKA','WHISKIES']].divide(cnty_bottle_typ_X_test['Stores_Capita'], axis = 'index')
Explanation: The next step is to finalize the features needed to create a model to predict average total store profit by county for a time period, given the average number of bottles sold of each type of liquor by each store in that county during that time period. These counts of bottles sold are normalized by dividing the counts by the number of people in the county.
End of explanation
X_train = pop_norm_bottle_typ_X_train
y_train = Q1_15_y.Sale_Profit
linreg = LinearRegression()
linreg.fit(X_train, y_train)
X_test = pop_norm_bottle_typ_X_test
y_test = Q234_15_y.Sale_Profit
linreg.score(X_test, y_test)
predictions = linreg.predict(X_test)
sns.regplot(predictions, y_test)
plt.title('Predicted Profits vs Actual')
plt.xlabel('Predicted Total Profit')
plt.ylabel('Actual Profit')
plt.show()
Explanation: Now that those features are finalized above, we can start to build a model.
End of explanation
predict_2016 = test_cnty.loc[(test_cnty.quarter == 1) & (test_cnty.year == 2015)]
predict_2016 = pd.pivot_table(predict_2016, values='Bottles_Sold', index=['Store_Number','County','Stores_Capita'],
columns=['Cat_New'], aggfunc=np.sum).reset_index()
predict_2016_X = predict_2016.groupby(['County','Stores_Capita']).mean().reset_index().drop('Store_Number'
, axis = 1).fillna(0)
predict_2016_X_final = predict_2016_X[['AMERICAN ALCOHOL','BRANDIES','GINS','LIQUEUERS','OTHER',
'RUM','TEQUILA','VODKA','WHISKIES']].divide(predict_2016_X['Stores_Capita'], axis = 'index')
pd.DataFrame(zip(np.sort(test_cnty.County.unique()), linreg.predict(predict_2016_X_final)),
columns = ['County', 'Avg_Pred_Profits']).sort_values('Avg_Pred_Profits', ascending = False).head()
Explanation: The model score is less than ideal, but we will continue to predict 2016 sales using it and attempt to determine the best county to open a store based on the profit that can be expected for an average store in the county, after accounting for the number of people living there.
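If you want a rougher sense of how stable that score is, a quick cross-validation sketch (our addition, reusing the old-style sklearn import path already used above; it is not part of the original notebook) would be:
# Optional: cross-validated R^2 of the linear model on the county-level training features.
from sklearn.cross_validation import cross_val_score
cv_scores = cross_val_score(LinearRegression(), X_train, y_train, cv=5)
print(cv_scores.mean(), cv_scores.std())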
End of explanation |
4,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working on new API
The clustergrammer_widget class is now being loaded into the Network class. The class and widget instance are saved in the Network instance, net. This allows us to load data, cluster, and finally produce a new widget instance using the widget method. The instance of the widget is saved in net and can be used to grab the data from the clustergram as a Pandas DataFrame using the widget_df method. The exported DataFrame will reflect any filtering or imported categories that were added on the front end.
In these examples, we will filter the matrix using the brush crop tool, export the filtered matrix as a DataFrame, and finally visualize this as a new clustergram widget.
Step1: Make widget using new API
Step2: Above, we have filtered the matrix to a region of interest using the brush cropping tool. Below we will export this region of interest, defined on the front end, to a DataFrame, df_genes. This demonstrates the two-way communication capabilities of widgets.
Step3: Above, we made a new widget visualizing this region of interest.
Generate random DataFrame
Here we will generate a DataFrame with random data and visualize it using the widget.
Step4: Above, we selected a region of interest using the front-end brush crop tool and exported it to a DataFrame, df_random. Below we will visualize it using a new widget. | Python Code:
import numpy as np
import pandas as pd
from clustergrammer_widget import *
net = Network(clustergrammer_widget)
Explanation: Working on new API
The clustergrammer_widget class is now being loaded into the Network class. The class and widget instance are saved in the Network instance, net. This allows us to load data, cluster, and finally produce a new widget instance using the widget method. The instance of the widget is saved in net and can be used to grab the data from the clustergram as a Pandas DataFrame using the widget_df method. The exported DataFrame will reflect any filtering or imported categories that were added on the front end.
In these examples, we will filter the matrix using the brush crop tool, export the filtered matrix as a DataFrame, and finally visualize this as a new clustergram widget.
End of explanation
net.load_file('rc_two_cats.txt')
net.cluster()
net.widget()
Explanation: Make widget using new API
End of explanation
df_genes = net.widget_df()
df_genes.shape
net.load_df(df_genes)
net.cluster()
net.widget()
Explanation: Above, we have filtered the matrix to a region of interest using the brush cropping tool. Below we will export this region of interest, defined on the front end, to a DataFrame, df_genes. This demonstrates the two-way communication capabilities of widgets.
End of explanation
# generate random matrix
num_rows = 500
num_cols = 10
np.random.seed(seed=100)
mat = np.random.rand(num_rows, num_cols)
# make row and col labels
rows = range(num_rows)
cols = range(num_cols)
rows = [str(i) for i in rows]
cols = [str(i) for i in cols]
# make dataframe
df = pd.DataFrame(data=mat, columns=cols, index=rows)
net.load_df(df)
net.cluster()
net.widget()
Explanation: Above, we made a new widget visualizing this region of interest.
Generate random DataFrame
Here we will generate a DataFrame with random data and visualize it using the widget.
End of explanation
df_random = net.widget_df()
df_random.shape
net.load_df(df_random)
net.cluster()
net.widget()
Explanation: Above, we selected a region of interest using the front-end brush crop tool and exported it to a DataFrame, df_random. Below we will visualize it using a new widget.
End of explanation |
4,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Byte 4
Step1: Custom functions and global variables
Step2: Dataset
Step3: Next, read the two documents describing the dataset (data/ACS2015_PUMS_README.pdf and data/PUMSDataDict15.txt) and get familiar with the different variables. You can use Seaborn to visualize the data. For example, you can start with comparing the number of records in the two classes that we will try to predict (1-MALE, 2-FEMALE). Although SEX contains numerical values, it is categorical. Thus, it is important to treat it as such.
<strong>In classification, the class variable is always categorical!</strong>
Step4: The count plot above shows that the two classes are fairly balanced. This is good because when classes are NOT balanced, we have to make special considerations when training our algorithms.
Feature Engineering and Selection
Next, we look for features that might be predictive of the two sexes. Often real-world datasets are not clean and require a good understanding of the data before we can apply a machine learning algorithm. Also, data often contains features that are in a format that is not predictive of the class. In such cases, training an algorithm on the features without any changes or modification will produce dismal results, and the automated feature selection algorithms will not work.
That is why in this byte we approach the classification as a feature engineering problem. Here we will explore the data set first and use our intuition, assumptions, and existing knowledge to select a few features that will hopefully help us in the prediction task.
We start off by looking at personal income.
Personal Income (PINCP)
We begin by exploring the personal income feature. We hypothesize that females will have lower income than males because of wage inequality (https
Step5: We then compare the distribution of personal income across the two classes
Step6: And the distribution of wage income across the two classes.
Step7: The boxplots show that there is likely a difference in personal income and wages between the two classes (despite a large number of outliers). We also note that the data is likely not normally distributed, which will come into play later. However, there also does not seem to be a clear (linear) separation between the two classes.
We pause our exploration of personal income to take a look at other features.
Age
We look at age because we noted that we will have difficulty classifying children ages 15 and below, so we should probably consider age in our model somehow.
We note and confirm that there should be no missing values, and plot our data again
Step8: Eyeballing the data, we do not see any major differences between the two sexes. Also, note that although there are differences in the life expectancy between the two sexes (https
Step9: Eyeballing the data, it looks like we are correct. However, this feature will only help in a small number of cases when a person is widowed. What else can you see in this chart?
Occupation
Gender differences in occupational distribution among workers persist even if they are voluntary choices (https
Step10: And then we preserve the special value of NaN in this case (less than 16 years old or never worked) and assign it to a special code '00'
Step11: We now look at the difference in occupation across sexes.
Step12: We see that there are still differences in occupation categories between the two sexes (e.g., construction '47' is still dominated by males, while education and administrative occupations are dominated by females).
Revisiting Wage Gap
Now we are ready to look at the wage gap again. Our goal is to capture the wage gap (if it exists). We consider three different ways to do this
Step13: Followed by wages (note the difference between wages and total income)
Step14: Eyeballing the results, we can conclude that females are on average paid less than males across different occupation fields. Here we decide to include the income and wage features as they are and let the model decide the differences between the classes.
Now try finding other features that also highlight gender gaps and may be predictive of sex.
Your Features
<em>This is where you would explore your own features that could potentially improve the classification performance. Use the examples above to come up with new features. You can reuse some of the features above or completely discard them.</em>
Feature Summary
Here we finalize a list of features we created and/or selected in the previous step. Below is a list of features we will use. Remember to add your own features if you selected and/or created any.
Step15: We will now create a new dev data frame containing only the selected features and the class.
Step16: Questions you need to answer
Step17: We prepare our development set by creating 10 folds we will use to evaluate the algorithms.
We now iterate over the classifiers and report the metrics for all of them. Sklearn offers a number of functions to quickly score results of a cross validation, but we would like to make sure that all of the algorithms run on the same folds. So we quickly code our own experiment below. Note that this will take some time to run, so don't rush past it until it stops producing output.
Step18: Selecting the Best Algorithm
Here we analyze the results of our experiment and compare the algorithm scores to find the best algorithm. The summary and box plot below show the accuracy of the best algorithms in the outer fold. See if you can also plot the precision and recall to better understand your results. Do you see anything unusual? Can you explain it?
Step19: From the boxplots we can see that the Decision Tree was the best. That said, the accuracy of the decision tree is still pretty low at .71 (71%).
Running Statistical Tests
We now run a statistical test on the accuracies across different algorithms to ensure that the results above did not happen by chance. We ensured that we trained and evaluated all of the algorithms on the same outer folds, which means that we need to run a pairwise comparison. Also, although eyeballing the boxplot above we could assume that the accuracies came from a normal distribution, we know that accuracy takes values on the interval [0,1], so we choose a non-parametric test instead (Friedman test, https
Step20: The results reject the null hypothesis (because the p values are small, <.005), meaning that we have some confidence there is a repeatable, real difference between the algorithms (even if the accuracy is low).
The Decision Tree classifier is unique in the sense that it is easy to visualize the decisions that it is making. Here we look at the top 3 levels of the best algorithm we trained on the whole development set (if you have Graphviz installed on your machine)
Step21: Questions you need to answer
Step22: Marriage
Step23: Occupation
Step24: Income and Wages
Step25: Train the Best Algorithm
Now we train the best algorithm using the training data set (the pre-processed features) and the specifications about the best algorithm.
Step26: We use the same pipeline as we did in the development stage, except that we only use one set of parameters (in this case max_depth=12).
Step27: Evaluate the Best Algorithm
We now calculate a series of statistics that allow us to gauge how well the algorithm will perform on unseen data.
Step28: ROC Curve
Now we plot the ROC curve that tells us how well our algorithm detected true positives vs. false positives (from http
Step29: Any curve above the blue line means that the algorithm is predicting better than by random chance. However, we would ideally like to have the orange curve as close as possible to the y-axis and ROC curve area to be in the .90's.
Intermediate/Expert Stream Precision/Recall Curve
Step30: The Precision/Recall Curve is not impressive either. Ideally, a rule of thumb tells us that a good model would have a curve that crosses (Precision, Recall) at (90, 80).
We can now take a quick glance at the errors and see if there is anything else we could have done better in hindsight. The code below selects all records where the algorithm was wrong on the test set. What insights can you make from that? (hint
Step31: For example, the example below shows the distribution of different values of the marriage feature. | Python Code:
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
import os
from IPython.display import Image
from IPython.display import display
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import FunctionTransformer
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.dummy import DummyClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn import metrics
from scipy import stats
# Allow modules and files to be loaded with relative paths
from pkg_resources import resource_filename as fpath
#from pkg_resources import ResourceManager
#from pkg_resources import DefaultProvider
#from pkg_resources import resource_string as fpath
import sys
from IPython.display import HTML
Explanation: Byte 4: Machine Learning
In this byte you will use machine learning to understand a data set. We will explore the differences in demographics data between males and females. Your goal is to use a combination of exploratory analysis and machine learning to train a model that best predicts the sex of a person (as defined in the US census data) based on their demographics data collected in an annual US census dataset. You will discuss the implications of being able to predict a person's sex and what that means in terms of differences between the sexes (including any biases and discriminations). We will feature models that best predict the sexes in class.
This assignment has two difficulties:
beginning programmer, in which you will simply execute the provided code, make small, guided modifications, develop your own features, and interpret the results
intermediate to expert programmer, in which you will independently modify the code to add and optimize an algorithm of your choosing
Import and Configure Required Libraries
We begin by importing libraries you will use in this byte. We will use Pandas to pre-process and clean the data, Seaborn to visualize the data and perform exploratory analysis, scipy for statistical analysis, and scikit learn (a Python ML library) to train a model that predicts the sex of a person.
End of explanation
# set up path
#sys.path.append(/usr/local/www/data/)
if not "" in sys.path:
sys.path.append("") #fpath(__name__, "")"")
userpath = os.path.expanduser('~')
print(userpath)
if not userpath in sys.path:
sys.path.append(userpath)
sys.path.append(userpath+"/idata")
sys.path.append(userpath+"/idata/data")
#sys.path.append(fpath(__name__, "/"+userpath))
#sys.path.append(fpath(__name__, "/"+userpath+"/idata"))
#sys.path.append(fpath(__name__, "/"+userpath+"/idata/data"))
# We don't execute everything unless we're viewing in a notebook
IN_JUPYTER = 'get_ipython' in globals() and get_ipython().__class__.__name__ == "ZMQInteractiveShell"
IN_DEBUG = True
OPTIMIZING = False
# Set up fonts
if IN_DEBUG:
print ([f.name for f in mpl.font_manager.fontManager.ttflist])
from matplotlib import rcParams
rcParams['font.family'] = 'DejaVu Sans Mono'
print(__name__)
print(sys.path)
print(fpath(__name__,"dev.csv.zip"))
if IN_JUPYTER:
#print("In Jupyter")
%matplotlib inline
sns.set_style("whitegrid")
def load_csv_file(file):
if IN_JUPYTER:
filename = userpath = os.path.expanduser('~')+"/idata/data/" + file
else:
filename = fpath(__name__, file)
#filename = ResourceManager.resource_filename(DefaultProvider, file)
#filename = fpath(__name__, file)
print("found file")
print(filename)
return pd.read_csv(filename)
# Discretizes the given columns into bins. Note it changes the order of columns (the binned columns come first).
def digitize(x, bins=[], cols=[]):
mask = np.ones(x.shape[1], np.bool)
mask[cols] = 0
return np.hstack((np.apply_along_axis(np.digitize, 1, x[:,cols], bins=bins), x[:,mask]))
Explanation: Custom functions and global variables:
End of explanation
if IN_JUPYTER and OPTIMIZING:
dev = load_csv_file('dev.csv.zip')
Explanation: Dataset: American Community Survey 2015
In this byte we will use data from the 2015 American Community Survey which includes random sample of 3 million people that live in the US. You can find detailed information about the dataset in data/ACS2015_PUMS_README.pdf and a complete data dictionary in data/PUMSDataDict15.txt. We combined the data from the housing and personal files for you, so that each record in the files contains a person record together with their housing record. We also included only persons ages 18 and up. Also note that we have removed some of the variables from the data set.
In this assignment you will experience a machine learning method for selecting and optimizing an algorithm that prevents overfitting and biasing your algorithm on your training data. This helps improve the ability of your model to predict unseen data.
This methodology calls for splitting the data into three different datasets:
dev (10% of the data) - optimization set we use for exploratory analysis, feature selection, and algorithm selection and optimization,
train (60% of the data) - training data set we use for training our optimized algorithm, and
test (30% of the data) - test data set for estimating the final algorithm performance.
At the end of this process you would combine all three datasets back into a single dataset and train your final model on the whole data.
We already split the data for you, although this can easily be done by loading the whole dataset into a Pandas dataframe and using scikit-learn to split it into different parts using the train_test_split() function from the sklearn.model_selection package. The files are:
data/data.csv.zip - whole dataset
data/dev.csv.zip - development set
data/train.csv.zip - training set
data/test.csv.zip - testing set
Note that we have also taken a part of the original data out that you will not have access to. This holdout set will simulate unseen data for you and we will use it to test your final algorithm.
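For reference, a split with roughly these proportions could be produced like this (a sketch only; the files above were already prepared for you, and the exact seed and procedure are not specified here):
data = pd.read_csv('data/data.csv.zip')
dev, rest = train_test_split(data, test_size=0.9, random_state=0)      # keep 10% for dev
train, test = train_test_split(rest, test_size=1.0/3, random_state=0)  # 60% train / 30% test of the original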
Exploratory Analysis and Feature Selection
In this section you will use the development set to explore the data and select your features using a combination of your intuition and statistical methods. Then you will perform a preliminary algorithm selection.
We begin by loading the development set from the data directory. Note that the file is a compressed, comma-separated values file. We load it into a Pandas dataframe:
End of explanation
rcParams['font.family'] = 'DejaVu Sans Mono'
if IN_JUPYTER and OPTIMIZING:
sns.countplot(x="SEX", data=dev)
Explanation: Next, read the two documents describing the dataset (data/ACS2015_PUMS_README.pdf and data/PUMSDataDict15.txt) and get familiar with the different variables. You can use Seaborn to visualize the data. For example, you can start with comparing the number of records in the two classes that we will try to predict (1-MALE, 2-FEMALE). Although SEX contains numerical values, it is categorical. Thus, it is important to treat it as such.
<strong>In classification, the class variable is always categorical!</strong>
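One optional way to make this explicit in pandas (our illustration; the rest of the byte does not depend on it):
dev['SEX'] = dev['SEX'].astype('category')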
End of explanation
if IN_JUPYTER and OPTIMIZING:
dev.loc[dev['PINCP'].isnull(),'PINCP'] = 0
dev.loc[dev['WAGP'].isnull(),'WAGP'] = 0
Explanation: The count plot above shows that the two classes are fairly balanced. This is good because when classes are NOT balanced, we have to make special considerations when training our algorithms.
Feature Engineering and Selection
Next, we look for features that might be predictive of the two sexes. Often real-world datasets are not clean and require a good understanding of the data before we can apply a machine learning algorithm. Also, data often contains features that are in a format that is not predictive of the class. In such cases, training an algorithm on the features without any changes or modification will produce dismal results, and the automated feature selection algorithms will not work.
That is why in this byte we approach the classification as a feature engineering problem. Here we will explore the data set first and use our intuition, assumptions, and existing knowledge to select a few features that will hopefully help us in the prediction task.
We start off by looking at personal income.
Personal Income (PINCP)
We begin by exploring the personal income feature. We hypothesize that females will have lower income than males because of wage inequality (https://en.wikipedia.org/wiki/Gender_pay_gap_in_the_United_States).
Dataset documentation tells us that the feature contains NaN values, but that those values mean that the person is 15 years or younger. That means that the child had no income (the parent or guardian would get the income in this case). That is fine in our case because we only consider adults, ages 18 and up.
We still impute any possible missing values by replacing them with an income of 0.
End of explanation
if IN_JUPYTER and OPTIMIZING:
sns.boxplot(data=dev, x='SEX', y='PINCP')
Explanation: We then compare the distribution of personal income across the two classes:
End of explanation
if IN_JUPYTER and OPTIMIZING:
sns.boxplot(data=dev, x='SEX', y='WAGP')
Explanation: And the distribution of wage income across the two classes.
End of explanation
if IN_JUPYTER and OPTIMIZING:
len(dev[dev['AGEP'].isnull()])
if IN_JUPYTER and OPTIMIZING:
sns.boxplot(data=dev, x='SEX', y='AGEP')
Explanation: The boxplots show that there is likely a difference in personal income and wages between the two classes (despite a large number of outliers). We also note that the data is likely not normally distributed, which will come into play later. However, there also does not seem to be a clear (linear) separation between the two classes.
We pause our exploration of personal income to take a look at other features.
Age
We look at age because we noted that we will have difficulty classifying children ages 15 and below, so we should probably consider age in our model somehow.
We note and confirm that there should be no missing values, and plot our data again:
End of explanation
if IN_JUPYTER and OPTIMIZING:
len(dev[dev['MAR'].isnull()])
if IN_JUPYTER and OPTIMIZING:
sns.countplot(data=dev, x='MAR', hue='SEX')
Explanation: Eyeballing the data, we do not see any major differences between the two sexes. Also, note that although there are differences in the life expectancy between the two sexes (https://en.wikipedia.org/wiki/List_of_countries_by_life_expectancy), the data refers to the person's current age, and not their projected life expectancy. We choose not to include age as it is right now.
Marital Status
The age discussion above brings up an interesting point: females have higher life expectancy. Thus, we would expect that there would be more widowed females than males. Thus, we search the dataset for a feature that indicates if a person is widowed or not: marital status.
However, unlike the previous two features, this feature is categorical although in the data set it is encoded as a number. <strong>You always have to ensure that features have the right type!</strong>
Because it is categorical, we use count plot to look at it:
End of explanation
if IN_JUPYTER and OPTIMIZING:
dev['SCOP_REDUCED'] = pd.to_numeric(dev[dev['SOCP'].notnull()]['SOCP'].str.slice(start=0, stop=2))
Explanation: Eyeballing the data, it looks like we are correct. However, this feature will only help in a small number of cases when a person is widowed. What else can you see in this chart?
Occupation
Gender differences in occupational distribution among workers persist even if they are voluntary choices (https://www.bls.gov/opub/mlr/2007/06/art2full.pdf). Thus, we explore each person's occupation as a potential feature.
However, not only is this feature categorical, documentation reveals that there is also a large number of possible values for this feature. This often significantly degrades machine learning algorithm performance because there are usually not enough examples for each value to make accurate inference.
Since the first two digits of the occupation code represent an occupation class, we can reduce the number of values by grouping everything with the same starting digits together.
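For example, all detailed codes starting with '47' (construction and extraction occupations) collapse into the single group 47. A quick optional check of how much this shrinks the feature (our addition; not in the original byte):
print(dev['SOCP'].nunique())  # hundreds of detailed occupation codes before grouping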
We preserve the old feature for reference, and add a new one. We first convert all values that are not null:
End of explanation
if IN_JUPYTER and OPTIMIZING:
dev.loc[dev['SCOP_REDUCED'].isnull(), 'SCOP_REDUCED'] = 0
Explanation: And then we preserve the special value of NaN in this case (less than 16 years old or never worked) and assign it to a special code '00':
End of explanation
if IN_JUPYTER and OPTIMIZING:
sns.countplot(data=dev, x='SCOP_REDUCED', hue='SEX')
Explanation: We now look at the difference in occupation across sexes.
End of explanation
if IN_JUPYTER and OPTIMIZING:
sns.factorplot(data=dev[['SCOP_REDUCED', 'SEX', 'PINCP']], x='SCOP_REDUCED', y='PINCP', hue='SEX', kind='box', size=7, aspect=1.5)
Explanation: We see that there are still differences in occupation categories between the two sexes (e.g., construction '47' is still dominated by males, while education and administrative occupations are dominated by females).
Revisiting Wage Gap
Now we are ready to look at the wage gap again. Our goal is to capture the wage gap (if it exists). We consider three different ways to do this:
We could look at the income proportion of the person compared to the total family income.
We can compare how far the person's income is from the median or mean salary of males and females.
We can compare how far the person's income is from the median or mean salary of males and females in their occupation.
The following figure shows personal income by occupation and gender as comparative boxplots (option 3); a quick sketch of option 2 is included after this cell. Can you make a plot for option 1 (gender vs family income) or option 2 (gender vs personal income)? Can you plot the same things for wage instead of income? Ask yourself which of these plots is most informative and why?
End of explanation
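A minimal sketch of one possible reading of option 2 follows: the distance of each person's income from the male median income. The DIST_MALE_MEDIAN column name is chosen only for this sketch and is not used elsewhere in the notebook.
if IN_JUPYTER and OPTIMIZING:
    # option 2 (one reading): distance of each person's income from the male median income
    # PUMS coding: 1 = MALE, 2 = FEMALE (see the label encoder comment further below)
    male_median = dev.loc[dev['SEX'] == 1, 'PINCP'].median()
    dev['DIST_MALE_MEDIAN'] = dev['PINCP'] - male_median
    sns.boxplot(data=dev, x='SEX', y='DIST_MALE_MEDIAN')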
if IN_JUPYTER and OPTIMIZING:
sns.factorplot(data=dev[['SCOP_REDUCED', 'SEX', 'WAGP']], x='SCOP_REDUCED', y='WAGP', hue='SEX', kind='box', size=7, aspect=1.5)
Explanation: Followed by wages (note the difference between wages and total income):
End of explanation
# Modify this cell to add more features if any.
select_features = ['PINCP', 'WAGP', 'MAR', 'SCOP_REDUCED']
categorical_features = ['MAR', 'SCOP_REDUCED']
# Used for specifying which features to bin in $20,000 increments.
# Note that if you have features you would like to bin in a custom way, then you will have to modify the Naive Bayes
# classifier below.
monetary_features = ['PINCP', 'WAGP']
Explanation: Eyeballing the results, we can conclude that females are on average paid less than males across different occupation fields. Here we decide to include the income and wage features as they are and let the model decide the differences between the classes.
Now try finding other features that also highlight gender gaps and may be predictive of sex.
Your Features
<em>This is where you would explore your own features that could potentially improve the classification performance. Use the examples above to come up with new features. You can reuse some of the features above or completely discard them.</em>
Feature Summary
Here we finalize a list of features we created and/or selected in the previous step. Below is a list of features we will use. Remember to add your own features if you selected and/or created any.
End of explanation
if IN_JUPYTER and OPTIMIZING:
select_dev = dev[select_features + ['SEX']]
Explanation: We will now create a new dev data frame containing only the selected features and the class.
End of explanation
classifiers = {}
classifier_parameters = {}
# Zero R
# This classifier does not require any additional preprocessing of data.
classifiers['ZeroR'] = DummyClassifier(strategy='prior')
# Binomial NB classifier
# This classifier requires that all features are in binary form.
# We can easily transform categorical data into binary form, but we first have to discretize the continuous variables.
classifiers['Naive Bayes'] = Pipeline([
('discretize', FunctionTransformer(func=digitize, kw_args={'bins':np.array([0.0, 20000.0, 40000.0, 80000.0, 100000.0]), 'cols':pd.Series(select_features).isin(monetary_features)})),
('tranform', OneHotEncoder(categorical_features='all')),
('clf', BernoulliNB())])
# Decision Tree classifier
# This classifier can work on continuous features and can find a good separation point on its own.
# We still have to convert categorical data to binary format.
classifiers['Decision Tree'] = Pipeline([('tranform', OneHotEncoder(categorical_features=pd.Series(select_features).isin(categorical_features))), ('clf', DecisionTreeClassifier())])
# Maximum Depth for a decision tree controls how many levels deep the tree will go before it stops.
# More levels means less generalizability, but fewer levels means less predictive power.
classifier_parameters['Decision Tree'] = {'clf__max_depth':(1, 3, 9, 12)}
# Create a label encoder to transform 1-MALE, 2-FEMALE into classes that sklearn can use (0 and 1).
le = LabelEncoder()
Explanation: Questions you need to answer:
<strong>Both paths:</strong> <em>Summarize your findings about different features you selected, including insights about why they should help you predict/classify males and females.</em>
Algorithm Selection and Optimization
In this section we will use the features we selected above to train and evaluate different machine learning algorithms. Often it is not immediately clear which algorithm will perform best on the dataset. Even if we are certain that an algorithm will do well, we need to compare it with a baseline algorithm (e.g., Zero R, which always selects the majority class) to make sure that we are improving on the status quo.
We will compare a few algorithms to find out which one is most promising. We perform this selection on the development set so that we do not overfit on the training data (which would hurt the algorithm's performance on unseen data). Because our development set is comparatively small, we will use cross-validation to evaluate our algorithms. However, because we also want to optimize the algorithms we are comparing (to ensure we are selecting the best configuration) we will use what we call inner-outer 10-fold cross-validation.
In the inner folds we will optimize each algorithm and pick its best configuration, and in the outer folds we will compare the best optimized algorithms.
In most cases we want an algorithm with high accuracy. This metric is a decent indicator of performance when classifying balanced classes (as is our case). However, sometimes it is even more important to consider the impact of errors on the performance (precision and recall) and the general quality of the fit (kappa statistic).
We begin by defining a set of algorithms that we will compare. We chose 3 algorithms:
Zero R, which always picks the majority class. This is our baseline.
Naive Bayes, which is a fast algorithm based on Bayes' theorem, but with a naive assumption about independence of features given the class (http://scikit-learn.org/stable/modules/naive_bayes.html)
Decision Tree, which is a non-parametric supervised learning method used for classification that predicts the value of a target variable by learning simple decision rules inferred from the data features (copied from http://scikit-learn.org/stable/modules/tree.html).
<strong>If you are in the intermediate/expert path you need to pick your own algorithm to add to the race!</strong> If you are unsure where to start, you can use this chart to help you pick an algorithm: http://scikit-learn.org/stable/tutorial/machine_learning_map/. Then add specifications for your algorithm below (use the existing examples on how to create your own pipeline for the algorithm):
End of explanation
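As an illustration for the intermediate/expert path, one possible extra entry for the race is sketched below. It reuses the same one-hot encoding step as the Decision Tree pipeline; the RandomForestClassifier import and the n_estimators grid are assumptions and not part of the original notebook.
from sklearn.ensemble import RandomForestClassifier

# Random Forest classifier (example of adding your own algorithm to the race)
# Like the Decision Tree, it handles continuous features directly; categorical data is one-hot encoded.
classifiers['Random Forest'] = Pipeline([
    ('transform', OneHotEncoder(categorical_features=pd.Series(select_features).isin(categorical_features))),
    ('clf', RandomForestClassifier())])
# Number of trees to try in the inner cross-validation; more trees cost time but usually stabilize accuracy.
classifier_parameters['Random Forest'] = {'clf__n_estimators': (10, 50, 100)}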
if IN_JUPYTER and OPTIMIZING:
# Split features and class into two dataframes.
X_dev = select_dev.ix[:, select_dev.columns != 'SEX'].values
y_dev = le.fit_transform(select_dev['SEX'].values)
kf = KFold(n_splits=10, shuffle=True)
# Initialize scores dict
scores = pd.DataFrame(columns=['fold', 'algorithm', 'parameters', 'accuracy', 'precision', 'recall'])
# For each fold run the classifier (outer CV).
fold = 0
for train_index, test_index in kf.split(X_dev):
X_train, X_test = X_dev[train_index], X_dev[test_index]
y_train, y_test = y_dev[train_index], y_dev[test_index]
fold = fold + 1
# Iterate over classifiers
for name, clf in classifiers.items():
# If the classifier has parameters, then run inner CV.
# Luckily sklearn provides a quick method to do this.
if name in classifier_parameters:
gs = GridSearchCV(estimator=clf, param_grid=classifier_parameters[name])
gs.fit(X_train, y_train)
y_pred = gs.predict(X_test)
best_params = str(gs.best_params_)
else:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
best_params = 'default'
scores = scores.append(pd.DataFrame(data={'fold':[fold],
'algorithm':[name],
'parameters':[best_params],
'accuracy':[accuracy_score(y_test, y_pred)],
'precision':[precision_score(y_test, y_pred)],
'recall':[recall_score(y_test, y_pred)]}),
ignore_index=True)
Explanation: We prepare our development set by creating the 10 folds we will use to evaluate the algorithms.
We now iterate over the classifiers and report the metrics for all of them. Sklearn offers a number of functions to quickly score the results of a cross-validation, but we would like to make sure that all of the algorithms run on the same folds. So we quickly code our own experiment below. Note that this will take some time to run, so don't rush past it until it stops producing output.
End of explanation
if IN_JUPYTER and OPTIMIZING:
scores[['algorithm', 'accuracy', 'precision', 'recall']].groupby(['algorithm']).median()
if IN_JUPYTER and OPTIMIZING:
sns.boxplot(data=scores, x='algorithm', y='accuracy')
Explanation: Selecting the Best Algorithm
Here we analyze the results of our experiment and compare the algorithm scores to find the best algorithm. The summary and box plot below show the accuracy of each algorithm across the outer folds. See if you can also plot the precision and recall to better understand your results (a small sketch is included below). Do you see anything unusual? Can you explain it?
End of explanation
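A minimal sketch of the suggested precision and recall view, reshaping the scores frame collected in the experiment above; long_scores is just a temporary variable introduced here.
if IN_JUPYTER and OPTIMIZING:
    # reshape the per-fold scores so precision and recall can share one grouped boxplot
    long_scores = pd.melt(scores, id_vars=['algorithm'],
                          value_vars=['precision', 'recall'],
                          var_name='metric', value_name='score')
    sns.boxplot(data=long_scores, x='algorithm', y='score', hue='metric')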
if IN_JUPYTER and OPTIMIZING:
matrix = scores.pivot(index='fold', columns='algorithm', values='accuracy').as_matrix()
stats.friedmanchisquare(matrix[:,0], matrix[:,1], matrix[:,2])
if IN_JUPYTER and OPTIMIZING:
for i in range(np.shape(matrix)[1]):
for j in range(i+1, np.shape(matrix)[1]):
print(stats.wilcoxon(matrix[:,i], matrix[:,j], correction=True))
Explanation: From the boxplots we can see that the Decision Tree was the best. That said, the accuracy of the decision tree is still pretty low at .71 (71%).
Running Statistical Tests
We now run a statistical test on the accuracies of the different algorithms to ensure that the results above did not happen by chance. We ensured that we trained and evaluated all of the algorithms on the same outer folds, which means that we need to run a paired comparison. Also, although by eyeballing the boxplot above we could assume that the accuracies come from a normal distribution, we know that accuracy takes values on the interval [0, 1], so we choose a non-parametric test instead (Friedman test, https://en.wikipedia.org/wiki/Friedman_test). We will perform post-hoc pairwise comparisons using a Wilcoxon test.
Our null hypothesis is that there is no difference between the algorithms.
End of explanation
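As an aside, because several pairwise Wilcoxon tests are run, a simple Bonferroni adjustment judges each p-value against alpha divided by the number of comparisons; the alpha value below is an assumption, not part of the original analysis.
if IN_JUPYTER and OPTIMIZING:
    alpha = 0.05
    n_algorithms = np.shape(matrix)[1]
    n_comparisons = n_algorithms * (n_algorithms - 1) // 2   # number of pairwise tests
    for i in range(n_algorithms):
        for j in range(i + 1, n_algorithms):
            stat, p = stats.wilcoxon(matrix[:, i], matrix[:, j], correction=True)
            print(i, j, p, 'significant' if p < alpha / n_comparisons else 'not significant')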
if IN_JUPYTER and OPTIMIZING:
features = select_dev.columns.tolist()
features = features[1:len(features)-1]
le = LabelEncoder()
# Split features and class into two dataframes.
X_dev = select_dev.ix[:, select_dev.columns != 'SEX']
y_dev = le.fit_transform(select_dev['SEX'].values)
X_dev_long = pd.get_dummies(data=X_dev, columns=categorical_features)
clf = DecisionTreeClassifier(max_depth=3)
clf.fit(X_dev_long, y_dev)
import pydotplus
if IN_JUPYTER and OPTIMIZING:
dot_data = export_graphviz(clf,
out_file=None,
feature_names=X_dev_long.columns,
class_names=['male', 'female'],
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
i = Image(graph.create_png())
display(i)
Explanation: The results reject the null hypothesis (because the p-values are small, <.005), meaning that we have some confidence that there is a repeatable, real difference between the algorithms (even if the accuracy is low).
The Decision Tree classifier is unique in the sense that it is easy to visualize the decisions it is making. Here we look at the top 3 levels of the best algorithm, trained on the whole development set (if you have Graphviz installed on your machine):
End of explanation
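A complementary view that does not require Graphviz, sketched below: the ranked feature importances of the same fitted tree.
if IN_JUPYTER and OPTIMIZING:
    # importance of each (one-hot encoded) feature in the depth-3 tree fitted above
    importances = pd.Series(clf.feature_importances_, index=X_dev_long.columns)
    print(importances.sort_values(ascending=False).head(10))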
train = load_csv_file('train01.csv.gz')
# note: the result has to be assigned back; a bare np.append would silently discard the loaded frames
train = pd.concat([train, load_csv_file('train02.csv.gz')])
train = pd.concat([train, load_csv_file('train03.csv.gz')])
# train = pd.concat([train, load_csv_file('train04.csv.gz')])
# train = pd.concat([train, load_csv_file('train05.csv.gz')])
# train = pd.concat([train, load_csv_file('train06.csv.gz')])
# train = pd.concat([train, load_csv_file('train07.csv.gz')])
# train = pd.concat([train, load_csv_file('train08.csv.gz')])
# train = pd.concat([train, load_csv_file('train09.csv.gz')])
# train = pd.concat([train, load_csv_file('train10.csv.gz')])
# train = pd.concat([train, load_csv_file('train11.csv.gz')])
# this can be repeated for all 11 training files, but fails if you load them all
# if you are using the smallest machine (only 3.75 GB of RAM) on Google Compute
test = load_csv_file('test01.csv.gz')
test = pd.concat([test, load_csv_file('test02.csv.gz')])
# test = pd.concat([test, load_csv_file('test03.csv.gz')])
# test = pd.concat([test, load_csv_file('test04.csv.gz')])
# test = pd.concat([test, load_csv_file('test05.csv.gz')])
# test = pd.concat([test, load_csv_file('test06.csv.gz')])
# this can be repeated for all 6 test files, but fails if you load them all
# if you are using the smallest machine (only 3.75 GB of RAM) on Google Compute
Explanation: Questions you need to answer:
What do the results of the tests mean? What can we conclude from the analysis? Is the best algorithm making reasonable decisions? What kind of errors is the best algorithm making, and can we improve somehow?
Training a Machine Learning Algorithm
In this section, we will use the insights from our exploratory analysis and optimization to train and test our final algorithm. As an illustration, we will use a DecisionTreeClassifier with maximum depth of 12 because this algorithm performed the best in the development phase. We are going to use the training data set to train the algorithm and the test data set to test it. In this phase we should have enough data to accurately estimate the performance of the algorithm, so we do not need to use cross validation.
<strong>If you are in the intermediate/expert stream: </strong> if you changed or added any features, or if your algorithm performed the best in the development stage, then make sure you make the appropriate changes to both the features and the algorithm here.
Data pre-processing
Here we load and pre-process the data in exactly the same way as we did above (which means that if you make any changes to features or algorithms above, you will need to copy them down here too). You can try plotting this data to see if it looks about the same as our dev set (it should).
End of explanation
# Ensure there are no NaN values.
print(len(train[train['MAR'].isnull()]))
print(len(test[test['MAR'].isnull()]))
if IN_JUPYTER:
sns.countplot(data=train, x='MAR', hue='SEX')
Explanation: Marriage
End of explanation
# Reduce and make sure no NaN.
train['SCOP_REDUCED'] = pd.to_numeric(train[train['SOCP'].notnull()]['SOCP'].str.slice(start=0, stop=2))
train.loc[train['SCOP_REDUCED'].isnull(), 'SCOP_REDUCED'] = 0
test['SCOP_REDUCED'] = pd.to_numeric(test[test['SOCP'].notnull()]['SOCP'].str.slice(start=0, stop=2))
test.loc[test['SCOP_REDUCED'].isnull(), 'SCOP_REDUCED'] = 0
if IN_JUPYTER:
sns.countplot(data=train, x='SCOP_REDUCED', hue='SEX')
Explanation: Occupation
End of explanation
train.loc[train['PINCP'].isnull(),'PINCP'] = 0
train.loc[train['WAGP'].isnull(),'WAGP'] = 0
test.loc[test['PINCP'].isnull(),'PINCP'] = 0
test.loc[test['WAGP'].isnull(),'WAGP'] = 0
sns.factorplot(data=train[['SCOP_REDUCED', 'SEX', 'WAGP']], x='SCOP_REDUCED', y='WAGP', hue='SEX', kind='box', size=7, aspect=1.5)
Explanation: Income and Wages
End of explanation
select_train = train[select_features + ['SEX']]
select_test = test[select_features + ['SEX']]
Explanation: Train the Best Algorithm
Now we train the best algorithm using the training data set (the pre-processed features) and the specifications about the best algorithm.
End of explanation
# Decision Tree classifier
# This classifier can work on continuous features and can find a good separation point on its own.
# We still have to convert categorical data to binary format.
best_clf = Pipeline([('tranform', OneHotEncoder(categorical_features=pd.Series(select_features).isin(categorical_features))), ('clf', DecisionTreeClassifier(max_depth=12))])
if IN_JUPYTER:
# Split features and class into two dataframes.
X_train = select_train.ix[:, select_train.columns != 'SEX'].values
y_train = le.fit_transform(select_train['SEX'].values)
X_test = select_test.ix[:, select_test.columns != 'SEX'].values
y_test = le.fit_transform(select_test['SEX'].values)
best_clf.fit(X_train, y_train)
y_pred = best_clf.predict(X_test)
y_score = best_clf.predict_proba(X_test)
else:
# to train a final classifier for use in a website, we want to combine the data
# into the largest data set possible
    # combine the train and test splits into one DataFrame (the result must be assigned)
    all_data = pd.concat([select_train, select_test])
X_all_data = all_data.ix[:, all_data.columns != 'SEX'].values
y_all_data = le.fit_transform(all_data['SEX'].values)
    best_clf.fit(X_all_data, y_all_data)
Explanation: We use the same pipeline as we did in the development stage, except that we only use one set of parameters (in this case max_depth=12).
End of explanation
if IN_JUPYTER:
print('Accuracy: ' + str(metrics.accuracy_score(y_test, y_pred)))
print(metrics.classification_report(y_test, y_pred))
Explanation: Evaluate the Best Algorithm
We now calculate a series of statistics that allow us to gauge how well the algorithm will perform on unseen data.
End of explanation
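A small addition, sketched below: the raw confusion matrix behind the report above, with rows as true classes and columns as predicted classes in label-encoder order.
if IN_JUPYTER:
    print(le.classes_)   # class order used by the label encoder
    print(metrics.confusion_matrix(y_test, y_pred))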
# Compute ROC curve and ROC area for each class
fpr, tpr, _ = metrics.roc_curve(y_test, y_score[:,1])
roc_auc = metrics.auc(fpr, tpr)
if IN_JUPYTER:
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
Explanation: ROC Curve
Now we plot the ROC curve that tells us how well our algorithm detected true positives vs. false positives (from http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html).
End of explanation
if IN_JUPYTER:
precision, recall, thresholds = metrics.precision_recall_curve(y_test, y_score[:,1])
plt.clf()
plt.plot(recall, precision, label='Precision-Recall curve')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.show()
Explanation: Any curve above the blue line means that the algorithm is predicting better than random chance. However, we would ideally like the orange curve to hug the top-left corner, with the area under the ROC curve in the .90s.
Intermediate/Expert Stream Precision/Recall Curve
End of explanation
if IN_JUPYTER and OPTIMIZING:
select_test = dev[select_features + ['SEX']]
X_test = select_test.ix[:, select_test.columns != 'SEX'].values
y_test = le.fit_transform(select_test['SEX'].values)
y_pred = best_clf.predict(X_test)
y_score = best_clf.predict_proba(X_test)
dev_wrong = dev[y_pred != y_test]
Explanation: The Precision/Recall Curve is not impressive either. A common rule of thumb is that a good model has a curve that passes through (precision, recall) of roughly (0.90, 0.80).
We can now take a quick glance at the errors and see if there is anything else we could have done better in hindsight. The code below selects all records where the algorithm was wrong. What insights can you draw from that? (Hint: use the same feature visualization techniques we used when selecting features.) Note that we should do this with the dev set, so we quickly evaluate on it before visualizing. Ideally we would use a classifier also trained on the dev set, instead of best_clf. This is left as an exercise to the reader.
<strong>Intermediate/expert stream:</strong> Use the precision/recall and ROC curves to evaluate your feature engineering and algorithm selection, not just NSHT (but remember to focus on your dev set). Has your algorithm come close to .9 under the curve? Can you design a model (using your dev set) that has high performance on this ROC and precision/recall curves? How can you use explorations of errors such as the one illustrated below to improve your algorithms?
End of explanation
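Another quick way to explore the errors, reusing the dev_wrong frame created above, is sketched here: the income distribution of the misclassified records.
if IN_JUPYTER and OPTIMIZING:
    sns.boxplot(data=dev_wrong, x='SEX', y='PINCP')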
if IN_JUPYTER and OPTIMIZING:
sns.countplot(data=test_wrong, x='MAR', hue='SEX')
Explanation: For example, the plot below shows the distribution of the marital status (MAR) feature among the misclassified records.
End of explanation |
4,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning curve
Table of contents
Data preprocessing
Fitting random forest
Feature importance
Step1: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
Step2: Produce 10-fold CV learning curve | Python Code:
import sys
sys.path.append('/home/jbourbeau/cr-composition')
print('Added to PYTHONPATH')
import argparse
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn.apionly as sns
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, learning_curve
import composition as comp
import composition.analysis.plotting as plotting
# Plotting-related
sns.set_palette('muted')
sns.set_color_codes()
color_dict = {'P': 'b', 'He': 'g', 'Fe': 'm', 'O': 'r'}
%matplotlib inline
Explanation: Learning curve
Table of contents
Data preprocessing
Fitting random forest
Feature importance
End of explanation
df, cut_dict = comp.load_sim(return_cut_dict=True)
selection_mask = np.array([True] * len(df))
standard_cut_keys = ['lap_reco_success', 'lap_zenith', 'num_hits_1_30', 'IT_signal',
'max_qfrac_1_30', 'lap_containment', 'energy_range_lap']
for key in standard_cut_keys:
selection_mask *= cut_dict[key]
df = df[selection_mask]
feature_list, feature_labels = comp.get_training_features()
print('training features = {}'.format(feature_list))
X_train, X_test, y_train, y_test, le = comp.get_train_test_sets(
df, feature_list, train_he=True, test_he=True)
print('number training events = ' + str(y_train.shape[0]))
Explanation: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
End of explanation
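Before producing the learning curve, a quick sanity check is sketched below: the 10-fold cross-validated accuracy of the same pipeline on the full training set, which should agree with the right-hand end of the validation curve.
cv_scores = cross_val_score(comp.get_pipeline('RF'), X_train, y_train, cv=10, n_jobs=20)
print('10-fold CV accuracy = {:.3f} +/- {:.3f}'.format(cv_scores.mean(), cv_scores.std()))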
pipeline = comp.get_pipeline('RF')
train_sizes, train_scores, test_scores =\
learning_curve(estimator=pipeline,
X=X_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=20,
verbose=3)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='b', linestyle='-',
marker='o', markersize=5,
label='training accuracy')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='g', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.title('RF Classifier')
plt.legend()
# plt.ylim([0.8, 1.0])
plt.tight_layout()
plt.show()
Explanation: Produce 10-fold CV learning curve
End of explanation |
4,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../static/images/joinnode.png" width="240">
JoinNode
A JoinNode has the opposite effect of a MapNode or iterables. Where those split up the execution workflow into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out JoinNode, synchronize and itersource from the main homepage.
Simple example
Let's consider the very simple example depicted at the top of this page
Step1: As you can see, setting up a JoinNode is rather simple. The only differences from a normal Node are the joinsource and the joinfield arguments: joinsource specifies which node the information to join comes from, and joinfield specifies the input field of the JoinNode that receives it.
More realistic example
Let's consider another example where we have one node that iterates over 3 different numbers and another node that joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the Function interface to do something with those numbers, before we spit them out again.
Step2: Now, let's look at the input and output of the joinnode | Python Code:
from nipype import Node, JoinNode, Workflow
# Specify fake input node A
a = Node(interface=A(), name="a")
# Iterate over fake node B's input 'in_file'
b = Node(interface=B(), name="b")
b.iterables = ('in_file', [file1, file2])
# Pass results on to fake node C
c = Node(interface=C(), name="c")
# Join forked execution workflow in fake node D
d = JoinNode(interface=D(),
joinsource="b",
joinfield="in_files",
name="d")
# Put everything into a workflow as usual
workflow = Workflow(name="workflow")
workflow.connect([(a, b, [('subject', 'subject')]),
                  (b, c, [('out_file', 'in_file')]),
(c, d, [('out_file', 'in_files')])
])
Explanation: <img src="../static/images/joinnode.png" width="240">
JoinNode
A JoinNode has the opposite effect of a MapNode or iterables. Where those split up the execution workflow into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out JoinNode, synchronize and itersource from the main homepage.
Simple example
Let's consider the very simple example depicted at the top of this page:
End of explanation
from nipype import JoinNode, Node, Workflow
from nipype.interfaces.utility import Function, IdentityInterface
# Create iteration node
from nipype import IdentityInterface
iternode = Node(IdentityInterface(fields=['number_id']),
name="iternode")
iternode.iterables = [('number_id', [1, 4, 9])]
# Create join node - compute square root for each element in the joined list
def compute_sqrt(numbers):
from math import sqrt
return [sqrt(e) for e in numbers]
joinnode = JoinNode(Function(input_names=['numbers'],
output_names=['sqrts'],
function=compute_sqrt),
name='joinnode',
joinsource='iternode',
joinfield=['numbers'])
# Create the workflow and run it
joinflow = Workflow(name='joinflow')
joinflow.connect(iternode, 'number_id', joinnode, 'numbers')
res = joinflow.run()
Explanation: As you can see, setting up a JoinNode is rather simple. The only differences from a normal Node are the joinsource and the joinfield arguments: joinsource specifies which node the information to join comes from, and joinfield specifies the input field of the JoinNode that receives it.
More realistic example
Let's consider another example where we have one node that iterates over 3 different numbers and another node that joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the Function interface to do something with those numbers, before we spit them out again.
End of explanation
res.nodes()[0].result.outputs
res.nodes()[0].inputs
Explanation: Now, let's look at the input and output of the joinnode:
End of explanation |
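A small sketch for picking out the join node explicitly: node order in res.nodes() is not guaranteed, so selecting by name is safer. With inputs 1, 4 and 9, the joined sqrts output should be [1.0, 2.0, 3.0].
# select the join node by name instead of relying on its position in res.nodes()
joined = [node for node in res.nodes() if node.name == 'joinnode'][0]
print(joined.result.outputs)   # expected to contain sqrts = [1.0, 2.0, 3.0]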
4,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Proton Titration of a Rigid Body in Explicit Salt Solution
This will simulate proton equilibria in a rigid body composed of particles in a spherical simulation container. We use a continuum solvent and explicit, soft spheres for salt particles which are treated grand canonically. During simulation, titratable sites are updated with swap moves according to input pH.
System Requirements
This Jupyter Notebook was originally run in MacOS 10.11 with GCC 4.8, Python2, matplotlib, pandas within the Anaconda environment. Contemporary Linux distributions such as Ubuntu 14.04 should work as well.
Download and build Faunus
We use a custom Metropolis Monte Carlo (MC) program built within the Faunus framework. The sections below will fetch the complete Faunus project and compile the program.
Step1: Let's coarse grain an atomic PDB structure to the amino acid level
For this purpose we use the module mdtraj and also locate possible disulfide bonds, as these should not be allowed to titrate. Note that, if found, the residues in the generated .aam file should be manually renamed to prevent them from being regarded as titratable. Likewise, the N and C terminal ends need to be handled manually.
Step2: Create Input and run MC simulation
Step3: Analysis | Python Code:
from __future__ import division, unicode_literals, print_function
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np, pandas as pd
import os.path, os, sys, json, filecmp, copy
plt.rcParams.update({'font.size': 16, 'figure.figsize': [8.0, 6.0]})
try:
workdir
except NameError:
workdir=%pwd
else:
%cd $workdir
print(workdir)
%%bash -s "$workdir"
%cd $1
if [ ! -d "faunus/" ]; then
git clone https://github.com/mlund/faunus.git
cd faunus
git checkout 86a1f74
else
cd faunus
fi
# if different, copy custom gctit.cpp into faunus
if ! cmp ../titrate.cpp src/examples/gctit.cpp
then
cp ../titrate.cpp src/examples/gctit.cpp
fi
pwd
cmake . -DCMAKE_BUILD_TYPE=Release -DENABLE_APPROXMATH=on &>/dev/null
make example_gctit -j4
Explanation: Proton Titration of a Rigid Body in Explicit Salt Solution
This will simulate proton equilibria in a rigid body composed of particles in a spherical simulation container. We use a continuum solvent and explicit, soft spheres for salt particles which are treated grand canonically. During simulation, titratable sites are updated with swap moves according to input pH.
System Requirements
This Jupyter Notebook was originally run in MacOS 10.11 with GCC 4.8, Python2, matplotlib, pandas within the Anaconda environment. Contemporary Linux distributions such as Ubuntu 14.04 should work as well.
Download and build Faunus
We use a custom Metropolis Monte Carlo (MC) program built within the Faunus framework. The sections below will fetch the complete Faunus project and compile the program.
End of explanation
%cd $workdir
import mdtraj as md
traj = md.load_pdb('1BPI.pdb')
for chain in traj.topology.chains:
print('chain: ', chain.index)
# filter pdb to only select protein(s)
sel = chain.topology.select('protein')
top = chain.topology.subset(sel)
f = open('chain'+str(chain.index)+'.aam','w')
f.write(str(top.n_residues)+'\n')
# locate disulfide bonds (not used for anything yet)
for b in top.bonds:
i = b[0].residue.index
j = b[1].residue.index
if j>i+1:
if (b[0].residue.name == 'CYS'):
if (b[1].residue.name == 'CYS'):
print('SS-bond between residues', i, j)
# loop over residues and calculate residue mass centers, radius, and weight
top.create_disulfide_bonds( traj.xyz[0] )
for res in top.residues:
if res.is_protein:
cm = [0,0,0] # residue mass center
mw = 0 # residue weight
for a in res.atoms:
cm = cm + a.element.mass * traj.xyz[0][a.index]
mw = mw + a.element.mass
cm = cm/mw*10
radius = ( 3./(4*np.pi)*mw/1.0 )**(1/3.)
f.write('{0:4} {1:5} {2:8.3f} {3:8.3f} {4:8.3f} {5:6.3f} {6:6.2f} {7:6.2f}\n'\
.format(res.name,res.index,cm[0],cm[1],cm[2],0,mw,radius))
f.close()
Explanation: Let's coarse grain an atomic PDB structure to the amino acid level
For this purpose we use the module mdtraj and also locate possible disulfide bonds, as these should not be allowed to titrate. Note that, if found, the residues in the generated .aam file should be manually renamed to prevent them from being regarded as titratable. Likewise, the N and C terminal ends need to be handled manually.
End of explanation
pH_range = [7.0]
salt_range = [0.03] # mol/l
%cd $workdir'/'
def mkinput():
js = {
"energy": {
"eqstate": { "processfile": "titrate.json" },
"nonbonded": {
"coulomb": { "epsr": 80 }
}
},
"system": {
"temperature": 298.15,
"sphere" : { "radius" : 90 },
"mcloop": { "macro": 10, "micro": micro }
},
"moves": {
"gctit" : { "molecule": "salt", "prob": 0.5 },
"atomtranslate" : {
"salt": { "prob": 0.5 }
}
},
"moleculelist": {
"protein": { "structure":"../chain0.aam", "Ninit":1, "insdir":"0 0 0"},
"salt": {"atoms":"Na Cl", "Ninit":60, "atomic":True }
},
"atomlist" : {
"Na" : { "q": 1, "r":1.9, "eps":0.005, "mw":22.99, "dp":100, "activity":salt },
"Cl" : { "q":-1, "r":1.7, "eps":0.005, "mw":35.45, "dp":100, "activity":salt },
"ASP" : { "q":-1, "r":3.6, "eps":0.05, "mw":110 },
"HASP" : { "q":0, "r":3.6, "eps":0.05, "mw":110 },
"LASP" : { "q":2, "r":3.6, "eps":0.05, "mw":110 },
"CTR" : { "q":-1, "r":2.0, "eps":0.05, "mw":16 },
"HCTR" : { "q":0, "r":2.0, "eps":0.05, "mw":16 },
"GLU" : { "q":-1, "r":3.8, "eps":0.05, "mw":122 },
"HGLU" : { "q":0, "r":3.8, "eps":0.05, "mw":122 },
"LGLU" : { "q":2, "r":3.8, "eps":0.05, "mw":122 },
"HIS" : { "q":0, "r":3.9, "eps":0.05, "mw":130 },
"HHIS" : { "q":1, "r":3.9, "eps":0.05, "mw":130 },
"NTR" : { "q":0, "r":2.0, "eps":0.05, "mw":14 },
"HNTR" : { "q":1, "r":2.0, "eps":0.05, "mw":14 },
"TYR" : { "q":-1, "r":4.1, "eps":0.05, "mw":154 },
"HTYR" : { "q":0, "r":4.1, "eps":0.05, "mw":154 },
"LYS" : { "q":0, "r":3.7, "eps":0.05, "mw":116 },
"HLYS" : { "q":1, "r":3.7, "eps":0.05, "mw":116 },
"CYb" : { "q":0, "r":3.6, "eps":0.05, "mw":103 },
"CYS" : { "q":-1, "r":3.6, "eps":0.05, "mw":103 },
"HCYS" : { "q":0, "r":3.6, "eps":0.05, "mw":103 },
"ARG" : { "q":0, "r":4.0, "eps":0.05, "mw":144 },
"HARG" : { "q":1, "r":4.0, "eps":0.05, "mw":144 },
"ALA" : { "q":0, "r":3.1, "eps":0.05, "mw":66 },
"ILE" : { "q":0, "r":3.6, "eps":0.05, "mw":102 },
"LEU" : { "q":0, "r":3.6, "eps":0.05, "mw":102 },
"MET" : { "q":0, "r":3.8, "eps":0.05, "mw":122 },
"PHE" : { "q":0, "r":3.9, "eps":0.05, "mw":138 },
"PRO" : { "q":0, "r":3.4, "eps":0.05, "mw":90 },
"TRP" : { "q":0, "r":4.3, "eps":0.05, "mw":176 },
"VAL" : { "q":0, "r":3.4, "eps":0.05, "mw":90 },
"SER" : { "q":0, "r":3.3, "eps":0.05, "mw":82 },
"THR" : { "q":0, "r":3.5, "eps":0.05, "mw":94 },
"ASN" : { "q":0, "r":3.6, "eps":0.05, "mw":108 },
"GLN" : { "q":0, "r":3.8, "eps":0.05, "mw":120 },
"GLY" : { "q":0, "r":2.9, "eps":0.05, "mw":54 }
},
"processes" : {
"H-Asp" : { "bound":"HASP" , "free":"ASP" , "pKd":4.0 , "pX":pH },
"H-Ctr" : { "bound":"HCTR" , "free":"CTR" , "pKd":2.6 , "pX":pH },
"H-Glu" : { "bound":"HGLU" , "free":"GLU" , "pKd":4.4 , "pX":pH },
"H-His" : { "bound":"HHIS" , "free":"HIS" , "pKd":6.3 , "pX":pH },
"H-Arg" : { "bound":"HARG" , "free":"ARG" , "pKd":12.0 , "pX":pH },
"H-Ntr" : { "bound":"HNTR" , "free":"NTR" , "pKd":7.5 , "pX":pH },
"H-Cys" : { "bound":"HCYS" , "free":"CYS" , "pKd":10.8 , "pX":pH },
"H-Tyr" : { "bound":"HTYR" , "free":"TYR" , "pKd":9.6 , "pX":pH },
"H-Lys" : { "bound":"HLYS" , "free":"LYS" , "pKd":10.4 , "pX":pH }
}
}
with open('titrate.json', 'w+') as f:
f.write(json.dumps(js, indent=4))
for pH in pH_range:
for salt in salt_range:
pfx='pH'+str(pH)+'-I'+str(salt)
if not os.path.isdir(pfx):
%mkdir -p $pfx
%cd $pfx
# equilibration run (no translation)
!rm -fR state
micro=100
mkinput()
!../faunus/src/examples/gctit > eq
# production run
micro=1000
mkinput()
%time !../faunus/src/examples/gctit > out
%cd ..
%cd ..
print('done.')
Explanation: Create Input and run MC simulation
End of explanation
%cd $workdir'/'
import json
for pH in pH_range:
for salt in salt_range:
pfx='pH'+str(pH)+'-I'+str(salt)
if os.path.isdir(pfx):
%cd $pfx
js = json.load( open('gctit-output.json') )
charge = js['protein']['charge']
index = js['protein']['index']
resname = js['protein']['resname']
plt.plot(index,charge, 'ro')
%cd ..
for i in range(0,len(index)):
label = resname[i]+' '+str(index[i]+1)
plt.annotate(label, xy=(index[i], charge[i]), fontsize=8, rotation=70)
plt.title('Protonation States of All Residues')
plt.legend(loc=0, frameon=False)
plt.xlabel(r'residue number')
plt.ylabel(r'average charge, $z$')
plt.ylim((-1.1, 1.1))
#plt.xticks(i, resname, rotation=70, fontsize='small')
plt.savefig('fig.pdf', bbox_inches='tight')
Explanation: Analysis
End of explanation |
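As a rough sanity check on the simulated charges, a minimal sketch is given below: the ideal (non-interacting) average charge of each titratable site from the Henderson-Hasselbalch relation, using the pKd values of the processes block above at the simulated pH of 7.0. Electrostatic interactions and the salt environment will shift the simulated values away from these ideal numbers.
def ideal_charge(pKa, pH, acid=True):
    # average charge of an isolated titratable site (no site-site interactions)
    frac_protonated = 1.0 / (1.0 + 10.0**(pH - pKa))
    return frac_protonated - 1.0 if acid else frac_protonated

# acids carry charge -1/0 (deprotonated/protonated) and bases 0/+1, matching the atomlist above
for name, pKa, acid in [('CTR', 2.6, True), ('ASP', 4.0, True), ('GLU', 4.4, True),
                        ('HIS', 6.3, False), ('NTR', 7.5, False), ('TYR', 9.6, True),
                        ('LYS', 10.4, False), ('CYS', 10.8, True), ('ARG', 12.0, False)]:
    print('{:4s} {:+.2f}'.format(name, ideal_charge(pKa, 7.0, acid)))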
4,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flow rate waveform transformation
Arjan Geers
An artery's flow rate waveform (FRW) can be characterized in many ways. Three common descriptors are the heart rate (HR), pulsatility index (PI), and time-averaged flow rate (QTA). In this notebook, a FRW is linearly transformed to generate new FRWs with specified values for these descriptors.
In this work, we used FRW transformation to test how each descriptor in isolation affects the hemodynamics
Step1: Data
We start by reading the FRW of a healthy volunteer's internal carotid artery (ICA). This data was acquired with PC-MRA by Cebral et al. as part of this study.
The 99 unique flow rate values (in ml/s) are uniformly distributed in time. Corresponding time values will be added during the transformation to account for the heart rate.
Step3: The physiological ranges over which to vary the FRW descriptors are defined as follows
Step5: This dataframe corresponds to Table 1 of our paper.
For HR and PI, the three values are defined as
Step6: Interactive plot
To demo how the three descriptors change the shape of the FRW, we plot the transformed FRW and provide sliders to interactively adjust the descriptor values. | Python Code:
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from ipywidgets import interact, IntSlider, FloatSlider
%matplotlib inline
Explanation: Flow rate waveform transformation
Arjan Geers
An artery's flow rate waveform (FRW) can be characterized in many ways. Three common descriptors are the heart rate (HR), pulsatility index (PI), and time-averaged flow rate (QTA). In this notebook, a FRW is linearly transformed to generate new FRWs with specified values for these descriptors.
In this work, we used FRW transformation to test how each descriptor in isolation affects the hemodynamics:
Geers AJ, Larrabide I, Morales HG, Frangi AF. Approximating hemodynamics of cerebral aneurysms with steady flow simulations. Journal of Biomechanics, 47(1):178–185, 2014.
Preamble
End of explanation
frw = pd.read_csv(os.path.join('data', 'frw.csv'))
pd.set_option('display.max_rows', 10)
frw
Explanation: Data
We start by reading the FRW of a healthy volunteer's internal carotid artery (ICA). This data was acquired with PC-MRA by Cebral et al. as part of this study.
The 99 unique flow rate values (in ml/s) are uniformly distributed in time. Corresponding time values will be added during the transformation to account for the heart rate.
End of explanation
def qcebral(radius=2e-3):
Use relationship from Cebral et al. Physiol Meas, 2008,
to return a flow rate in ml/s for a given radius in meters.
area = np.pi * radius**2
return 48.21 * (1e4 * area)**1.84
frw_descriptors = pd.DataFrame(index=['lower',
'baseline',
'upper'],
columns=['HR (bpm)',
'PI (-)',
'QTA (ml/s)'])
frw_descriptors['HR (bpm)'] = [52, 68, 84]
frw_descriptors['PI (-)'] = [0.58, 0.92, 1.26]
frw_descriptors['QTA (ml/s)'] = [0.73*qcebral(), qcebral(), 1.27*qcebral()]
frw_descriptors
Explanation: The physiological ranges over which to vary the FRW descriptors are defined as follows:
End of explanation
def transform_frw(frw_in, hr_out, pi_out, qta_out):
Linearly transform a flow rate waveform (frw_in) to generate one with
specified heart rate (hr_out), pulsatility index (pi_out), and
time-averaged flow rate (qta_out).
The input flow rate waveform is a pandas DataFrame with a single column of
unique flow rate values uniformly distributed in time.
The output flow rate waveform is a pandas DataFrame with a time and a flow
rate column.
----------------------------------------------------------------------------
Time-averaged values are calculated by numerical integration using the
trapezoidal rule.
T is period
var[t] is variable as function of time
n is number of trapezoid areas
h = T/n is width of each trapezoid
varta = time-averaged value of variable
varta = 1/T * Integrate[var[t], {t, 0, T}]
= 1/T * h * (0.5*var[t_0] + var[t_1] + ... + 0.5*var[t_n])
= 1/n * (0.5*var[t_0] + var[t_1] + ... + 0.5*var[t_n])
= 1/n * (var[t_0] + ... + var[t_n-1])
= mean(var[t_0], ..., var[t_n-1])
In the last step, we used the property of a periodic function that
var[t_0] = var[t_n].
----------------------------------------------------------------------------
hr_in = 60
qta_in = np.mean(frw_in)
pi_in = (np.amax(frw_in) - np.amin(frw_in)) / qta_in
qta_ratio = qta_out / qta_in
pi_ratio = pi_out / pi_in
hr_ratio = hr_out / hr_in
a = qta_ratio * pi_ratio
b = qta_out * (1 - pi_ratio)
c = hr_ratio # inverse of cardiac period
frw_out = a * frw_in + b
numberoftimesteps = len(frw_out)
t = [timestep / (c * numberoftimesteps) for timestep in range(numberoftimesteps)]
frw_out.insert(0, 'time (s)', pd.Series(t))
return frw_out
Explanation: This dataframe corresponds to Table 1 of our paper.
For HR and PI, the three values are defined as:
* baseline = mean
* lower = mean - 2 SD
* upper = mean + 2 SD
where the mean and standard deviation (SD) are taken from Ford et al. and Hoi et al..
Values for QTA were derived from another paper by Cebral et al. in which they experimentally determined the relationship between the cross sectional area and time-averaged flow rate of ICAs and vertebral arteries (VAs):
\begin{equation}
\textrm{Qcebral} = 48.21 \, A^{1.84}
\end{equation}
where A is the cross sectional area in cm$^2$ and Qcebral is in ml/s.
Qcebral was taken as baseline value for QTA. The lower and upper values were 27% below and above the baseline, respectively. The 27% variation is the average relative error between prediction and measurement, derived from Cebral et al.'s paper.
To get some actual QTA values to work with, we assume an artery with a circular cross section and a radius of 2 mm, which is about the size of an ICA.
Transformation
Following Eq. 1 of our paper, the FRW transformation from $\textrm{Q}^0(t)$ to $\textrm{Q}(t)$ is given by:
\begin{equation}
\textrm{Q}(t) = a \, \textrm{Q}^0(c t) + b
\end{equation}
where
\begin{equation}
a = \frac{\textrm{QTA}}{\textrm{QTA}^0} \frac{\textrm{PI}}{\textrm{PI}^0},%
\qquad b = \textrm{QTA} \left(1 - \frac{\textrm{PI}}{\textrm{PI}^0}\right),%
\qquad c = \frac{\textrm{HR}}{\textrm{HR}^0}
\end{equation}
We implement this as transform_frw, which takes as input:
* frw_in, i.e. $\textrm{Q}^0(t)$
* hr_out, i.e. $\textrm{HR}$
* pi_out, i.e. $\textrm{PI}$
* qta_out, i.e. $\textrm{QTA}$
End of explanation
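A quick round-trip check of the implementation, sketched below: transform the measured waveform to the baseline descriptors and recover HR, PI and QTA from the output; frw_check is only a temporary variable introduced here.
hr0, pi0, qta0 = frw_descriptors.loc['baseline']
frw_check = transform_frw(frw, hr0, pi0, qta0)
q = frw_check['flowrate (ml/s)']
dt = frw_check['time (s)'].diff().mean()                    # uniform time step of the output
print('QTA = {:.3f} ml/s (target {:.3f})'.format(q.mean(), qta0))
print('PI  = {:.3f} (target {:.3f})'.format((q.max() - q.min()) / q.mean(), pi0))
print('HR  = {:.1f} bpm (target {:.1f})'.format(60.0 / (len(q) * dt), hr0))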
def plot_frw(hr_out, pi_out, q_out):
frw_out = transform_frw(frw, hr_out, pi_out, q_out)
frw_baseline = transform_frw(frw, *frw_descriptors.loc['baseline'].tolist())
fig, ax = plt.subplots()
ax.plot(frw_out['time (s)'], frw_out['flowrate (ml/s)'], c='black', lw=2, )
ax.plot(frw_baseline['time (s)'], frw_baseline['flowrate (ml/s)'], c='black', ls=':', lw=2, )
ax.set_xlabel('Time (s)', fontsize=18)
ax.set_ylabel('Flow rate (ml/s)', fontsize=18)
ax.set_xlim(0, 1)
ax.set_ylim(0, 2)
ax.tick_params(axis='both', which='major', labelsize=14)
plt.show()
hr_slider = IntSlider(description='HR (bpm)',
value=frw_descriptors['HR (bpm)']['baseline'],
min=frw_descriptors['HR (bpm)']['lower'],
max=frw_descriptors['HR (bpm)']['upper'],
step=1)
pi_slider = FloatSlider(description='PI (-)',
value=frw_descriptors['PI (-)']['baseline'],
min=frw_descriptors['PI (-)']['lower'],
max=frw_descriptors['PI (-)']['upper'],
step=0.01)
q_slider = FloatSlider(description='QTA (ml/s)',
value=round(frw_descriptors['QTA (ml/s)']['baseline'], 2),
min=round(frw_descriptors['QTA (ml/s)']['lower'], 2),
max=round(frw_descriptors['QTA (ml/s)']['upper'], 2),
step=0.01)
interact(plot_frw, hr_out=hr_slider, pi_out=pi_slider, q_out=q_slider);
Explanation: Interactive plot
To demo how the three descriptors change the shape of the FRW, we plot the transformed FRW and provide sliders to interactively adjust the descriptor values.
End of explanation |
4,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# Setting the equation parameters
image_data = image_data.astype(np.float32)
a = 0.1
b = 0.9
x_min = np.min(image_data)
x_max = np.max(image_data)
for idx in range(len(image_data)):
x = image_data[idx]
image_data[idx] = a + ((x - x_min) * (b - a) / (x_max - x_min))
return image_data
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
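As an aside, the same Min-Max scaling can be written without the explicit Python loop; the vectorized helper below is only an alternative sketch, and the solution above is left unchanged.
def normalize_grayscale_vectorized(image_data):
    # identical Min-Max scaling, using NumPy broadcasting instead of a Python loop
    a, b = 0.1, 0.9
    x = np.array(image_data, dtype=np.float32)
    return a + (x - x.min()) * (b - a) / (x.max() - x.min())

# should agree with the loop-based implementation above
print(np.allclose(normalize_grayscale_vectorized(np.array([0, 128, 255])),
                  normalize_grayscale(np.array([0, 128, 255]))))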
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal([features_count, labels_count]))
biases = tf.Variable(tf.zeros([labels_count]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here, the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit, so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single-layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review the "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 30
learning_rate = 0.05
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements used for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are two parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
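As a small aid (not part of the original lab), the options above can be written down as plain data so you can loop over them while experimenting:
# Candidate settings from the two configurations listed above.
config_1 = {'epochs': [1], 'learning_rate': [0.8, 0.5, 0.1, 0.05, 0.01]}
config_2 = {'epochs': [1, 2, 3, 4, 5], 'learning_rate': [0.2]}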
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
4,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot Kmeans clusters stored in a GeoTiff
This notebook plots the GeoTiffs created by kmeans. Such GeoTiffs contain the Kmeans cluster IDs.
Dependencies
Step1: Spark Context
Step2: Mode of Operation setup
The user should modify the following variables to define which GeoTiffs should be loaded. To visualize results that just came out of the latest kmeans execution, simply copy the values set in its Mode of Operation setup.
Step3: Mode of Operation verification
Step4: Load GeoTiffs
Load the GeoTiffs into MemoryFiles.
Step5: Check GeoTiffs metadata
Step6: Plot GeoTiffs
Step7: Histogram | Python Code:
import sys
sys.path.append("/usr/lib/spark/python")
sys.path.append("/usr/lib/spark/python/lib/py4j-0.10.4-src.zip")
sys.path.append("/usr/lib/python3/dist-packages")
import os
os.environ["HADOOP_CONF_DIR"] = "/etc/hadoop/conf"
import os
os.environ["PYSPARK_PYTHON"] = "python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "ipython"
from pyspark.mllib.clustering import KMeans, KMeansModel
from pyspark import SparkConf, SparkContext
from osgeo import gdal
from io import BytesIO
import matplotlib.pyplot as plt
import rasterio
from rasterio import plot
from rasterio.io import MemoryFile
Explanation: Plot Kmeans clusters stored in a GeoTiff
This notebook plots the GeoTiffs created by kmeans. Such GeoTiffs contain the Kmeans cluster IDs.
Dependencies
End of explanation
appName = "plot_kmeans_clusters"
masterURL="spark://pheno0.phenovari-utwente.surf-hosted.nl:7077"
try:
sc.stop()
except NameError:
print("A new Spark Context will be created.")
sc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL))
Explanation: Spark Context
End of explanation
#GeoTiffs to be read from "hdfs:///user/hadoop/modis/"
offline_dir_path = "hdfs:///user/pheno/"
#
#Choose all and then the band or the dir which has the band extracted.
#0: Onset_Greenness_Increase
#1: Onset_Greenness_Maximum
#2: Onset_Greenness_Decrease
#3: Onset_Greenness_Minimum
#4: NBAR_EVI_Onset_Greenness_Minimum
#5: NBAR_EVI_Onset_Greenness_Maximum
#6: NBAR_EVI_Area
#7: Dynamics_QC
#
#for example:
#var geoTiff_dir = "Onset_Greenness_Increase"
#var band_num = 0
geoTiff_dir = "kmeans_BloomFinal_LeafFinal_test"
band_num = 3
#Satellite years between (inclusive) 1989 - 2014
#Model years between (inclusive) 1980 - 2015
first_year = 1980
last_year = 2015
#Kmeans number of iterations and clusters
numIterations = 75
minClusters = 60
maxClusters = 60
stepClusters = 1
Explanation: Mode of Operation setup
The user should modify the following variables to define which GeoTiffs should be loaded. To visualize results that just came out of the latest kmeans execution, simply copy the values set in its Mode of Operation setup.
End of explanation
geotiff_hdfs_paths = []
if minClusters > maxClusters:
maxClusters = minClusters
stepClusters = 1
if stepClusters < 1:
stepClusters = 1
#Satellite years between (inclusive) 1989 - 2014
#Model years between (inclusive) 1980 - 2015
years = list(range(1980,2015))
numClusters_id = 1
numClusters = minClusters
while numClusters <= maxClusters :
path = offline_dir_path + geoTiff_dir + '/clusters_' + str(band_num) + '_' + str(numClusters) + '_' + str(numIterations) + '_' + str(first_year) + '_' + str(last_year) + '_' + str(years[numClusters_id]) + '.tif'
geotiff_hdfs_paths.append(path)
numClusters_id += 1
numClusters += stepClusters
Explanation: Mode of Operation verification
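A quick sketch (not in the original notebook) to eyeball the HDFS paths built above before trying to read them:
for p in geotiff_hdfs_paths:
    print(p)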
End of explanation
clusters_dataByteArrays = []
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
print(geotiff_hdfs_paths[numClusters_id])
clusters_data = sc.binaryFiles(geotiff_hdfs_paths[numClusters_id]).take(1)
clusters_dataByteArrays.append(bytearray(clusters_data[0][1]))
numClusters_id += 1
numClusters += stepClusters
Explanation: Load GeoTiffs
Load the GeoTiffs into MemoryFiles.
End of explanation
for val in clusters_dataByteArrays:
#Create a Memory File
memFile = MemoryFile(val).open()
print(memFile.profile)
memFile.close()
Explanation: Check GeoTiffs metadata
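As a complementary sketch (not in the original notebook), the cluster IDs themselves can be pulled into a NumPy array from the same MemoryFile, e.g. for the first GeoTiff:
with MemoryFile(clusters_dataByteArrays[0]).open() as src:
    cluster_ids = src.read(1)  # 2-D NumPy array of Kmeans cluster IDs
    print(cluster_ids.shape, cluster_ids.dtype)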
End of explanation
%matplotlib inline
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
print ("Plot for " + str(numClusters) + " clusters!!!")
memFile = MemoryFile(clusters_dataByteArrays[numClusters_id]).open()
plt = plot.get_plt()
plt.figure(figsize=(20,20))
plot.show((memFile,1))
if (numClusters < maxClusters) :
_ = input("Press [enter] to continue.")
memFile.close()
numClusters_id += 1
numClusters += stepClusters
Explanation: Plot GeoTiffs
End of explanation
%matplotlib inline
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
print ("Plot for " + str(numClusters) + " clusters!!!")
memFile = MemoryFile(clusters_dataByteArrays[numClusters_id]).open()
plt = plot.get_plt()
plt.figure(figsize=(20,20))
plot.show_hist(memFile, bins=numClusters)
if (numClusters < maxClusters) :
_ = input("Press [enter] to continue.")
memFile.close()
numClusters_id += 1
numClusters += stepClusters
%pylab inline
from ipywidgets import interactive
def wave(i):
x = np.linspace(0, np.pi * 2)
y = np.sin(x * i)
plt.plot(x,y)
plt.show()
interactive_plot = interactive(wave, i=(1,3))
interactive_plot
import ipywidgets
ipywidgets.__version__
Explanation: Histogram
End of explanation |
4,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Sinous Violin
The aim of this short notebook is to show how to use NumPy and SciPy to play with spectral audio signal analysis (and synthesis).
Lots of prior knowledge is assumed, and here no signal theory (nor its mathematical details) will be discussed. The reader interested in a more formal discussion is invited to read, for example
Step1: Let's begin
We'll fix the sampling rate once and for all to 8000 Hz, which is sufficient for our audio purposes, yet low enough to reduce the number of samples involved in the following computations.
Step2: We define a couple of helper functions, to load the samples in a WAV file and to generate a sine wave of given frequency and duration (given the sampling RATE defined above).
Step3: Let's check we've done a good job by playing a couple of seconds of a "pure A", that is a sine wave at 440hz
Step4: Similarly, let's load our violin sample and play it
Step5: Some analysis
Using the specgram
function we can plot a spectrogram
Step6: even specifying just (the sampling frequency) Fs, which is the only required parameter, and without fiddling with all the others, we can already see qualitatively that there are just a few relevant frequencies (the yellow lines).
To get the precise values (and amplitudes) of such frequencies we'll need a more quantitative tool, namely the scipy.fftpack.fft function that performs a Fast Fourier Transform, and the helper function scipy.fftpack.fftfreq that locates the actual frequencies used by the FFT computation.
Step7: Since the signal is real (that is, is made of real values), we need just the first half of the returned values; moreover (even if the theory says that the phases also matter), we are interested just in the amplitudes of the spectrum
Step8: Plotting the result makes it evident that, in accordance with what we observed in the spectrogram, there are just a few peaks
Step9: Locating the maxima
Finding the frequencies where such peaks are located turns out to be a little tricky
Step10: but plotting the peaks reveals that sometimes they are a bit off
Step11: let's look at 10 values around the located peaks to get the actual maxima of the amplitudes, and then use such values to locate the frequencies where they are attained
Step12: by plotting these values we can tell we did a good job; using a logarithmic scale we can better appreciate that the last few values also correspond to actual peaks (albeit of much smaller amplitude)
Step13: We can isolate our peak finding function for further use
Step14: Finally the synthesis
Now that we have both the relevant frequencies and amplitudes, we can put together the sine waves and build an approximation of the original signal
Step15: The spectrogram looks promising
Step16: but what is striking is how similar the reconstructed sound is with respect to the original one
Step17: especially if you compare it with just the sine wave corresponding to the maximum amplitude
Step18: Not just violins
Of course the same game can be played with other samples, let's try a flute
Step19: We can replicate the steps to obtain the relevant frequencies and amplitudes, plotting the result as a quick check
Step20: and again, play the obtained sound, compared to the maximum amplitude sine wave | Python Code:
%matplotlib inline
from IPython.display import Audio
import librosa
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
plt.rcParams['figure.figsize'] = 8, 4
plt.style.use('ggplot')
Explanation: A Sinous Violin
The aim of this short notebook is to show how to use NumPy and SciPy to play with spectral audio signal analysis (and synthesis).
Lots of prior knowledge is assumed, and here no signal theory (nor its mathematical details) will be discussed. The reader interested in a more formal discussion is invited to read, for example: "Spectral Audio Signal Processing" by Julius O. Smith III, which is a precise and deep, yet manageable, introduction to the topic.
For the reader less inclined toward formal details (heaven forbid that the others read the following sentence), suffice it to say that any (periodic) signal can be obtained as a superposition of sine waves (with suitable frequencies and amplitudes).
The roadmap of what we'll be doing is:
take a real signal (a violin and a flute sample),
perform a spectral analysis,
determine some of the frequencies having the strongest amplitudes in such spectrum,
"reconstruct" a signal using just a few sine waves,
play the original and reconstructed signals.
As you'll see, besides what theory guarantees, this actually works and very few waves are enough to approximate the timbre of a musical instrument.
The source notebook is available on GitHub (under GPL v3), feel free to use issues to point out errors, or to fork it to suggest edits.
A special thanks to the friend and colleague Federico Pedersini for tolerating my endless discussion and my musings.
The usual notebook setup
Besides the already mentioned NumPy and SciPy, we'll use librosa to read the WAV files containing the samples, and matplotlib because a picture is worth a thousand words; to play the samples we'll use the standard Audio display class of IPython.
End of explanation
RATE = 8000
Explanation: Let's begin
We'll fix the sampling rate once and for all to 8000 Hz, which is sufficient for our audio purposes, yet low enough to reduce the number of samples involved in the following computations.
End of explanation
def load_signal_wav(name):
signal, _ = librosa.load(name + '.wav', sr = RATE, mono = True)
return signal
def sine_wave(freq, duration):
return np.sin(np.arange(0, duration, 1 / RATE) * freq * 2 * np.pi)
Explanation: We define a couple of helper functions, to load the samples in a WAV file and to generate a sine wave of given frequency and duration (given the sampling RATE defined above).
End of explanation
samples_sine = sine_wave(440, 2)
Audio(samples_sine, rate = RATE)
Explanation: Let's check we've done a good job by playing a couple of seconds of a "pure A", that is a sine wave at 440hz
End of explanation
samples_original = load_signal_wav('violin')
Audio(samples_original, rate = RATE)
Explanation: Similarly, let's load our violin sample and play it
End of explanation
plt.specgram(samples_original, Fs = RATE);
Explanation: Some analysis
Using the specgram
function we can plot a spectrogram
End of explanation
N = samples_original.shape[0]
spectrum = sp.fftpack.fft(samples_original)
frequencies = sp.fftpack.fftfreq(N, 1 / RATE)
Explanation: even specifying just (the sampling frequency) Fs, which is the only required parameter, and without fiddling with all the others, we can already see qualitatively that there are just a few relevant frequencies (the yellow lines).
To get the precise values (and amplitudes) of such frequencies we'll need a more quantitative tool, namely the scipy.fftpack.fft function that performs a Fast Fourier Transform, and the helper function scipy.fftpack.fftfreq that locates the actual frequencies used by the FFT computation.
End of explanation
frequencies = frequencies[:N//2]
amplitudes = np.abs(spectrum[:N//2])
Explanation: Since the signal is real (that is, is made of real values), we need just the first half of the returned values; moreover (even if the theory says that the phases also matter), we are interested just in the amplitudes of the spectrum
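A minimal equivalent sketch using NumPy's real-input FFT, which returns only the non-negative frequencies directly, so no manual slicing is needed (note it yields N//2 + 1 points rather than N//2):
amplitudes_alt = np.abs(np.fft.rfft(samples_original))
frequencies_alt = np.fft.rfftfreq(N, 1 / RATE)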
End of explanation
plt.plot(frequencies, amplitudes);
Explanation: Plotting the result makes it evident that, in accordance with what we observed in the spectrogram, there are just a few peaks
End of explanation
peak_indices = sp.signal.find_peaks_cwt(amplitudes, widths = (60,))
Explanation: Locating the maxima
Finding the frequencies where such peaks are located turns out to be a little tricky: to locate the peaks, scipy.signal.find_peaks_cwt needs a widths parameter specifying "the expected width of peaks of interest".
After some trial and error, one can see that 60 is a reasonable width to get close enough to the actual peaks.
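An alternative sketch (assuming SciPy >= 1.1 is available): scipy.signal.find_peaks lets you filter candidate peaks by prominence instead of guessing a width.
alt_peak_indices, _ = sp.signal.find_peaks(amplitudes, prominence=np.max(amplitudes) * 0.01)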
End of explanation
plt.plot(frequencies, amplitudes)
plt.plot(frequencies[peak_indices], amplitudes[peak_indices], 'bx');
Explanation: but plotting the peaks reveals that sometimes they are a bit off
End of explanation
amplitudes_maxima = list(map(lambda idx: np.max(amplitudes[idx - 10:idx + 10]), peak_indices))
frequencies_maxima = frequencies[np.isin(amplitudes, amplitudes_maxima)]
Explanation: let's look at 10 values around the located peaks to get the actual maxima of the amplitudes, and then use such values to locate the frequencies where they are attained
End of explanation
plt.semilogy(frequencies, amplitudes)
plt.plot(frequencies_maxima, amplitudes_maxima, 'bx');
Explanation: by plotting these values we can tell we did a good job; using a logarithmic scale we can better appreciate that the last few values also correspond to actual peaks (albeit of much smaller amplitude)
End of explanation
def find_peaks(frequencies, amplitudes, width, lookaround):
peak_indices = sp.signal.find_peaks_cwt(amplitudes, widths = (width,))
amplitudes_maxima = list(map(lambda idx: np.max(amplitudes[idx - lookaround:idx + lookaround]), peak_indices))
frequencies_maxima = frequencies[np.isin(amplitudes, amplitudes_maxima)]
return frequencies_maxima, amplitudes_maxima
Explanation: We can isolate our peak finding function for further use
End of explanation
def compose_sine_waves(frequencies, amplitudes, duration):
    return sum(map(lambda fa: sine_wave(fa[0], duration) * fa[1], zip(frequencies, amplitudes)))  # each sine wave for the requested duration, scaled by its amplitude
samples_reconstructed = compose_sine_waves(frequencies_maxima, amplitudes_maxima, 2)
Explanation: Finally the synthesis
Now that we have both the relevant frequencies and amplitudes, we can put together the sine waves and build an approximation of the original signal
End of explanation
plt.specgram(samples_reconstructed, Fs = RATE);
Explanation: The spectrogram looks promising
End of explanation
Audio(samples_reconstructed, rate = RATE)
Explanation: but what is striking is how similar the reconstructed sound is with respect to the original one
End of explanation
Audio(sine_wave(frequencies_maxima[np.argmax(amplitudes_maxima)], 2), rate = RATE)
Explanation: especially if you compare it with just the sine wave corresponding to the maximum amplitude
End of explanation
samples_original = load_signal_wav('flute')
Audio(samples_original, rate = RATE)
Explanation: Not just violins
Of course the same game can be played with other samples, let's try a flute
End of explanation
N = samples_original.shape[0]
frequencies = sp.fftpack.fftfreq(N, 1 / RATE)[:N//2]
amplitudes = np.abs(sp.fftpack.fft(samples_original))[:N//2]
frequencies_maxima, amplitudes_maxima = find_peaks(frequencies, amplitudes, 100, 50)
plt.plot(frequencies, amplitudes)
plt.plot(frequencies_maxima, amplitudes_maxima, 'bx');
Explanation: We can replicate the steps to obtain the relevant frequencies and amplitudes, plotting the result as a quick check
End of explanation
samples_reconstructed = compose_sine_waves(frequencies_maxima, amplitudes_maxima, 2)
Audio(samples_reconstructed, rate = RATE)
Audio(sine_wave(frequencies_maxima[np.argmax(amplitudes_maxima)], 2), rate = RATE)
Explanation: and again, play the obtained sound, compared to the maximum amplitude sine wave
End of explanation |
4,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Named Topologies
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: TiltedSquareLattice
This is a grid lattice rotated 45 degrees.
This topology is based on Google devices where plaquettes consist of four qubits in a square
connected to a central qubit
Step2: The corner nodes are not connected to each other. width and height refer to the rectangle
formed by rotating the lattice 45 degrees. width and height are measured in half-unit
cells, or equivalently half the number of central nodes.
Nodes are 2-tuples of integers which may be negative. Please see get_placements for
mapping this topology to a GridQubit Device.
Placement
Step3: You can manually generate mappings between NamedTopology nodes and device qubits using helper functions.
Step4: Or you can automatically generate placements using a subgraph monomorphism algorithm in NetworkX.
Step5: LineTopology
This is a 1D linear topology.
Node indices are contiguous integers starting from 0 with edges between
adjacent integers.
Step6: Manual placement
Step7: Automatic placement | Python Code:
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq --pre
print("installed cirq.")
import cirq
from typing import Iterable, List, Optional, Sequence
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
Explanation: Named Topologies
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/google/named_topologies"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/named_topologies.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/named_topologies.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/named_topologies.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre.
End of explanation
import itertools
from cirq import TiltedSquareLattice
side_lens = np.arange(1, 4+1)
l = len(side_lens)
fig, axes = plt.subplots(l, l, figsize=(3.5*l, 3*l))
for widthi, heighti in itertools.product(np.arange(l), repeat=2):
width = side_lens[widthi]
height = side_lens[heighti]
ax = axes[heighti, widthi]
topo = TiltedSquareLattice(width, height)
topo.draw(ax=ax, tilted=False)
if widthi == 0:
ax.set_ylabel(f'Height {height}', fontsize=14)
if heighti == l-1:
ax.set_xlabel(f'Width {width}', fontsize=14)
ax.set_title(f'n = {topo.n_nodes}', fontsize=14)
fig.tight_layout()
Explanation: TiltedSquareLattice
This is a grid lattice rotated 45 degrees.
This topology is based on Google devices where plaquettes consist of four qubits in a square
connected to a central qubit:
x x
x
x x
End of explanation
import networkx as nx
SYC23_GRAPH = nx.from_edgelist([
((3, 2), (4, 2)), ((4, 1), (5, 1)), ((4, 2), (4, 1)),
((4, 2), (4, 3)), ((4, 2), (5, 2)), ((4, 3), (5, 3)),
((5, 1), (5, 0)), ((5, 1), (5, 2)), ((5, 1), (6, 1)),
((5, 2), (5, 3)), ((5, 2), (6, 2)), ((5, 3), (5, 4)),
((5, 3), (6, 3)), ((5, 4), (6, 4)), ((6, 1), (6, 2)),
((6, 2), (6, 3)), ((6, 2), (7, 2)), ((6, 3), (6, 4)),
((6, 3), (7, 3)), ((6, 4), (6, 5)), ((6, 4), (7, 4)),
((6, 5), (7, 5)), ((7, 2), (7, 3)), ((7, 3), (7, 4)),
((7, 3), (8, 3)), ((7, 4), (7, 5)), ((7, 4), (8, 4)),
((7, 5), (7, 6)), ((7, 5), (8, 5)), ((8, 3), (8, 4)),
((8, 4), (8, 5)), ((8, 4), (9, 4)),
])
Explanation: The corner nodes are not connected to each other. width and height refer to the rectangle
formed by rotating the lattice 45 degrees. width and height are measured in half-unit
cells, or equivalently half the number of central nodes.
Nodes are 2-tuples of integers which may be negative. Please see get_placements for
mapping this topology to a GridQubit Device.
Placement
End of explanation
topo = TiltedSquareLattice(4, 2)
cirq.draw_placements(SYC23_GRAPH, topo.graph, [
topo.nodes_to_gridqubits(offset=(3,2)),
topo.nodes_to_gridqubits(offset=(5,3)),
], tilted=False)
Explanation: You can manually generate mappings between NamedTopology nodes and device qubits using helper functions.
End of explanation
topo = TiltedSquareLattice(4, 2)
placements = cirq.get_placements(SYC23_GRAPH, topo.graph)
cirq.draw_placements(SYC23_GRAPH, topo.graph, placements[::3])
print('...\n')
print(f'{len(placements)} total placements')
Explanation: Or you can automatically generate placements using a subgraph monomorphism algorithm in NetworkX.
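A small sketch (pure NetworkX, no extra Cirq helpers assumed) to double-check that a returned placement really maps every topology edge onto a device edge:
mapping = placements[0]
assert all(SYC23_GRAPH.has_edge(mapping[a], mapping[b]) for a, b in topo.graph.edges)
print('placement 0 respects the device connectivity')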
End of explanation
from cirq import LineTopology
lens = np.arange(3, 12+1, 3)
l = len(lens)
fig, axes = plt.subplots(1,l, figsize=(3.5*l, 3*1))
for ax, n_nodes in zip(axes, lens):
LineTopology(n_nodes).draw(ax=ax, tilted=False)
ax.set_title(f'n = {n_nodes}')
fig.tight_layout()
Explanation: LineTopology
This is a 1D linear topology.
Node indices are contiguous integers starting from 0 with edges between
adjacent integers.
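For instance (a tiny sketch), the underlying graph of a LineTopology with 5 nodes is just the path 0-1-2-3-4:
print(sorted(LineTopology(5).graph.edges))  # expected: [(0, 1), (1, 2), (2, 3), (3, 4)]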
End of explanation
topo = LineTopology(9)
cirq.draw_placements(SYC23_GRAPH, topo.graph, [
{i: q for i, q in enumerate([
cirq.GridQubit(4, 1), cirq.GridQubit(4, 2), cirq.GridQubit(5, 2),
cirq.GridQubit(5, 3), cirq.GridQubit(6, 3), cirq.GridQubit(6, 4),
cirq.GridQubit(7, 4), cirq.GridQubit(7, 5), cirq.GridQubit(8, 5),
])}
], tilted=False)
Explanation: Manual placement
End of explanation
topo = LineTopology(9)
placements = cirq.get_placements(SYC23_GRAPH, topo.graph)
cirq.draw_placements(SYC23_GRAPH, topo.graph, placements[::300])
print('...\n')
print(f'{len(placements)} total placements')
Explanation: Automatic placement
End of explanation |
4,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AutoML SDK
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
Step11: AutoML constants
Setup up the following constants for AutoML
Step12: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving. Note
Step13: Example output
Step14: Example output
Step15: Response
Step16: Example output
Step17: projects.locations.datasets.importData
Request
Step18: Example output
Step19: Response
Step20: Example output
Step21: Example output
Step22: Response
Step23: Example output
Step24: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
Step25: Response
Step26: Example output
Step27: Response
Step28: Example output
Step29: Example output
Step30: Example output
Step31: Example output
Step32: Response
Step33: Example output
Step34: Example output | Python Code:
! pip3 install -U google-cloud-automl --user
Explanation: AutoML SDK: AutoML video object tracking model
Installation
Install the latest (preview) version of AutoML SDK.
End of explanation
! pip3 install google-cloud-storage
Explanation: Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = 'us-central1' #@param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AutoML, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import json
import os
import sys
import time
from google.cloud import automl_v1beta1 as automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.json_format import ParseDict
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
End of explanation
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: AutoML constants
Setup up the following constants for AutoML:
PARENT: The AutoML location root path for dataset, model and endpoint resources.
End of explanation
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = 'gs://automl-video-demo-data/traffic_videos/traffic_videos.csv'
! gsutil cat $IMPORT_FILE | head -n 10
Explanation: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving. Note: Prediction has a different service endpoint.
End of explanation
dataset = {
"display_name": "traffic_" + TIMESTAMP,
"video_object_tracking_dataset_metadata": {}
}
print(MessageToJson(
automl.CreateDatasetRequest(
parent=PARENT,
dataset=dataset
).__dict__["_pb"])
)
Explanation: Example output:
UNASSIGNED,gs://automl-video-demo-data/traffic_videos/traffic_videos_labels.csv
Create a dataset
projects.locations.datasets.create
Request
End of explanation
request = clients["automl"].create_dataset(
parent=PARENT,
dataset=dataset
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "traffic_20210310004803",
"videoObjectTrackingDatasetMetadata": {}
}
}
Call
End of explanation
result = request
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]
print(dataset_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/VOT4951119001817186304",
"displayName": "traffic_20210310004803",
"createTime": "2021-03-10T00:48:09.292248Z",
"etag": "AB3BwFp3u4a2oy-k3EgK6ci8zwrTqrd91_DmoaY8TYsxnb-N-aXwFefqCIm1z0YTM290",
"videoObjectTrackingDatasetMetadata": {}
}
End of explanation
input_config = {
"gcs_source": {
"input_uris": [IMPORT_FILE]
}
}
print(MessageToJson(
automl.ImportDataRequest(
name=dataset_short_id,
input_config=input_config
).__dict__["_pb"])
)
Explanation: projects.locations.datasets.importData
Request
End of explanation
request = clients["automl"].import_data(
name=dataset_id,
input_config=input_config
)
Explanation: Example output:
{
"name": "VOT4951119001817186304",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://automl-video-demo-data/traffic_videos/traffic_videos.csv"
]
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
model = {
"display_name": "traffic_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"video_object_tracking_model_metadata": {}
}
print(MessageToJson(
automl.CreateModelRequest(
parent=PARENT,
model=model
).__dict__["_pb"])
)
Explanation: Example output:
{}
Train a model
projects.locations.models.create
Request
End of explanation
request = clients["automl"].create_model(
parent=PARENT,
model=model
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "traffic_20210310004803",
"datasetId": "VOT4951119001817186304",
"videoObjectTrackingModelMetadata": {}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split('/')[-1]
print(model_short_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/VOT6634816000837550080"
}
End of explanation
request = clients["automl"].list_model_evaluations(
parent=model_id,
)
Explanation: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
End of explanation
for evaluation in request:
print(MessageToJson(evaluation.__dict__["_pb"]))
# The last evaluation slice
last_evaluation_slice = evaluation.name
Explanation: Response
End of explanation
request = clients["automl"].get_model_evaluation(
name=last_evaluation_slice
)
Explanation: Example output:
```
{
"name": "projects/116273516712/locations/us-central1/models/VOT6634816000837550080/modelEvaluations/3861662240237183015",
"annotationSpecId": "7080515133784457216",
"createTime": "2021-03-10T01:57:44.615737Z",
"evaluatedExampleCount": 6,
"videoObjectTrackingEvaluationMetrics": {
"boundingBoxMetricsEntries": [
{
"iouThreshold": 0.5,
"meanAveragePrecision": 0.30026233,
"confidenceMetricsEntries": [
{
"recall": 1.0,
"precision": 0.37222221,
"f1Score": 0.5425101
},
{
"confidenceThreshold": 0.028951555,
"recall": 0.13432837,
"precision": 0.07377049,
"f1Score": 0.0952381
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 1.0,
"precision": 1.0
}
]
}
],
"boundingBoxMeanAveragePrecision": 0.30026233
},
"displayName": "pickup_suv_van"
}
{
"name": "projects/116273516712/locations/us-central1/models/VOT6634816000837550080/modelEvaluations/4053501621797068779",
"annotationSpecId": "5927593629177610240",
"createTime": "2021-03-10T01:57:44.615737Z",
"evaluatedExampleCount": 5,
"videoObjectTrackingEvaluationMetrics": {
"boundingBoxMetricsEntries": [
{
"iouThreshold": 0.5,
"meanAveragePrecision": 0.42889464,
"confidenceMetricsEntries": [
{
"recall": 1.0,
"precision": 0.25490198,
"f1Score": 0.40625
},
# REMOVED FOR BREVITY
],
"boundingBoxMeanAveragePrecision": 0.34359422
},
"displayName": "sedan"
}
```
projects.locations.models.modelEvaluations.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
TRAIN_FILES = "gs://automl-video-demo-data/traffic_videos/traffic_videos_labels.csv"
test_items = ! gsutil cat $TRAIN_FILES | head -n2
cols = str(test_items[0]).split(',')
test_item_1 = str(cols[0])
test_label_1 = str(cols[1])
test_start_time_1 = str(0)
test_end_time_1 = "inf"
print(test_item_1, test_label_1)
cols = str(test_items[1]).split(',')
test_item_2 = str(cols[0])
test_label_2 = str(cols[1])
test_start_time_2 = str(0)
test_end_time_2 = "inf"
print(test_item_2, test_label_2)
Explanation: Example output:
```
{
"name": "projects/116273516712/locations/us-central1/models/VOT6634816000837550080/modelEvaluations/6603022638609369541",
"annotationSpecId": "1315907610750222336",
"createTime": "2021-03-10T01:57:44.615737Z",
"evaluatedExampleCount": 6,
"videoObjectTrackingEvaluationMetrics": {
"boundingBoxMetricsEntries": [
{
"iouThreshold": 0.5,
"meanAveragePrecision": 0.34359422,
"confidenceMetricsEntries": [
{
"recall": 1.0,
"precision": 0.41428572,
"f1Score": 0.5858586
},
{
"confidenceThreshold": 0.03328514,
"recall": 0.1724138,
"precision": 0.10869565,
"f1Score": 0.13333334
},
# REMOVED FOR BREVITY
]
}
],
"boundingBoxMeanAveragePrecision": 0.34359422
},
"displayName": "sedan"
}
```
Make batch predictions
Prepare batch prediction data
End of explanation
import tensorflow as tf
import json
gcs_input_uri = "gs://" + BUCKET_NAME + '/test.csv'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
data = f"{test_item_1}, {test_start_time_1}, {test_end_time_1}"
f.write(data + '\n')
data = f"{test_item_2}, {test_start_time_2}, {test_end_time_2}"
f.write(data + '\n')
print(gcs_input_uri)
!gsutil cat $gcs_input_uri
Explanation: Example output:
gs://automl-video-demo-data/traffic_videos/highway_005.mp4 sedan
gs://automl-video-demo-data/traffic_videos/highway_005.mp4 pickup_suv_van
Make the batch input file
To request a batch of predictions from AutoML Video, create a CSV file that lists the Cloud Storage paths to the videos that you want to annotate. You can also specify a start and end time to tell AutoML Video to only annotate a segment (segment-level) of the video. The start time must be zero or greater and must be before the end time. The end time must be greater than the start time and less than or equal to the duration of the video. You can also use inf to indicate the end of a video.
End of explanation
input_config = {
"gcs_source": {
"input_uris": [gcs_input_uri]
}
}
output_config = {
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
}
}
batch_prediction = automl.BatchPredictRequest(
name=model_id,
input_config=input_config,
output_config=output_config,
)
print(MessageToJson(batch_prediction.__dict__["_pb"]))
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210310004803/test.csv
gs://automl-video-demo-data/traffic_videos/highway_005.mp4, 0, inf
gs://automl-video-demo-data/traffic_videos/highway_005.mp4, 0, inf
projects.locations.models.batchPredict
Request
End of explanation
request = clients["prediction"].batch_predict(
request=batch_prediction
)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/VOT6634816000837550080",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210310004803/test.csv"
]
}
},
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210310004803/batch_output/"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
destination_uri = batch_prediction.output_config.gcs_destination.output_uri_prefix[:-1]
! gsutil ls $destination_uri/prediction-**
! gsutil cat $destination_uri/prediction-**
Explanation: Example output:
{}
End of explanation
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients['automl'].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients['automl'].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
```
gs://migration-ucaip-trainingaip-20210310004803/batch_output/prediction-traffic_20210310004803-2021-03-10T01:57:52.257240Z/highway_005_1.json
gs://migration-ucaip-trainingaip-20210310004803/batch_output/prediction-traffic_20210310004803-2021-03-10T01:57:52.257240Z/video_object_tracking.csv
{
"object_annotations": [ {
"annotation_spec": {
"display_name": "sedan",
"description": "sedan"
},
"confidence": 0.52724433,
"frames": [ {
"normalized_bounding_box": {
"x_min": 0.27629745,
"y_min": 0.59244406,
"x_max": 0.53941643,
"y_max": 0.77127469
},
"time_offset": {
}
}, {
"normalized_bounding_box": {
"x_min": 0.135607,
"y_min": 0.58437037,
"x_max": 0.42441425,
"y_max": 0.77325606
},
"time_offset": {
"nanos": 100000000
}
},
# REMOVED FOR BREVITY
}
} ]
}
gs://automl-video-demo-data/traffic_videos/highway_005.mp4,0,315576000000,gs://migration-ucaip-trainingaip-20210310004803/batch_output/prediction-traffic_20210310004803-2021-03-10T01:57:52.257240Z/highway_005_1.json,OK
```
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation |
4,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare Tidy CVC Datasets (with SWMM model hydrology)
Setup the basic working environment
Step1: Load External Data (this takes a while)
Step2: Load CVC Database
Step3: Hydrologic Relationships
$V_{\mathrm{runoff, \ LV1}} = \max\left(0,\: -12.05 + 2.873\, D_{\mathrm{precip}} + 0.863 \, \Delta t \right)$
Step4: ED-1
$\log \left(V_{\mathrm{runoff, \ ED1}}\right) = 1.58 + 0.000667 \, I_{\mathrm{max}} + 0.0169 \, D_{\mathrm{precip}} $
$V_{\mathrm{bypass, \ ED1}} = \max \left(0,\: -26.4 + 0.184 \, I_{\mathrm{max}} + 1.22 \, D_{\mathrm{precip}} \right)$
Step5: LV-2
$\log \left(V_{\mathrm{runoff, \ LV2}}\right) = 1.217 + 0.00622 \, I_{\mathrm{max}} + 0.0244 \, D_{\mathrm{precip}} $
$V_{\mathrm{bypass, \ LV2}} = 0$
$V_{\mathrm{inflow, \ LV2}} = \max \left(0,\: V_{\mathrm{runoff, \ LV2}} - V_{\mathrm{bypass, \ LV2}} \right)$
Step6: LV-4
$\log \left(V_{\mathrm{runoff, \ LV4}}\right) = 1.35 + 0.00650 \, I_{\mathrm{max}} + 0.00940 \, D_{\mathrm{precip}} $
$V_{\mathrm{bypass, \ LV4}} = \max \left(0,\: 7.37 + 0.0370 \, I_{\mathrm{max}} + 0.112 \, D_{\mathrm{precip}} \right)$
Step7: Water quality loading relationship
$ M_{\mathrm{runoff}} = V_{\mathrm{runoff}} \times \hat{\mathbb{C}}_{\mathrm{inflow}}\left(\mathrm{landuse,\ season}\right) $
$ M_{\mathrm{bypass}} = V_{\mathrm{bypass}} \times \hat{\mathbb{C}}_{\mathrm{inflow}}\left(\mathrm{landuse,\ season}\right) $
$ M_{\mathrm{inflow}} = M_{\mathrm{runoff}} - M_{\mathrm{bypass}} $
$ M_{\mathrm{outflow}} = V_{\mathrm{outflow}} \times \mathbb{C}_{\mathrm{outflow}} $
Define the site object for the reference site and compute its median values ("influent" to other sites)
Step8: Lakeview BMP sites get their "influent" data from LV-1
Step9: Elm Drive's "influent" data come from NSQD
Step10: Remaining site objects
Step11: Fix ED-1 storm that had two composite samples
Step12: Replace total inflow volume with estimate from simple method for 2013-07-08 storm
Step13: Export project-wide tidy datasets
Hydrologic (storm) data
The big event from July 8, 2013 is retained in this step
Step14: Water quality data
The loads from the big event on July 8, 2013 are removed in this step
Step15: Individual Storm Reports
(requires $\LaTeX$) | Python Code:
%matplotlib inline
import os
import sys
import datetime
import warnings
import csv
import numpy as np
import matplotlib.pyplot as plt
import pandas
import seaborn
seaborn.set(style='ticks', context='paper')
import wqio
import pybmpdb
import pynsqd
import pycvc
min_precip = 1.9999
big_storm_date = datetime.date(2013, 7, 8)
palette = seaborn.color_palette('deep', n_colors=6)
pybmpdb.setMPLStyle()
POCs = [
p['cvcname']
for p in filter(
lambda p: p['include'],
pycvc.info.POC_dicts
)
]
if wqio.testing.checkdep_tex() is None:
tex_msg = ("LaTeX not found on system path. You will "
"not be able to compile ISRs to PDF files")
warnings.warn(tex_msg, UserWarning)
warning_filter = "ignore"
warnings.simplefilter(warning_filter)
Explanation: Prepare Tidy CVC Datasets (with SWMM model hydrology)
Setup the basic working environment
End of explanation
bmpdb = pycvc.external.bmpdb(palette[3], 'D')
nsqdata = pycvc.external.nsqd(palette[2], 'd')
Explanation: Load External Data (this takes a while)
End of explanation
cvcdbfile = "C:/users/phobson/Desktop/scratch/cvc/cvc.accdb"
cvcdb = pycvc.Database(cvcdbfile, nsqdata, bmpdb)
Explanation: Load CVC Database
End of explanation
def LV1_runoff(row):
return max(0, -12.0 + 2.87 * row['total_precip_depth'] + 0.863 * row['duration_hours'])
Explanation: Hydrologic Relationships
$V_{\mathrm{runoff, \ LV1}} = \max\left(0,\: -12.05 + 2.873\, D_{\mathrm{precip}} + 0.863 \, \Delta t \right)$
End of explanation
def ED1_runoff(row):
return 10**(1.58 + 0.000667 * row['peak_precip_intensity'] + 0.0169 * row['total_precip_depth'] )
def ED1_bypass(row):
return max(0, -26.4 + 0.184 * row['peak_precip_intensity'] + 1.22 * row['total_precip_depth'])
def ED1_inflow(row):
return max(0, ED1_runoff(row) - ED1_bypass(row))
Explanation: ED-1
$\log \left(V_{\mathrm{runoff, \ ED1}}\right) = 1.58 + 0.000667 \, I_{\mathrm{max}} + 0.0169 \, D_{\mathrm{precip}} $
$V_{\mathrm{bypass, \ ED1}} = \max \left(0,\: -26.4 + 0.184 \, I_{\mathrm{max}} + 1.22 \, D_{\mathrm{precip}} \right)$
$V_{\mathrm{inflow, \ ED1}} = \max \left(0,\: V_{\mathrm{runoff, \ ED1}} - V_{\mathrm{bypass, \ ED1}} \right)$
End of explanation
def LV2_runoff(row):
return 10**(1.22 + 0.00622 * row['peak_precip_intensity'] + 0.0244 * row['total_precip_depth'] )
def LV2_bypass(row):
return 0
def LV2_inflow(row):
return max(0, LV2_runoff(row) - LV2_bypass(row))
Explanation: LV-2
$\log \left(V_{\mathrm{runoff, \ LV2}}\right) = 1.217 + 0.00622 \, I_{\mathrm{max}} + 0.0244 \, D_{\mathrm{precip}} $
$V_{\mathrm{bypass, \ LV2}} = 0$
$V_{\mathrm{inflow, \ LV2}} = \max \left(0,\: V_{\mathrm{runoff, \ LV2}} - V_{\mathrm{bypass, \ LV2}} \right)$
End of explanation
def LV4_runoff(row):
return 10**(1.35 + 0.00650 * row['peak_precip_intensity'] + 0.00940 * row['total_precip_depth'] )
def LV4_bypass(row):
return max(0, 7.36 + 0.0370 * row['peak_precip_intensity'] + 0.112 * row['total_precip_depth'])
def LV4_inflow(row):
return max(0, LV4_runoff(row) - LV4_bypass(row))
Explanation: LV-4
$\log \left(V_{\mathrm{runoff, \ LV4}}\right) = 1.35 + 0.00650 \, I_{\mathrm{max}} + 0.00940 \, D_{\mathrm{precip}} $
$V_{\mathrm{bypass, \ LV4}} = \max \left(0,\: 7.37 + 0.0370 \, I_{\mathrm{max}} + 0.112 \, D_{\mathrm{precip}} \right)$
$V_{\mathrm{inflow, \ LV4}} = \max \left(0,\: V_{\mathrm{runoff, \ LV4}} - V_{\mathrm{bypass, \ LV4}} \right)$
End of explanation
LV1 = pycvc.Site(db=cvcdb, siteid='LV-1', raingauge='LV-1', tocentry='Lakeview Control',
isreference=True, minprecip=min_precip, color=palette[1], marker='s')
LV1.runoff_fxn = LV1_runoff
Explanation: Water quality loading relationship
$ M_{\mathrm{runoff}} = V_{\mathrm{runoff}} \times \hat{\mathbb{C}}_{\mathrm{inflow}}\left(\mathrm{landuse,\ season}\right) $
$ M_{\mathrm{bypass}} = V_{\mathrm{bypass}} \times \hat{\mathbb{C}}_{\mathrm{inflow}}\left(\mathrm{landuse,\ season}\right) $
$ M_{\mathrm{inflow}} = M_{\mathrm{runoff}} - M_{\mathrm{bypass}} $
$ M_{\mathrm{outflow}} = V_{\mathrm{outflow}} \times \mathbb{C}_{\mathrm{outflow}} $
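A hand-worked sketch of the first relationship with made-up numbers, just to fix the units (volumes in m³, concentrations in mg/L, loads reported in grams):
runoff_volume_m3 = 12.0       # made-up example volume
inflow_conc_mg_per_L = 0.5    # made-up example concentration
load_mg = runoff_volume_m3 * 1000 * inflow_conc_mg_per_L   # 1 m^3 = 1000 L
print(load_mg / 1000, 'g')    # 6.0 g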
Define the site object for the reference site and compute its median values ("influent" to other sites)
End of explanation
def rename_influent_cols(col):
if col.lower() in ['parameter', 'units', 'season']:
newcol = col.lower()
else:
newcol = 'influent {}'.format(col.lower())
return newcol.replace(' nsqd ', ' ').replace(' effluent ', ' ')
LV_Influent = (
LV1.medians("concentration", groupby_col='season')
.rename(columns={'effluent stat': 'median'})
.rename(columns=rename_influent_cols)
)
LV1.influentmedians = LV_Influent
LV_Influent.head()
Explanation: Lakeview BMP sites get their "influent" data from LV-1
End of explanation
ED_Influent = (
cvcdb.nsqdata
.seasonal_medians
.rename(columns=rename_influent_cols)
)
ED_Influent.head()
Explanation: Elm Drive's "influent" data come from NSQD
End of explanation
ED1 = pycvc.Site(db=cvcdb, siteid='ED-1', raingauge='ED-1',
tocentry='Elm Drive', influentmedians=ED_Influent,
minprecip=min_precip, isreference=False,
color=palette[0], marker='o')
ED1.runoff_fxn = ED1_runoff
ED1.inflow_fxn = ED1_inflow
LV2 = pycvc.Site(db=cvcdb, siteid='LV-2', raingauge='LV-1',
tocentry='Lakeview Grass Swale', influentmedians=LV_Influent,
minprecip=min_precip, isreference=False,
color=palette[4], marker='^')
LV2.runoff_fxn = LV2_runoff
LV2.inflow_fxn = LV2_inflow
LV4 = pycvc.Site(db=cvcdb, siteid='LV-4', raingauge='LV-1',
tocentry=r'Lakeview Bioswale 1$^{\mathrm{st}}$ South Side',
influentmedians=LV_Influent,
minprecip=min_precip, isreference=False,
color=palette[5], marker='v')
LV4.runoff_fxn = LV4_runoff
LV4.inflow_fxn = LV4_inflow
Explanation: Remaining site objects
End of explanation
ED1.hydrodata.data.loc['2012-08-10 23:50:00':'2012-08-11 05:20', 'storm'] = 0
ED1.hydrodata.data.loc['2012-08-11 05:30':, 'storm'] += 1
Explanation: Fix ED-1 storm that had two composite samples
End of explanation
storm_date = datetime.date(2013, 7, 8)
for site in [ED1, LV1, LV2, LV4]:
bigstorm = site.storm_info.loc[site.storm_info.start_date.dt.date == storm_date].index[0]
inflow = site.drainagearea.simple_method(site.storm_info.loc[bigstorm, 'total_precip_depth'])
site.storm_info.loc[bigstorm, 'inflow_m3'] = inflow
site.storm_info.loc[bigstorm, 'runoff_m3'] = np.nan
site.storm_info.loc[bigstorm, 'bypass_m3'] = np.nan
Explanation: Replace total inflow volume with estimate from simple method for 2013-07-08 storm
End of explanation
hydro = pycvc.summary.collect_tidy_data(
[ED1, LV1, LV2, LV4],
lambda s: s.tidy_hydro
).pipe(pycvc.summary.classify_storms, 'total_precip_depth')
hydro.to_csv('output/tidy/hydro_swmm.csv', index=False)
Explanation: Export project-wide tidy datasets
Hydrologic (storm) data
The big event from July 8, 2013 is retained in this step
End of explanation
wq = (
pycvc.summary
.collect_tidy_data([ED1, LV1, LV2, LV4], lambda s: s.tidy_wq)
.pipe(pycvc.summary.classify_storms, 'total_precip_depth')
.pipe(pycvc.summary.remove_load_data_from_storms, [big_storm_date], 'start_date')
)
wq.to_csv('output/tidy/wq_swmm.csv', index=False)
Explanation: Water quality data
The loads from the big event on July 8, 2013 are removed in this step
End of explanation
for site in [ED1, LV1, LV2, LV4]:
print('\n----Compiling ISR for {0}----'.format(site.siteid))
site.allISRs('composite', version='draft')
Explanation: Individual Storm Reports
(requires $\LaTeX$)
End of explanation |
4,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create an example dataframe
Step2: Grab rows based on column values | Python Code:
# Import modules
import pandas as pd
# Set ipython's max row display
pd.set_option('display.max_row', 1000)
# Set iPython's max column width to 50
pd.set_option('display.max_columns', 50)
Explanation: Title: Select Rows When Columns Contain Certain Values
Slug: pandas_select_rows_when_column_has_certain_values
Summary: Select Rows When Columns Contain Certain Values
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
# Create an example dataframe
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'year': [2012, 2012, 2013, 2014, 2014],
'reports': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])
df
Explanation: Create an example dataframe
End of explanation
value_list = ['Tina', 'Molly', 'Jason']
#Grab DataFrame rows where column has certain values
df[df.name.isin(value_list)]
#Grab DataFrame rows where column doesn't have certain values
df[~df.name.isin(value_list)]
Explanation: Grab rows based on column values
End of explanation |
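A related pattern, continuing with the df defined above (not part of the original recipe): select rows on a partial string match rather than exact membership.
# Rows whose name contains the letter 'a', case-insensitive.
df[df.name.str.contains('a', case=False)]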
4,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SPADE Tutorial
Step1: Generate correlated data
SPADE is a method to detect repeated spatio-temporal activity patterns in parallel spike train data that occur in excess of chance expectation. In this tutorial, we will use SPADE to detect the simplest type of such patterns, synchronous events that are found across a subset of the neurons considered (i.e., patterns that do not exhibit a temporal extent). We will demonstrate the method on stochastic data in which we control the pattern statistics. In a first step, let us generate 10 random spike trains, each modeled as a Poisson process, in which a certain proportion of the spikes is synchronized across the spike trains. To this end, we use the compound_poisson_process() function, which expects the rate of the resulting processes in addition to a distribution A[n] indicating the likelihood of finding synchronous spikes of a given order n. In our example, we construct the distribution such that we have a small probability of producing a synchronous event of order 10 (A[10]==0.02). Otherwise spikes are not synchronous with those of other neurons (i.e., synchronous events of order 1, A[1]==0.98). Notice that the length of the distribution A determines the number len(A)-1 of spiketrains returned by the function, and that A[0] is ignored for reasons of clearer notation.
Step2: In a second step, we add 90 purely random Poisson spike trains using the homogeneous_poisson_process() function, such that in total we have 10 spiketrains that exhibit occasional synchronized events, and 90 uncorrelated spike trains.
Step3: Mining patterns with SPADE
In the next step, we run the spade() method to extract the synchronous patterns. We choose 1 ms as the time scale for discretization of the patterns, and specify a window length of 1 bin (meaning, we search for synchronous patterns only). Also, we concentrate on patterns that involve at least 3 spikes, therefore significantly accelerating the search by ignoring frequent events of order 2. To test for the significance of patterns, we repeat the pattern detection on 100 spike-dither surrogates of the original data, created by dithering each spike by up to 5 ms in time. For the final step of pattern set reduction (psr), we use the standard parameter set [0, 0, 0].
Step4: The output patterns of the method contain information on the found patterns. In this case, we retrieve the pattern we put into the data
Step5: Lastly, we visualize the found patterns using the function plot_patterns() of the viziphant library. Marked in red are the patterns of order ten injected into the data. | Python Code:
import numpy as np
import quantities as pq
import neo
import elephant
import viziphant
np.random.seed(4542)
Explanation: SPADE Tutorial
End of explanation
spiketrains = elephant.spike_train_generation.compound_poisson_process(
rate=5*pq.Hz, A=[0]+[0.98]+[0]*8+[0.02], t_stop=10*pq.s)
len(spiketrains)
Explanation: Generate correlated data
SPADE is a method to detect repeated spatio-temporal activity patterns in parallel spike train data that occur in excess of chance expectation. In this tutorial, we will use SPADE to detect the simplest type of such patterns, synchronous events that are found across a subset of the neurons considered (i.e., patterns that do not exhibit a temporal extent). We will demonstrate the method on stochastic data in which we control the pattern statistics. In a first step, let us generate 10 random spike trains, each modeled as a Poisson process, in which a certain proportion of the spikes is synchronized across the spike trains. To this end, we use the compound_poisson_process() function, which expects the rate of the resulting processes in addition to a distribution A[n] indicating the likelihood of finding synchronous spikes of a given order n. In our example, we construct the distribution such that we have a small probability of producing a synchronous event of order 10 (A[10]==0.02). Otherwise spikes are not synchronous with those of other neurons (i.e., synchronous events of order 1, A[1]==0.98). Notice that the length of the distribution A determines the number len(A)-1 of spiketrains returned by the function, and that A[0] is ignored for reasons of clearer notation.
End of explanation
for i in range(90):
spiketrains.append(elephant.spike_train_generation.homogeneous_poisson_process(
rate=5*pq.Hz, t_stop=10*pq.s))
Explanation: In a second step, we add 90 purely random Poisson spike trains using the homogeneous_poisson_process() function, such that in total we have 10 spiketrains that exhibit occasional synchronized events, and 90 uncorrelated spike trains.
End of explanation
patterns = elephant.spade.spade(
spiketrains=spiketrains, binsize=1*pq.ms, winlen=1, min_spikes=3,
n_surr=100,dither=5*pq.ms,
psr_param=[0,0,0],
output_format='patterns')['patterns']
Explanation: Mining patterns with SPADE
In the next step, we run the spade() method to extract the synchronous patterns. We choose 1 ms as the time scale for discretization of the patterns, and specify a window length of 1 bin (meaning, we search for synchronous patterns only). Also, we concentrate on patterns that involve at least 3 spikes, therefore significantly accelerating the search by ignoring frequent events of order 2. To test for the significance of patterns, we repeat the pattern detection on 100 spike-dither surrogates of the original data, created by dithering each spike by up to 5 ms in time. For the final step of pattern set reduction (psr), we use the standard parameter set [0, 0, 0].
End of explanation
patterns
Explanation: The output patterns of the method contain information on the found patterns. In this case, we retrieve the pattern we put into the data: a pattern involving the first 10 neurons (IDs 0 to 9), occurring 5 times.
End of explanation
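As a hedged aside, one way to inspect the first detected pattern; the dictionary keys are assumed from typical elephant SPADE output and may differ between versions, hence the use of .get.
first_pattern = patterns[0]
for key in ('neurons', 'times', 'signature', 'pvalue'):
    # print each assumed field, falling back gracefully if the key is absent
    print(key, first_pattern.get(key, 'not available in this elephant version'))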
viziphant.patterns.plot_patterns(spiketrains, patterns)
Explanation: Lastly, we visualize the found patterns using the function plot_patterns() of the viziphant library. Marked in red are the patterns of order ten injected into the data.
End of explanation |
4,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating Work Tour Scheduling
This notebook illustrates how to re-estimate the mandatory tour scheduling component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
Step2: Load data and prep model for estimation
Step3: Review data loaded from the EDB
The next (optional) step is to review the EDB, including the coefficients, utilities specification, and chooser and alternative data.
Coefficients
Step4: Utility specification
Step5: Chooser data
Step6: Alternatives data
Step7: Estimate
With the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
Step8: Estimated coefficients
Step9: Output Estimation Results
Step10: Write the model estimation report, including coefficient t-statistic and log likelihood
Step11: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode. | Python Code:
import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
Explanation: Estimating Work Tour Scheduling
This notebook illustrates how to re-estimate the mandatory tour scheduling component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
os.chdir('test')
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
modelname = "mandatory_tour_scheduling_work"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
Explanation: Load data and prep model for estimation
End of explanation
data.coefficients
Explanation: Review data loaded from the EDB
The next (optional) step is to review the EDB, including the coefficients, utilities specification, and chooser and alternative data.
Coefficients
End of explanation
data.spec
Explanation: Utility specification
End of explanation
data.chooser_data
Explanation: Chooser data
End of explanation
data.alt_values
Explanation: Alternatives data
End of explanation
model.estimate()
Explanation: Estimate
With the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
model.parameter_summary()
Explanation: Estimated coefficients
End of explanation
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
Explanation: Output Estimation Results
End of explanation
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
)
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
End of explanation |
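The copy-and-rename described in the Next Steps above can also be scripted; a minimal sketch (the configs path is an assumption and depends on your ActivitySim setup).
import shutil
from pathlib import Path
src = result_dir / f"{modelname}_coefficients_revised.csv"  # written by update_coefficients above
dst = Path("configs") / f"{modelname}_coefficients.csv"     # hypothetical configs location
dst.parent.mkdir(parents=True, exist_ok=True)
shutil.copyfile(src, dst)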
4,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Runtime Analysis
using Finding the nth Fibonacci numbers as a computational object to think with
Step1: Fibonacci
Excerpt from Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani
Fibonacci is most widely known for his famous sequence of numbers
$0,1,1,2,3,5,8,13,21,34,...,$
each the sum of its two immediate predecessors. More formally, the Fibonacci numbers $F_n$ are generated by the simple rule
$F_n = \begin{cases}
F_{n-1} + F_{n-2}, & \mbox{if } n > 1 \\
1, & \mbox{if } n = 1 \\
0, & \mbox{if } n = 0
\end{cases}$
No other sequence of numbers has been studied as extensively, or applied to more fields
Step2: Whenever we have an algorithm, there are three questions we always ask about it
Step3: 3. Can we do better?
A polynomial algorithm for $fib$
Let’s try to understand why $fib$ is so slow. fib.call_count shows the count of recursive invocations triggered by a single call to $fib(5)$, which is 15. If you sketched it out, you will notice that many computations are repeated!
A more sensible scheme would store the intermediate results—the values $F_0 , F_1 , . . . , F_{n−1}$ as soon as they become known.
Let's do exactly that through memoization. Note that you can also do this by writing a polynomial algorithm.
Memoization
Tree-recursive computational processes can often be made more efficient through memoization, a powerful technique for increasing the efficiency of recursive functions that repeat computation. A memoized function will store the return value for any arguments it has previously received. A second call to fib(30) would not re-compute the return value recursively, but instead return the existing one that has already been constructed.
Memoization can be expressed naturally as a higher-order function, which can also be used as a decorator. The definition below creates a cache of previously computed results, indexed by the arguments from which they were computed. The use of a dictionary requires that the argument to the memoized function be immutable.
Step4: How long does $fib2$ take?
- The inner loop consists of a single computer step and is executed $n − 1$ times.
- Therefore the number of computer steps used by $fib2$ is linear in $n$.
From exponential we are down to polynomial, a huge breakthrough in running time. It is now perfectly reasonable to compute $F_{200}$ or even $F_{200,000}$
Step5: Instead of reporting that an algorithm takes, say, $ 5n^3 + 4n + 3$ steps on an input of size $n$, it is much simpler to leave out lower-order terms such as $4n$ and $3$ (which become insignificant as $n$ grows), and even the detail of the coefficient $5$ in the leading term (computers will be five times faster in a few years anyway), and just say that the algorithm takes time $O(n^3)$ (pronounced “big oh of $n^3$”).
It is time to define this notation precisely. In what follows, think of $f(n)$ and $g(n)$ as the running times of two algorithms on inputs of size $n$.
Let $f(n)$ and $g(n)$ be functions from positive integers to positive reals. We say $f = O(g)$ (which means that “$f$ grows no faster than $g$”) if there is a constant $c > 0$ such that
$f(n) \le c \cdot g(n)$.
Saying $f = O(g)$ is a very loose analog of “$f ≤ g$.” It differs from the usual notion of ≤ because of the constant c, so that for instance $10n = O(n)$. This constant also allows us to disregard what happens for small values of $n$.
Example | Python Code:
%pylab inline
# Import libraries
from __future__ import absolute_import, division, print_function
import math
from time import time
import matplotlib.pyplot as pyplt
Explanation: Runtime Analysis
using Finding the nth Fibonacci numbers as a computational object to think with
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('ls0GsJyLVLw')
def fib(n):
if n == 0 or n == 1:
return n
else:
return fib(n-2) + fib(n-1)
fib(5)
Explanation: Fibonacci
Excerpt from Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani
Fibonacci is most widely known for his famous sequence of numbers
$0,1,1,2,3,5,8,13,21,34,...,$
each the sum of its two immediate predecessors. More formally, the Fibonacci numbers $F_n$ are generated by the simple rule
$F_n = \begin{cases}
F_{n-1} + F_{n-2}, & \mbox{if } n > 1 \\
1, & \mbox{if } n = 1 \\
0, & \mbox{if } n = 0
\end{cases}$
No other sequence of numbers has been studied as extensively, or applied to more fields: biology, demography, art, architecture, music, to name just a few. And, together with the powers of 2, it is computer science’s favorite sequence.
Tree Recursion
A very simple way to calculate the nth Fibonacci number is to use a recursive algorithm. Here is the recursive algorithm used above:
def fib(n):
if n == 0 or n == 1:
return n
else:
return fib(n-2) + fib(n-1)
This algorithm in particular is done using tree recursion.
End of explanation
# This function provides a way to track function calls
def count(f):
def counted(n):
counted.call_count += 1
return f(n)
counted.call_count = 0
return counted
fib = count(fib)
t0 = time()
n = 5
fib(n)
print ('This recursive implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')
print ('And {0} calls to the function'.format(fib.call_count))
t0 = time()
n = 30
fib(n)
print ('This recursive implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')
print ('And {0} calls to the function'.format(fib.call_count))
Explanation: Whenever we have an algorithm, there are three questions we always ask about it:
Is it correct?
How much time does it take, as a function of n?
And can we do better?
1. Correctness
For this question, the answer is yes because it is almost a line by line implementation of the definition of the Fibonacci sequence.
2. Time complexity as a function of n
Let $T(n)$ be the number of computer steps needed to compute $fib(n)$; what can we say about this function? For starters, if $n$ is less than 2, the procedure halts almost immediately, after just a couple of steps. Therefore,
$$ T(n) \le 2 \, \mbox{ for } n \le 1. $$
For larger values of $n$, there are two recursive invocations of $fib$, taking time $T (n − 1)$ and $T(n−2)$, respectively, plus three computer steps (checks on the value of $n$ and a final addition).
Therefore,
$$ T(n) = T(n-1) + T(n-2) + 3 \, \mbox{ for } n > 1. $$
Comparing this to the recurrence relation for $F_n$, we immediately see that $T(n) \ge F_n$.
This is very bad news: the running time of the algorithm grows as fast as the Fibonacci numbers! $T(n)$ is exponential in $n$, which implies that the algorithm is impractically slow except for very small values of $n$.
Let’s be a little more concrete about just how bad exponential time is. To compute $F_{200}$,
the $fib$ algorithm executes $T (200) ≥ F_{200} ≥ 2^{138}$ elementary computer steps. How long this actually takes depends, of course, on the computer used. At this time, the fastest computer in the world is the NEC Earth Simulator, which clocks 40 trillion steps per second. Even on this machine, $fib(200)$ would take at least $2^{92}$ seconds. This means that, if we start the computation today, it would still be going long after the sun turns into a red giant star.
End of explanation
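As a small illustration that is not in the original text, the recurrence for $T(n)$ can be tabulated alongside $F_n$, taking the upper bound $T(0)=T(1)=2$, to see that it grows at least as fast.
def T(n, _cache={0: 2, 1: 2}):
    # step-count recurrence T(n) = T(n-1) + T(n-2) + 3, with T(0) = T(1) = 2
    if n not in _cache:
        _cache[n] = T(n - 1) + T(n - 2) + 3
    return _cache[n]

def F(n, _cache={0: 0, 1: 1}):
    # the Fibonacci numbers themselves
    if n not in _cache:
        _cache[n] = F(n - 1) + F(n - 2)
    return _cache[n]

for i in range(0, 21, 5):
    print (i, T(i), F(i))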
def memo(f):
cache = {}
def memoized(n):
if n not in cache:
cache[n] = f(n) # Make a mapping between the key "n" and the return value of f(n)
return cache[n]
return memoized
fib = memo(fib)
t0 = time()
n = 400
fib(n)
print ('This memoized implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')
t0 = time()
n = 300
fib(n)
print ('This memoized implementation of fib(', n, ') took', round(time() - t0, 4), 'secs')
# Here is the polynomial algorithm for fibonacci sequence
def fib2(n):
if n == 0:
return 0
f = [0] * (n+1) # create an array f[0 . . . n]
f[0], f[1] = 0, 1
for i in range(2, n+1):
f[i] = f[i-1] + f[i-2]
return f[n]
fib2 = count(fib2)
t0 = time()
n = 3000
fib2(n)
print ('This polynomial implementation of fib2(', n, ') took', round(time() - t0, 4), 'secs')
fib2.call_count
Explanation: 3. Can we do better?
A polynomial algorithm for $fib$
Let’s try to understand why $fib$ is so slow. fib.call_count shows the count of recursive invocations triggered by a single call to $fib(5)$, which is 15. If you sketched it out, you will notice that many computations are repeated!
A more sensible scheme would store the intermediate results—the values $F_0 , F_1 , . . . , F_{n−1}$ as soon as they become known.
Let's do exactly that through memoization. Note that you can also do this by writing a polynomial algorithm.
Memoization
Tree-recursive computational processes can often be made more efficient through memoization, a powerful technique for increasing the efficiency of recursive functions that repeat computation. A memoized function will store the return value for any arguments it has previously received. A second call to fib(30) would not re-compute the return value recursively, but instead return the existing one that has already been constructed.
Memoization can be expressed naturally as a higher-order function, which can also be used as a decorator. The definition below creates a cache of previously computed results, indexed by the arguments from which they were computed. The use of a dictionary requires that the argument to the memoized function be immutable.
End of explanation
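For reference (an aside, assuming Python 3), the standard library provides the same memoization pattern via functools.lru_cache.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n):
    # cached recursion: each value of n is computed only once
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

fib_cached(300)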
fib2(200)
Explanation: How long does $fib2$ take?
- The inner loop consists of a single computer step and is executed $n − 1$ times.
- Therefore the number of computer steps used by $fib2$ is linear in $n$.
From exponential we are down to polynomial, a huge breakthrough in running time. It is now perfectly reasonable to compute $F_{200}$ or even $F_{200,000}$
End of explanation
t = arange(0, 15, 1)
f1 = t * t
f2 = 2*t + 20
pyplt.title('Exponential time vs Linear time')
plot(t, f1, t, f2)
pyplt.annotate('$n^2$', xy=(8, 1), xytext=(10, 108))
pyplt.annotate('$2n + 20$', xy=(5, 1), xytext=(10, 45))
pyplt.xlabel('n')
pyplt.ylabel('Run time')
pyplt.grid(True)
Explanation: Instead of reporting that an algorithm takes, say, $ 5n^3 + 4n + 3$ steps on an input of size $n$, it is much simpler to leave out lower-order terms such as $4n$ and $3$ (which become insignificant as $n$ grows), and even the detail of the coefficient $5$ in the leading term (computers will be five times faster in a few years anyway), and just say that the algorithm takes time $O(n^3)$ (pronounced “big oh of $n^3$”).
It is time to define this notation precisely. In what follows, think of $f(n)$ and $g(n)$ as the running times of two algorithms on inputs of size $n$.
Let $f(n)$ and $g(n)$ be functions from positive integers to positive reals. We say $f = O(g)$ (which means that “$f$ grows no faster than $g$”) if there is a constant $c > 0$ such that
$f(n) \le c \cdot g(n)$.
Saying $f = O(g)$ is a very loose analog of “$f ≤ g$.” It differs from the usual notion of ≤ because of the constant c, so that for instance $10n = O(n)$. This constant also allows us to disregard what happens for small values of $n$.
Example:
For example, suppose we are choosing between two algorithms for a particular computational task. One takes $f_1(n) = n^2$ steps, while the other takes $f_2(n) = 2n + 20$ steps. Which is better? Well, this depends on the value of $n$. For $n ≤ 5$, $f_1(n)$ is smaller; thereafter, $f_2$ is the clear winner. In this case, $f_2$ scales much better as $n$ grows, and therefore it is superior.
End of explanation |
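A tiny numeric sanity check of the definition above (illustrative only): with c = 10, f(n) = 10n stays below c·g(n) for g(n) = n, whereas n² eventually exceeds c·n for any fixed c.
def is_bounded_by(f, g, c, N=10**6):
    # crude check of f(n) <= c*g(n) on a sample of n values
    return all(f(n) <= c * g(n) for n in range(1, N, 997))

print (is_bounded_by(lambda n: 10 * n, lambda n: n, c=10))  # True: 10n = O(n)
print (is_bounded_by(lambda n: n * n, lambda n: n, c=10))   # False: n^2 is not O(n)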
4,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Indic NLP Library
The goal of the Indic NLP Library is to build Python based libraries for common text processing and Natural Language Processing in Indian languages. Indian languages share a lot of similarity in terms of script, phonology, language syntax, etc. and this library is an attempt to provide a general solution to very commonly required toolsets for Indian language text.
The library provides the following functionalities
Step1: Add Library to Python path
Step2: Export environment variable
export INDIC_RESOURCES_PATH=<path>
OR
set it programmatically
We will use that method for this demo
Step3: Initialize the Indic NLP library
Step4: Let's actually try out some of the API methods in the Indic NLP library
Many of the API functions require a language code. We use 2-letter ISO 639-1 codes. Some languages do not have assigned 2-letter codes. We use the following two-letter codes for such languages
Step5: Script Conversion
Convert from one Indic script to another. This is a simple script which exploits the fact that Unicode points of various Indic scripts are at corresponding offsets from the base codepoint for that script. The following scripts are supported
Step6: Romanization
Convert script text to Roman text in the ITRANS notation
Step7: Indicization (ITRANS to Indic Script)
Let's call conversion of ITRANS-transliteration to an Indic script as Indicization!
Step8: Script Information
Indic scripts have been designed keeping phonetic principles in nature and the design and organization of the scripts makes it easy to obtain phonetic information about the characters.
Get Phonetic Feature Vector
With each script character, a phonetic feature vector is associated, which encodes the phonetic properties of the character. This is a bit vector which can be obtained as shown below
Step9: The fields in this bit vector are (from left to right)
Step10: You can check the phonetic information database files in Indic NLP resources to know the definition of each of the bits.
For Tamil Script
Step11: Get Phonetic Similarity
Using the phonetic feature vectors, we can define phonetic similarity between the characters (and underlying phonemes). The library implements some measures for phonetic similarity between the characters (and underlying phonemes). These can be defined using the phonetic feature vectors discussed earlier, so users can implement additional similarity measures.
The implemented similarity measures are
Step12: You may have figured out that you can also compute similarities of characters belonging to different scripts.
You can also get a similarity matrix which contains the similarities between all pairs of characters (within the same script or across scripts).
Let's see how we can compare the characters across Devanagari and Malayalam scripts
Step13: Some similarity functions like sim do not generate values in the range [0,1] and it may be more convenient to have the similarity values in the range [0,1]. This can be achieved by setting the normalize parameter to True
Step14: Orthographic Syllabification
Orthographic Syllabification is an approximate syllabification process for Indic scripts, where CV+ units are defined to be orthographic syllables.
See the following paper for details
Step15: Tokenization
A trivial tokenizer which just tokenizes on the punctuation boundaries. This also includes punctuations for the Indian language scripts (the purna virama and the deergha virama). It returns a list of tokens.
Step16: Word Segmentation
Unsupervised morphological analysers for various Indian languages. Given a word, the analyzer returns the component morphemes.
The analyzer can recognize inflectional and derivational morphemes.
The following languages are supported
Step17: Transliteration
We use the BrahmiNet REST API for transliteration. | Python Code:
# The path to the local git repo for Indic NLP library
INDIC_NLP_LIB_HOME="e:\indic_nlp_library"
# The path to the local git repo for Indic NLP Resources
INDIC_NLP_RESOURCES="e:\indic_nlp_resources"
Explanation: Indic NLP Library
The goal of the Indic NLP Library is to build Python based libraries for common text processing and Natural Language Processing in Indian languages. Indian languages share a lot of similarity in terms of script, phonology, language syntax, etc. and this library is an attempt to provide a general solution to very commonly required toolsets for Indian language text.
The library provides the following functionalities:
Text Normalization
Script Conversion
Romanization
Indicization
Script Information
Phonetic Similarity
Syllabification
Tokenization
Word Segmentation
Transliteration
Translation
The data resources required by the Indic NLP Library are hosted in a different repository. These resources are required for some modules. You can download from the Indic NLP Resources project.
Pre-requisites
Python 2.7+
Morfessor 2.0 Python Library
Getting Started
----- Set these variables -----
End of explanation
import sys
sys.path.append('{}/src'.format(INDIC_NLP_LIB_HOME))
Explanation: Add Library to Python path
End of explanation
from indicnlp import common
common.set_resources_path(INDIC_NLP_RESOURCES)
Explanation: Export environment variable
export INDIC_RESOURCES_PATH=<path>
OR
set it programmatically
We will use that method for this demo
End of explanation
from indicnlp import loader
loader.load()
Explanation: Initialize the Indic NLP library
End of explanation
from indicnlp.normalize.indic_normalize import IndicNormalizerFactory
input_text=u"\u0958 \u0915\u093c"
remove_nuktas=False
factory=IndicNormalizerFactory()
normalizer=factory.get_normalizer("hi",remove_nuktas)
output_text=normalizer.normalize(input_text)
print output_text
print 'Length before normalization: {}'.format(len(input_text))
print 'Length after normalization: {}'.format(len(output_text))
Explanation: Let's actually try out some of the API methods in the Indic NLP library
Many of the API functions require a language code. We use 2-letter ISO 639-1 codes. Some languages do not have assigned 2-letter codes. We use the following two-letter codes for such languages:
Konkani: kK
Manipuri: mP
Bodo: bD
Text Normalization
Text written in Indic scripts displays a lot of quirky behaviour on account of varying input methods, multiple representations for the same character, etc.
There is a need to canonicalize the representation of text so that NLP applications can handle the data in a consistent manner. The canonicalization primarily handles the following issues:
- Non-spacing characters like ZWJ/ZWNJ
- Multiple representations of Nukta based characters
- Multiple representations of two part dependent vowel signs
- Typing inconsistencies: e.g. use of pipe (|) for poorna virama
End of explanation
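As an additional, made-up example, the same normalizer can also strip non-spacing characters such as the zero-width joiner (the exact behaviour depends on the normalizer options).
input_text_zwj=u'\u0915\u094d\u200d\u0937'  # contains a ZWJ (U+200D); example string is made up
print 'Length before normalization: {}'.format(len(input_text_zwj))
print 'Length after normalization: {}'.format(len(normalizer.normalize(input_text_zwj)))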
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator
input_text=u'राजस्थान'
print UnicodeIndicTransliterator.transliterate(input_text,"hi","tm")
Explanation: Script Conversion
Convert from one Indic script to another. This is a simple script which exploits the fact that Unicode points of various Indic scripts are at corresponding offsets from the base codepoint for that script. The following scripts are supported:
Devanagari (Hindi,Marathi,Sanskrit,Konkani,Sindhi,Nepali), Assamese, Bengali, Oriya, Gujarati, Gurumukhi (Punjabi), Sindhi, Tamil, Telugu, Kannada, Malayalam
End of explanation
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
input_text=u'राजस्थान'
lang='hi'
print ItransTransliterator.to_itrans(input_text,lang)
Explanation: Romanization
Convert script text to Roman text in the ITRANS notation
End of explanation
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
# input_text=u'rajasthAna'
input_text=u'pitL^In'
lang='hi'
x=ItransTransliterator.from_itrans(input_text,lang)
print x
for y in x:
print '{:x}'.format(ord(y))
Explanation: Indicization (ITRANS to Indic Script)
Let's call conversion of ITRANS-transliteration to an Indic script as Indicization!
End of explanation
from indicnlp.script import indic_scripts as isc
c=u'क'
lang='hi'
isc.get_phonetic_feature_vector(c,lang)
Explanation: Script Information
Indic scripts have been designed keeping phonetic principles in nature and the design and organization of the scripts makes it easy to obtain phonetic information about the characters.
Get Phonetic Feature Vector
With each script character, a phonetic feature vector is associated, which encodes the phonetic properties of the character. This is a bit vector which can be obtained as shown below:
End of explanation
sorted(isc.PV_PROP_RANGES.iteritems(),key=lambda x:x[1][0])
Explanation: The fields in this bit vector are (from left to right):
End of explanation
from indicnlp.langinfo import *
c=u'क'
lang='hi'
print 'Is vowel?: {}'.format(is_vowel(c,lang))
print 'Is consonant?: {}'.format(is_consonant(c,lang))
print 'Is velar?: {}'.format(is_velar(c,lang))
print 'Is palatal?: {}'.format(is_palatal(c,lang))
print 'Is aspirated?: {}'.format(is_aspirated(c,lang))
print 'Is unvoiced?: {}'.format(is_unvoiced(c,lang))
print 'Is nasal?: {}'.format(is_nasal(c,lang))
Explanation: You can check the phonetic information database files in Indic NLP resources to know the definition of each of the bits.
For Tamil Script: database
For other Indic Scripts: database
Query Phonetic Properties
Note: The interface below will be deprecated and a new interface will be made available soon.
End of explanation
from indicnlp.script import indic_scripts as isc
from indicnlp.script import phonetic_sim as psim
c1=u'क'
c2=u'ख'
c3=u'भ'
lang='hi'
print u'Similarity between {} and {}'.format(c1,c2)
print psim.cosine(
isc.get_phonetic_feature_vector(c1,lang),
isc.get_phonetic_feature_vector(c2,lang)
)
print
print u'Similarity between {} and {}'.format(c1,c3)
print psim.cosine(
isc.get_phonetic_feature_vector(c1,lang),
isc.get_phonetic_feature_vector(c3,lang)
)
Explanation: Get Phonetic Similarity
Using the phonetic feature vectors, we can define phonetic similarity between the characters (and underlying phonemes). The library implements some measures for phonetic similarity between the characters (and underlying phonemes). These can be defined using the phonetic feature vectors discussed earlier, so users can implement additional similarity measures.
The implemented similarity measures are:
cosine
dice
jaccard
dot_product
sim1 (Kunchukuttan et al., 2016)
softmax
References
Anoop Kunchukuttan, Pushpak Bhattacharyya, Mitesh Khapra. Substring-based unsupervised transliteration with phonetic and contextual knowledge. SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016) . 2016.
End of explanation
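A short sketch comparing a few of the measures listed above on the same pair of characters; the function names are assumed to match the list (hence the getattr lookup).
v1=isc.get_phonetic_feature_vector(u'क','hi')
v2=isc.get_phonetic_feature_vector(u'ख','hi')
for measure in ['cosine','dice','jaccard']:
    # look the measure up by name so a missing function raises a clear AttributeError
    print measure, getattr(psim,measure)(v1,v2)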
from indicnlp.script import indic_scripts as isc
from indicnlp.script import phonetic_sim as psim
slang='hi'
tlang='ml'
sim_mat=psim.create_similarity_matrix(psim.cosine,slang,tlang,normalize=False)
c1=u'क'
c2=u'ഖ'
print u'Similarity between {} and {}'.format(c1,c2)
print sim_mat[isc.get_offset(c1,slang),isc.get_offset(c2,tlang)]
Explanation: You may have figured out that you can also compute similarities of characters belonging to different scripts.
You can also get a similarity matrix which contains the similarities between all pairs of characters (within the same script or across scripts).
Let's see how we can compare the characters across Devanagari and Malayalam scripts
End of explanation
slang='hi'
tlang='ml'
sim_mat=psim.create_similarity_matrix(psim.sim1,slang,tlang,normalize=True)
c1=u'क'
c2=u'ഖ'
print u'Similarity between {} and {}'.format(c1,c2)
print sim_mat[isc.get_offset(c1,slang),isc.get_offset(c2,tlang)]
Explanation: Some similarity functions like sim do not generate values in the range [0,1] and it may be more convenient to have the similarity values in the range [0,1]. This can be achieved by setting the normalize parameter to True
End of explanation
from indicnlp.syllable import syllabifier
w=u'जगदीशचंद्र'
lang='ta'
print u' '.join(syllabifier.orthographic_syllabify(w,lang))
Explanation: Orthographic Syllabification
Orthographic Syllabification is an approximate syllabification process for Indic scripts, where CV+ units are defined to be orthographic syllables.
See the following paper for details:
Anoop Kunchukuttan, Pushpak Bhattacharyya. Orthographic Syllable as basic unit for SMT between Related Languages. Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). 2016.
End of explanation
from indicnlp.tokenize import indic_tokenize
indic_string=u'अनूप,अनूप?।फोन'
print u'Input String: {}'.format(indic_string)
print u'Tokens: '
for t in indic_tokenize.trivial_tokenize(indic_string):
print t
Explanation: Tokenization
A trivial tokenizer which just tokenizes on the punctuation boundaries. This also includes punctuations for the Indian language scripts (the purna virama and the deergha virama). It returns a list of tokens.
End of explanation
from indicnlp.morph import unsupervised_morph
from indicnlp import common
analyzer=unsupervised_morph.UnsupervisedMorphAnalyzer('mr')
indic_string=u'आपल्या हिरड्यांच्या आणि दातांच्यामध्ये जीवाणू असतात .'
analyzes_tokens=analyzer.morph_analyze_document(indic_string.split(' '))
for w in analyzes_tokens:
print w
Explanation: Word Segmentation
Unsupervised morphological analysers for various Indian languages. Given a word, the analyzer returns the component morphemes.
The analyzer can recognize inflectional and derivational morphemes.
The following languages are supported:
Hindi, Punjabi, Marathi, Konkani, Gujarati, Bengali, Kannada, Tamil, Telugu, Malayalam
Support for more languages will be added soon.
End of explanation
import urllib2
from django.utils.encoding import *
from django.utils.http import *
text=iri_to_uri(urlquote('anoop, ratish kal fone par baat karenge'))
url=u'http://www.cfilt.iitb.ac.in/indicnlpweb/indicnlpws/transliterate_bulk/en/hi/{}/statistical'.format(text)
response=urllib2.urlopen(url).read()
print response
Explanation: Transliteration
We use the BrahmiNet REST API for transliteration.
End of explanation |
4,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Number of data records captured and lost
We import the required libraries
Step1: We import the libraries written for this project
Step2: We generate the datasets for every day
Notes
Step3: The lists above are processed and concatenated per motor according to
the timestamps of the records, gaps are filled with
NaN values, then the tables are joined side by side and the
suffixes _m1 and _m2 are added to distinguish the columns | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Number of data records captured and lost
We import the required libraries
End of explanation
import ext_datos as ext
import procesar as pro
import time_plot as tplt
Explanation: We import the libraries written for this project
End of explanation
dia1 = ext.extraer_data('dia1')
cd ..
dia2 = ext.extraer_data('dia2')
cd ..
dia3 = ext.extraer_data('dia3')
cd ..
dia4 = ext.extraer_data('dia4')
Explanation: We generate the datasets for every day
Notes:
The files saved in one folder per day are read in.
The problem with the current configuration of the solar car's telemetry is that it uses
software built to log data from a single motor at a time, which forces us to run two
instances, each of which writes a separate data file; however, if the TCP/IP connection
drops momentarily it reconnects automatically, and the programs reattach to the first
motor they detect.
This produces mixed files, duplicated records and lost data.
I wrote a script that automates separating the motors, merges the duplicated records and
joins both motors into a single table.
First, the data from every file of each day are extracted and a list of tables separated
by motor is generated.
End of explanation
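A schematic pandas sketch of the merge described above; the column and variable names are hypothetical, and the real logic lives in the procesar module.
import pandas as pd

def join_motors(motor1, motor2):
    # index both motors by their (hypothetical) timestamp column, then join side
    # by side, filling gaps with NaN and tagging columns with _m1/_m2 suffixes
    m1 = motor1.set_index('hora').sort_index()
    m2 = motor2.set_index('hora').sort_index()
    return m1.join(m2, how='outer', lsuffix='_m1', rsuffix='_m2')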
motoresdia1 = pro.procesar(dia1)
motoresdia2 = pro.procesar(dia2)
motoresdia3 = pro.procesar(dia3)
motoresdia4 = pro.procesar(dia4)
motoresdia4.motorRpm_m1[motoresdia4.motorRpm_m1>1].plot(kind='hist', bins=50)
motoresdia1.motorRpm_m2[motoresdia1.motorRpm_m2>1].plot(kind='hist', bins=50)
motoresdia2.motorRpm_m1[motoresdia2.motorRpm_m1>1].plot(kind='hist', bins=50)
motoresdia3.motorRpm_m1[motoresdia3.motorRpm_m1>1].plot(kind='hist', bins=50)
motoresdia4.motorTemp_m1.plot()
motoresdia4[motoresdia4.busCurrent_m1 == 0].busVoltage_m1.plot()
motoresdia4[motoresdia4.motorRpm_m1>1].motorRpm_m1.mean()
motoresdia3[motoresdia3.motorRpm_m1>1].motorRpm_m1.mean()
motoresdia2[motoresdia2.motorRpm_m1>1].motorRpm_m1.mean()
motoresdia1[motoresdia1.motorRpm_m2>1].motorRpm_m2.mean()
Explanation: The lists above are processed and concatenated per motor according to the
timestamps of the records, gaps are filled with NaN values, then the tables are joined
side by side and the suffixes _m1 and _m2 are added to distinguish the columns
End of explanation |
4,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SysKey Registry Keys Access
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Look for handle requests and access operations to specific registry keys used to calculate the SysKey. SACLs are needed for them
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: SysKey Registry Keys Access
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/06/25 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be calculating the SysKey from registry key values to decrypt SAM entries
Technical Context
Every computer that runs Windows has its own local domain; that is, it has an account database for accounts that are specific to that computer.
Conceptually, this is an account database like any other with accounts, groups, SIDs, and so on. These are referred to as local accounts, local groups, and so on.
Because computers typically do not trust each other for account information, these identities stay local to the computer on which they were created.
Offensive Tradecraft
Adversaries might use tools like Mimikatz with lsadump::sam commands or scripts such as Invoke-PowerDump to get the SysKey to decrypt Security Account Manager (SAM) database entries (from the registry or a hive) and get NTLM, and sometimes LM, hashes of local account passwords.
Adversaries can calculate the SysKey by using RegOpenKeyEx/RegQueryInfoKey API calls to query the appropriate class info and values from the HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\JD, HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Skew1, HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\GBG, and HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Data keys.
Additional reading
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/security_account_manager_database.md
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/syskey.md
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/credential_access/SDWIN-190625103712.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/credential_access/host/empire_mimikatz_sam_access.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/credential_access/host/empire_mimikatz_sam_access.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, ProcessName, ObjectName, AccessMask, EventID
FROM sdTable
WHERE LOWER(Channel) = "security"
AND (EventID = 4656 OR EventID = 4663)
AND ObjectType = "Key"
AND (
lower(ObjectName) LIKE "%jd"
OR lower(ObjectName) LIKE "%gbg"
OR lower(ObjectName) LIKE "%data"
OR lower(ObjectName) LIKE "%skew1"
)
'''
)
df.show(10,False)
Explanation: Analytic I
Look for handle requests and access operations to specific registry keys used to calculate the SysKey. SACLs are needed for them
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Windows registry | Microsoft-Windows-Security-Auditing | Process accessed Windows registry key | 4663 |
| Windows registry | Microsoft-Windows-Security-Auditing | Process requested access Windows registry key | 4656 |
End of explanation |
4,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute induced power in the source space with dSPM
Returns STC files, i.e. source estimates of induced power
for different bands in the source space. The inverse method
is linear, based on the dSPM inverse operator.
Step1: Set parameters
Step2: plot mean power | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_band_induced_power
print(__doc__)
Explanation: Compute induced power in the source space with dSPM
Returns STC files, i.e. source estimates of induced power
for different bands in the source space. The inverse method
is linear, based on the dSPM inverse operator.
End of explanation
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax, event_id = -0.2, 0.5, 1
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
events = events[:10] # take 10 events to keep the computation time low
# Use linear detrend to reduce any edge artifacts
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True, detrend=1)
# Compute a source estimate per frequency band
bands = dict(alpha=[9, 11], beta=[18, 22])
stcs = source_band_induced_power(epochs, inverse_operator, bands, n_cycles=2,
use_fft=False, n_jobs=1)
for b, stc in stcs.items():
stc.save('induced_power_%s' % b, overwrite=True)
Explanation: Set parameters
End of explanation
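The saved estimates can be read back later; a short hedged sketch (the file-name stem matches the save call above, and mne.read_source_estimate is assumed to resolve the hemisphere suffixes).
# Read one of the saved source estimates back from disk.
stc_alpha = mne.read_source_estimate('induced_power_alpha')
print(stc_alpha)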
plt.plot(stcs['alpha'].times, stcs['alpha'].data.mean(axis=0), label='Alpha')
plt.plot(stcs['beta'].times, stcs['beta'].data.mean(axis=0), label='Beta')
plt.xlabel('Time (ms)')
plt.ylabel('Power')
plt.legend()
plt.title('Mean source induced power')
plt.show()
Explanation: plot mean power
End of explanation |
4,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import required modules
Step1: Set image and catalogue filenames
Step2: Load in images, noise maps, header info and WCS information
Step3: Load in catalogue you want to fit (and make any cuts)
Step4: Ken Duncan defines a median and a hierarchical bayes combination redshift. We take first peak if it exists
Step5: XID+ uses Multi Order Coverage (MOC) maps for cutting down maps and catalogues so they cover the same area. It can also take in MOCs as selection functions to carry out additional cuts. Let's use the Python module pymoc to create a MOC, centered on a specific position we are interested in. We will use a HEALPix order of 15 (the resolution
Step6: XID+ is built around two Python classes: a prior and a posterior class. There should be a prior class for each map being fitted. It is initiated with a map, noise map, primary header and map header and can be set with a MOC. It also requires an input prior catalogue and point spread function.
Step7: Set PSF. For SPIRE, the PSF can be assumed to be Gaussian with a FWHM of 18.15, 25.15, 36.3 '' for 250, 350 and 500 $\mathrm{\mu m}$ respectively. Let's use the astropy module to construct a Gaussian PSF and assign it to the three XID+ prior classes.
Step8: Before fitting, the prior classes need to take the PSF and calculate how much each source contributes to each pixel. This process provides what we call a pointing matrix. Let's calculate the pointing matrix for each prior class
Step9: Default prior on flux is a uniform distribution, with a minimum and maximum of 0.00 and 1000.0 $\mathrm{mJy}$ respectively for each source. Running the function upper_lim_map resets the upper limit to the maximum flux value (plus a 5 sigma background value) found in the map region to which the source contributes.
Step10: Now fit using the XID+ interface to pystan
Step11: Initialise the posterior class with the fit object from pystan, and save alongside the prior classes
Step12: Create SED grids | Python Code:
from astropy.io import ascii, fits
import pylab as plt
%matplotlib inline
from astropy import wcs
import numpy as np
import xidplus
from xidplus import moc_routines
import pickle
Explanation: Import required modules
End of explanation
#Folder containing maps
pswfits='/Users/pdh21/astrodata/COSMOS/P4/COSMOS-Nest_image_250_SMAP_v6.0.fits'#SPIRE 250 map
pmwfits='/Users/pdh21/astrodata/COSMOS/P4/COSMOS-Nest_image_350_SMAP_v6.0.fits'#SPIRE 350 map
plwfits='/Users/pdh21/astrodata/COSMOS/P4/COSMOS-Nest_image_500_SMAP_v6.0.fits'#SPIRE 500 map
#output folder
output_folder='./'
Explanation: Set image and catalogue filenames
End of explanation
#-----250-------------
hdulist = fits.open(pswfits)
im250phdu=hdulist[0].header
im250hdu=hdulist[1].header
im250=hdulist[1].data*1.0E3 #convert to mJy
nim250=hdulist[2].data*1.0E3 #convert to mJy
w_250 = wcs.WCS(hdulist[1].header)
pixsize250=3600.0*w_250.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----350-------------
hdulist = fits.open(pmwfits)
im350phdu=hdulist[0].header
im350hdu=hdulist[1].header
im350=hdulist[1].data*1.0E3 #convert to mJy
nim350=hdulist[2].data*1.0E3 #convert to mJy
w_350 = wcs.WCS(hdulist[1].header)
pixsize350=3600.0*w_350.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----500-------------
hdulist = fits.open(plwfits)
im500phdu=hdulist[0].header
im500hdu=hdulist[1].header
im500=hdulist[1].data*1.0E3 #convert to mJy
nim500=hdulist[2].data*1.0E3 #convert to mJy
w_500 = wcs.WCS(hdulist[1].header)
pixsize500=3600.0*w_500.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
Explanation: Load in images, noise maps, header info and WCS information
End of explanation
from astropy.table import Table
photoz=Table.read('/Users/pdh21/astrodata/COSMOS/P4/COSMOS2015-HELP_selected_20160613_photoz_v1.0.fits')
Explanation: Load in catalogue you want to fit (and make any cuts)
End of explanation
z_sig=np.empty((len(photoz)))
z_median=np.empty((len(photoz)))
for i in range(0,len(photoz)):
z_sig[i]=np.max(np.array([photoz['z1_median'][i]-photoz['z1_min'][i],photoz['z1_max'][i]-photoz['z1_median'][i]]))
if photoz['z1_median'][i] > 0.0:
z_median[i]=photoz['z1_median'][i]
else:
z_median[i]=photoz['za_hb'][i]
Explanation: Ken Duncan defines a median and a hierarchical bayes combination redshift. We take first peak if it exists
End of explanation
from astropy.coordinates import SkyCoord
from astropy import units as u
c = SkyCoord(ra=[150.486514739]*u.degree, dec=[2.39576363026]*u.degree)
import pymoc
moc=pymoc.util.catalog.catalog_to_moc(c,50,15)
Explanation: XID+ uses Multi Order Coverage (MOC) maps for cutting down maps and catalogues so they cover the same area. It can also take in MOCs as selection functions to carry out additional cuts. Let's use the Python module pymoc to create a MOC centered on a specific position we are interested in. We will use a HEALPix order of 15 (the resolution: higher order means higher resolution) and a radius of 100 arcseconds, centered around an R.A. of 150.487 degrees and a declination of 2.396 degrees.
End of explanation
#---prior250--------
prior250=xidplus.prior(im250,nim250,im250phdu,im250hdu, moc=moc)#Initialise with map, uncertainty map, wcs info and primary header
prior250.prior_cat(photoz['RA'],photoz['DEC'],'photoz', z_median=z_median, z_sig=z_sig)#Set input catalogue
prior250.prior_bkg(-5.0,5)#Set prior on background (assumes Gaussian pdf with mu and sigma)
#---prior350--------
prior350=xidplus.prior(im350,nim350,im350phdu,im350hdu, moc=moc)
prior350.prior_cat(photoz['RA'],photoz['DEC'],'photoz', z_median=z_median, z_sig=z_sig)
prior350.prior_bkg(-5.0,5)
#---prior500--------
prior500=xidplus.prior(im500,nim500,im500phdu,im500hdu, moc=moc)
prior500.prior_cat(photoz['RA'],photoz['DEC'],'photoz', z_median=z_median, z_sig=z_sig)
prior500.prior_bkg(-5.0,5)
Explanation: XID+ is built around two Python classes: a prior and a posterior class. There should be a prior class for each map being fitted. It is initiated with a map, noise map, primary header and map header and can be set with a MOC. It also requires an input prior catalogue and point spread function.
End of explanation
#pixsize array (size of pixels in arcseconds)
pixsize=np.array([pixsize250,pixsize350,pixsize500])
#point response function for the three bands
prfsize=np.array([18.15,25.15,36.3])
#use Gaussian2DKernel to create prf (requires stddev rather than FWHM, hence FWHM/2.355)
from astropy.convolution import Gaussian2DKernel
##---------fit using Gaussian beam-----------------------
prf250=Gaussian2DKernel(prfsize[0]/2.355,x_size=101,y_size=101)
prf250.normalize(mode='peak')
prf350=Gaussian2DKernel(prfsize[1]/2.355,x_size=101,y_size=101)
prf350.normalize(mode='peak')
prf500=Gaussian2DKernel(prfsize[2]/2.355,x_size=101,y_size=101)
prf500.normalize(mode='peak')
pind250=np.arange(0,101,1)*1.0/pixsize[0] #get 250 scale in terms of pixel scale of map
pind350=np.arange(0,101,1)*1.0/pixsize[1] #get 350 scale in terms of pixel scale of map
pind500=np.arange(0,101,1)*1.0/pixsize[2] #get 500 scale in terms of pixel scale of map
prior250.set_prf(prf250.array,pind250,pind250)#requires psf as 2d grid, and x and y bins for grid (in pixel scale)
prior350.set_prf(prf350.array,pind350,pind350)
prior500.set_prf(prf500.array,pind500,pind500)
print('fitting '+ str(prior250.nsrc)+' sources \n')
print('using ' + str(prior250.snpix)+', '+ str(prior350.snpix)+' and '+ str(prior500.snpix)+' pixels')
Explanation: Set PSF. For SPIRE, the PSF can be assumed to be Gaussian with a FWHM of 18.15, 25.15, 36.3 '' for 250, 350 and 500 $\mathrm{\mu m}$ respectively. Lets use the astropy module to construct a Gaussian PSF and assign it to the three XID+ prior classes.
End of explanation
prior250.get_pointing_matrix()
prior350.get_pointing_matrix()
prior500.get_pointing_matrix()
Explanation: Before fitting, the prior classes need to take the PSF and calculate how much each source contributes to each pixel. This process provides what we call a pointing matrix. Let's calculate the pointing matrix for each prior class.
End of explanation
prior250.upper_lim_map()
prior350.upper_lim_map()
prior500.upper_lim_map()
Explanation: The default prior on flux is a uniform distribution, with a minimum and maximum of 0.00 and 1000.0 $\mathrm{mJy}$ respectively for each source. Running the function upper_lim_map resets the upper limit to the maximum flux value (plus a 5-sigma background value) found in the map to which the source contributes.
End of explanation
from xidplus.stan_fit import SPIRE
fit_basic=SPIRE.all_bands(prior250,prior350,prior500,iter=1000)
Explanation: Now fit using the XID+ interface to pystan
End of explanation
fit_basic
posterior=xidplus.posterior_stan(fit_basic,[prior250,prior350,prior500])
#xidplus.save([prior250,prior350,prior500],posterior,'XID+SPIRE')
pacs100='/Users/pdh21/Google Drive/WORK/dmu_products/dmu18/dmu18_HELP-PACS-maps/data/COSMOS_PACS100_v0.9.fits'
#PACS 100 map
pacs160='/Users/pdh21/Google Drive/WORK/dmu_products/dmu18/dmu18_HELP-PACS-maps/data/COSMOS_PACS160_v0.9.fits'#PACS 160 map
#-----100-------------
hdulist = fits.open(pacs100)
im100phdu=hdulist[1].header
im100hdu=hdulist[1].header
im100=hdulist[1].data
w_100 = wcs.WCS(hdulist[1].header)
pixsize100=3600.0*np.abs(hdulist[1].header['CDELT1']) #pixel size (in arcseconds)
nim100=hdulist[2].data
hdulist.close()
#-----160-------------
hdulist = fits.open(pacs160)
im160phdu=hdulist[1].header
im160hdu=hdulist[1].header
im160=hdulist[1].data #convert to mJy
w_160 = wcs.WCS(hdulist[1].header)
pixsize160=3600.0*np.abs(hdulist[1].header['CDELT1']) #pixel size (in arcseconds)
nim160=hdulist[2].data
hdulist.close()
#---prior100--------
prior100=xidplus.prior(im100,nim100,im100phdu,im100hdu,moc=moc)#Initialise with map, uncertainty map, wcs info and primary header
#prior100.prior_cat(mips24['INRA'],mips24['INDEC'],'photoz')#Set input catalogue
prior100.prior_cat(photoz['RA'],photoz['DEC'],'photoz',z_median=z_median, z_sig=z_sig)#Set input catalogue
prior100.prior_bkg(0,1)#Set prior on background
#---prior160--------
prior160=xidplus.prior(im160,nim160,im160phdu,im160hdu,moc=moc)
prior160.prior_cat(photoz['RA'],photoz['DEC'],'photoz',z_median=z_median, z_sig=z_sig)
#prior160.prior_cat(mips24['INRA'],mips24['INDEC'],'photoz')
prior160.prior_bkg(0,1)
##---------fit using Herves normalised beam-----------------------
#-----100-------------
hdulist = fits.open('/Users/pdh21/astrodata/COSMOS/P4/PACS/psf_ok_this_time/COSMOS_PACS100_20160805_model_normalized_psf_MJy_sr_100.fits')
prf100=hdulist[0].data
hdulist.close()
#-----160-------------
hdulist = fits.open('/Users/pdh21/astrodata/COSMOS/P4/PACS/psf_ok_this_time/COSMOS_PACS160_20160805_model_normalized_psf_MJy_sr_160.fits')
prf160=hdulist[0].data
hdulist.close()
pind100=np.arange(0,11,0.5)
pind160=np.arange(0,11,0.5)
import scipy.ndimage
prior100.set_prf(scipy.ndimage.zoom(prf100[11:22,11:22]/1000.0,2,order=2),pind100,pind100)
prior160.set_prf(scipy.ndimage.zoom(prf160[6:17,6:17]/1000.0,2,order=2),pind160,pind160)
mipsfits='/Users/pdh21/astrodata/COSMOS/wp4_cosmos_mips24_map_v1.0.fits.gz'
#-----24-------------
hdulist = fits.open(mipsfits)
im24phdu=hdulist[0].header
im24hdu=hdulist[1].header
im24=hdulist[1].data
nim24=hdulist[2].data
w_24 = wcs.WCS(hdulist[1].header)
pixsize24=3600.0*w_24.wcs.cdelt[1] #pixel size (in arcseconds)
hdulist.close()
# Point response information, at the moment its 2D Gaussian,
#pixsize array (size of pixels in arcseconds)
pixsize=np.array([pixsize24])
#point response function for the three bands
#Set prior classes
#---prior24--------
prior24=xidplus.prior(im24,nim24,im24phdu,im24hdu,moc=moc)#Initialise with map, uncertainty map, wcs info and primary header
prior24.prior_cat(photoz['RA'],photoz['DEC'],'photoz',z_median=z_median, z_sig=z_sig)#Set input catalogue
prior24.prior_bkg(0,2)#Set prior on background
##---------fit using seb's empiricall beam-----------------------
#-----24-------------
hdulist = fits.open('/Users/pdh21/astrodata/COSMOS/psfcosmos_corrected.fits')
prf24=hdulist[0].data*1000.0
hdulist.close()
pind24=np.arange(0,21,0.5)
import scipy.ndimage
prior24.set_prf(scipy.ndimage.zoom(prf24[31:52,31:52],2,order=2),pind24,pind24)
prior100.get_pointing_matrix()
prior160.get_pointing_matrix()
prior24.get_pointing_matrix()
from xidplus.stan_fit import PACS
fit_basic_PACS=PACS.all_bands(prior100,prior160,iter=1000)
fit_basic_PACS
priors=[prior24,prior100,prior160,prior250,prior350,prior500]
xidplus.save(priors,posterior, 'XID+SED_prior')
Explanation: Initialise the posterior class with the fit object from pystan, and save alongside the prior classes
End of explanation
from xidplus import sed
SEDs, df=sed.berta_templates()
SEDs.shape
priors,posterior=xidplus.load(filename='./XID+SED_prior.pkl')
import xidplus.stan_fit.SED as SPM
fit=SPM.MIPS_PACS_SPIRE(priors,SEDs,chains=4,iter=10)
posterior=sed.posterior_sed(fit,priors,SEDs)
xidplus.save(priors, posterior, 'test_SPM')
def getnearpos(array,value):
idx = (np.abs(array-value)).argmin()
return idx
nsamp=500
LIR_prior=np.random.uniform(8,14, size=(nsamp,prior250.nsrc))
z_prior=np.random.normal(z_median[prior250.ID-1],z_sig[prior250.ID-1],size=(nsamp,prior250.nsrc))
SED_prior=np.random.multinomial(1, np.full((SEDs_IR.shape[0]),fill_value=1.0/SEDs_IR.shape[0]),size=(nsamp,prior250.nsrc))
samples=np.empty((nsamp,6,prior250.nsrc))
for i in range(0,nsamp):
for s in range(0,prior250.nsrc):
samples[i,:,s]=np.power(10.0,LIR_prior[i,s])*SEDs_IR[SED_prior[i,s,:]==1,:,getnearpos(np.arange(0,8,0.01),z_prior[i,s])]
import pandas as pd
SEDS_IR_full=pd.read_pickle('SEDS_IR_full.pkl')
import seaborn as sns
sns.set_style("white")
plt.figure(figsize=(6,6))
s1=138
from astropy.cosmology import Planck13
violin_parts=plt.violinplot(samples[:,0:3,s1],[250,350,500], points=60, widths=100,
showmeans=True, showextrema=True, showmedians=True,bw_method=0.5)
# Make all the violin statistics marks red:
for partname in ('cbars','cmins','cmaxes','cmeans','cmedians'):
vp = violin_parts[partname]
vp.set_edgecolor('red')
vp.set_linewidth(1)
for pc in violin_parts['bodies']:
pc.set_facecolor('red')
violin_parts=plt.violinplot(samples[:,3:,s1],[24,100,160], points=60, widths=20,
showmeans=True, showextrema=True, showmedians=True,bw_method=0.5)
# Make all the violin statistics marks red:
for partname in ('cbars','cmins','cmaxes','cmeans','cmedians'):
vp = violin_parts[partname]
vp.set_edgecolor('red')
vp.set_linewidth(1)
for pc in violin_parts['bodies']:
pc.set_facecolor('red')
from astropy.cosmology import Planck13
import astropy.units as u
for s in range(0,100,1):
div=(4.0*np.pi * np.square(Planck13.luminosity_distance(z_prior[s,s1]).cgs))
div=div.value
plt.loglog((z_prior[i,s1]+1.0)*SEDS_IR_full['wave'],
np.power(10.0,LIR_prior[s,s1])*(1.0+z_prior[s,s1])
*SEDS_IR_full[SEDS_IR_full.columns[np.arange(1,SEDs_IR.shape[0]+1)
[SED_prior[s,s1]==1]]]/div,alpha=0.05,c='b',zorder=0)
#plt.plot([250,350,500, 24,100,160],posterior_IR.samples['src_f'][s,0:6,s1], 'ko', alpha=0.1, ms=10)
#plt.plot([250,350,500],posterior.samples['src_f'][s,0:3,s1], 'ro', alpha=0.1, ms=10)
plt.ylim(10E-7,10E2)
plt.xlim(5,5E3)
#plt.plot([3.6,4.5,5.7,7.9],[2.91E-3,2.38E-3,2.12E-3,9.6E-3], 'ro')
plt.xlabel('Wavelength (microns)')
plt.ylabel('Flux (mJy)')
Explanation: Create SED grids
End of explanation |
4,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing Algorithm Performance in Off-Policy Setting
Step1: Assessing Learning Algorithms
In theory, it is possible to solve for the value function sought by the learning algorithms directly, but in practice approximation will suffice.
Step2: What do the target values look like?
Step3: Actual Testing
We have a number of algorithms that we can try
Step4: These algorithms are given to OffPolicyAgent, which also takes care of the function approximation and manages the parameters given to the learning algorithm. | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import algos
import features
import parametric
import policy
import chicken
from agents import OffPolicyAgent, OnPolicyAgent
from rlbench import *
Explanation: Testing Algorithm Performance in Off-Policy Setting
End of explanation
# define the experiment
num_states = 8
num_features = 8
# set up environment
env = chicken.Chicken(num_states)
# set up policy
pol_pi = policy.FixedPolicy({s: {0: 1} for s in env.states})
# set feature mapping
phi = features.RandomBinary(num_features, num_features // 2, random_seed=101011)
# phi = features.Int2Unary(num_states)
# run the algorithms for enough time to get reliable convergence
num_steps = 20000
# state-dependent gamma
gm_dct = {s: 0.9 for s in env.states}
gm_dct[0] = 0
gm_func = parametric.MapState(gm_dct)
gm_p_func = parametric.MapNextState(gm_dct)
# the TD(1) solution should minimize the mean-squared error
update_params = {
'gm': gm_func,
'gm_p': gm_p_func,
'lm': 1.0,
}
lstd_1 = OnPolicyAgent(algos.LSTD(phi.length), pol_pi, phi, update_params)
run_episode(lstd_1, env, num_steps)
mse_values = lstd_1.get_values(env.states)
# the TD(0) solution should minimize the MSPBE
update_params = {
'gm': gm_func,
'gm_p': gm_p_func,
'lm': 0.0,
}
lstd_0 = OnPolicyAgent(algos.LSTD(phi.length), pol_pi, phi, update_params)
run_episode(lstd_0, env, num_steps)
mspbe_values = lstd_0.get_values(env.states)
Explanation: Assessing Learning Algorithms
In theory, it is possible to solve for the value function sought by the learning algorithms directly, but in practice approximation will suffice.
End of explanation
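For intuition, the exact value function of a small, fully known MDP can be obtained by solving the Bellman linear system v = r + gamma * P v directly; the sketch below uses a made-up 3-state transition matrix and reward vector, not the chicken environment's actual dynamics.
import numpy as np
P_toy = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])   # hypothetical transition probabilities P[s, s']
r_toy = np.array([0.0, 0.0, 1.0])     # hypothetical expected reward per state
gamma_toy = 0.9
v_exact = np.linalg.solve(np.eye(3) - gamma_toy * P_toy, r_toy)  # solve (I - gamma*P) v = r
print(v_exact)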
# Plot the states against their target values
xvals = list(sorted(env.states))
y_mse = [mse_values[s] for s in xvals]
y_mspbe = [mspbe_values[s] for s in xvals]
# Mean-square error optimal values
plt.bar(xvals, y_mse)
plt.show()
# MSPBE optimal values
plt.bar(xvals, y_mspbe)
plt.show()
y_mse
y_mspbe
Explanation: What do the target values look like?
End of explanation
algos.algo_registry
Explanation: Actual Testing
We have a number of algorithms that we can try
End of explanation
# set up algorithm parameters
update_params = {
'alpha': 0.02,
'beta': 0.002,
'gm': 0.9,
'gm_p': 0.9,
'lm': 0.0,
'lm_p': 0.0,
'interest': 1.0,
}
# Define the target policy
pol_pi = policy.FixedPolicy({s: {0: 1} for s in env.states})
# Define the behavior policy
pol_mu = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})
# Run all available algorithms
max_steps = 50000
for name, alg in algos.algo_registry.items():
# Set up the agent, run the experiment, get state-values
agent = OffPolicyAgent(alg(phi.length), pol_pi, pol_mu, phi, update_params)
mse_lst = run_errors(agent, env, max_steps, mse_values)
mspbe_lst = run_errors(agent, env, max_steps, mspbe_values)
# Plot the errors
xdata = np.arange(max_steps)
plt.plot(xdata, mse_lst)
plt.plot(xdata, mspbe_lst)
# plt.plot(xdata, np.log(mse_lst))
# plt.plot(xdata, np.log(mspbe_lst))
# Format and label the graph
plt.ylim(0, 2)
plt.title(name)
plt.xlabel('Timestep')
plt.ylabel('Error')
plt.show()
Explanation: These algorithms are given to OffPolicyAgent, which also takes care of the function approximation and manages the parameters given to the learning algorithm.
End of explanation |
4,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
That looks like the best way to represent the data if we want to calculate the $R^2$ distance on a per-symbol basis. I could add it to the single val function.
Step1: Now, let's implement the rolling validation.
Step2: So, I could use a training period based in an amount of market days, or in an amount of sample base periods. The first approach would be taking into consideration the temporal correlation of the data, the second would consider that the amount of samples should be large enough. Not to lose sight of the real problem at hand, I will use the market days approach, and then check that the amount of samples is big enough.
Step3: A lot of attention should be paid to the effect of filling the missing data. It may change the whole results.
Step5: That last number is an approximation of the number of train/evaluation sets that are being considered.
Step6: Let's test the whole process
Step7: It seems like the weird point, in which the model is predicting terribly may be the 2008 financial crisis. And the big unpredictability is limited to one symbol. I should implement a way to trace the symbols...
What about the mean absolute error? | Python Code:
def run_single_val(x, y, ahead_days, estimator):
multiindex = x.index.nlevels > 1
x_y = pd.concat([x, y], axis=1)
x_y_sorted = x_y.sort_index()
if multiindex:
x_y_train = x_y_sorted.loc[:fe.add_market_days(x_y_sorted.index.levels[0][-1], -ahead_days)]
x_y_val = x_y_sorted.loc[x_y_sorted.index.levels[0][-1]:]
else:
x_y_train = x_y_sorted.loc[:fe.add_market_days(x_y_sorted.index[-1], -ahead_days)]
x_y_val = x_y_sorted.loc[x_y_sorted.index[-1]:]
x_train = x_y_train.iloc[:,:-1]
x_val = x_y_val.iloc[:,:-1]
y_train_true = x_y_train.iloc[:,-1]
y_val_true = x_y_val.iloc[:,-1]
estimator.fit(x_train)
y_train_pred = estimator.predict(x_train)
y_val_pred = estimator.predict(x_val)
y_train_true_df = pd.DataFrame(y_train_true)
y_train_pred_df = pd.DataFrame(y_train_pred)
y_val_true_df = pd.DataFrame(y_val_true)
y_val_pred_df = pd.DataFrame(y_val_pred)
return y_train_true, \
y_train_pred, \
y_val_true, \
y_val_pred
y_train_true, y_train_pred, y_val_true, y_val_pred = run_single_val(x, y, 1, predictor)
print(y_train_true.shape)
print(y_train_pred.shape)
print(y_val_true.shape)
print(y_val_pred.shape)
print(y_train_true.shape)
y_train_true.head()
y = y_train_true
multiindex = y.index.nlevels > 1
if multiindex:
DATE_LEVEL_NAME = 'level_0'
else:
DATE_LEVEL_NAME = 'index'
DATE_LEVEL_NAME
y.reset_index()
reshape_by_symbol(y_train_true)
Explanation: That looks like the best way to represent the data if we want to calculate the $R^2$ distance on a per-symbol basis. I could add it to the single val function.
End of explanation
train_eval_days = -1 # In market days
base_days = 7 # In market days
step_days = 7 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
train_days = 252 # market days per training period
step_eval_days = 30 # market days between training periods beginings
filled_data_df = pp.fill_missing(data_df)
tic = time()
x, y = fe.generate_train_intervals(filled_data_df,
train_eval_days,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
x_y_sorted = pd.concat([x, y], axis=1).sort_index()
x_y_sorted
start_date = x_y_sorted.index.levels[0][0]
start_date
end_date = fe.add_market_days(start_date, 252)
end_date
end_date = fe.add_index_days(start_date, 252, x_y_sorted)
end_date
Explanation: Now, let's implement the rolling validation.
End of explanation
end_date = fe.add_market_days(start_date, 252)
x_i = x_y_sorted.loc[start_date:end_date].iloc[:,:-1]
y_i = x_y_sorted.loc[start_date:end_date].iloc[:,-1]
print(x_i.shape)
print(x_i.head())
print(y_i.shape)
print(y_i.head())
ahead_days
predictor = dmp.DummyPredictor()
y_train_true, y_train_pred, y_val_true, y_val_pred = run_single_val(x_i, y_i, ahead_days, predictor)
print(y_train_true.shape)
print(y_train_pred.shape)
print(y_val_true.shape)
print(y_val_pred.shape)
y_train_pred.head()
y_train_pred.dropna(axis=1, how='all').shape
scores = r2_score(pp.fill_missing(y_train_pred), pp.fill_missing(y_train_true), multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores), 2*np.std(scores)))
scores = r2_score(y_train_pred, y_train_true, multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores), np.std(scores)))
len(scores)
y_val_true_df = pd.DataFrame()
y_val_true
y_val_true_df.append(y_val_true)
Explanation: So, I could use a training period based on a number of market days, or on a number of base sample periods. The first approach takes the temporal correlation of the data into consideration; the second ensures that the number of samples is large enough. So as not to lose sight of the real problem at hand, I will use the market-days approach, and then check that the number of samples is big enough.
End of explanation
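As a quick check that a market-days window also contains enough samples, one can simply count the rows that fall inside it; this is a sketch reusing the x_y_sorted, start_date and end_date objects defined above.
n_window_samples = x_y_sorted.loc[start_date:end_date].shape[0]
print('Samples in one training window: %i' % n_window_samples)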
x.index.min()
x.index.max()
x.index.max() - x.index.min()
(x.index.max() - fe.add_market_days(x.index.min(), train_days)).days // step_days
Explanation: A lot of attention should be paid to the effect of filling the missing data. It may change the whole results.
End of explanation
def roll_evaluate(x, y, train_days, step_eval_days, ahead_days, verbose=False):
Warning: The final date of the period should be no larger than the final date of the SPY_DF
# calculate start and end date
# sort by date
x_y_sorted = pd.concat([x, y], axis=1).sort_index()
start_date = x_y_sorted.index[0]
end_date = fe.add_market_days(start_date, train_days)
final_date = x_y_sorted.index[-1]
# loop: run_single_val(x,y, ahead_days, estimator)
r2_train_means = []
r2_train_stds = []
y_val_true_df = pd.DataFrame()
y_val_pred_df = pd.DataFrame()
num_training_sets = (252/365) * (x.index.max() - fe.add_market_days(x.index.min(), train_days)).days // step_eval_days
set_index = 0
if verbose:
print('Evaluating approximately %i training/evaluation pairs' % num_training_sets)
while end_date < final_date:
x = x_y_sorted.loc[start_date:end_date].iloc[:,:-1]
y = x_y_sorted.loc[start_date:end_date].iloc[:,-1]
y_train_true, y_train_pred, y_val_true, y_val_pred = run_single_val(x, y, ahead_days, predictor)
# Calculate R^2 for training and append
scores = r2_score(y_train_true, y_train_pred, multioutput='raw_values')
r2_train_means.append(np.mean(scores))
r2_train_stds.append(np.std(scores))
# Append validation results
y_val_true_df = y_val_true_df.append(y_val_true)
y_val_pred_df = y_val_pred_df.append(y_val_pred)
# Update the dates
start_date = fe.add_market_days(start_date, step_eval_days)
end_date = fe.add_market_days(end_date, step_eval_days)
set_index += 1
if verbose:
sys.stdout.write('\rApproximately %2.1f percent complete. ' % (100.0 * set_index / num_training_sets))
sys.stdout.flush()
return r2_train_means, r2_train_stds, y_val_true_df, y_val_pred_df
Explanation: That last number is an approximation of the number of train/evaluation sets that are being considered.
End of explanation
train_eval_days = -1 # In market days
base_days = 14 # In market days
step_days = 30 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
filled_data_df = pp.fill_missing(data_df)
tic = time()
x, y = fe.generate_train_intervals(filled_data_df,
train_eval_days,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
train_days = 252 # market days per training period
step_eval_days = 10 # market days between training periods beginings
tic = time()
r2_train_means, r2_train_stds, y_val_true_df, y_val_pred_df = roll_evaluate(x, y, train_days, step_eval_days, ahead_days, verbose=True)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
y_val_true_df.head()
pd.DataFrame(r2_train_means).describe()
scores = r2_score(y_val_true_df.T, y_val_pred_df.T, multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores), np.std(scores)))
pd.DataFrame(scores).describe()
plt.plot(y_val_true_df.index, r2_train_means, label='r2_train_means')
plt.plot(y_val_true_df.index, scores, label='r2 validation scores')
plt.legend(loc='lower left')
scores_val = r2_score(y_val_true_df, y_val_pred_df, multioutput='raw_values')
print('R^2 score = %f +/- %f' % (np.mean(scores_val), np.std(scores_val)))
plt.plot(scores_val, label='r2 validation scores')
sorted_means = x.sort_index().mean(axis=1)
sorted_means.head()
sorted_means.plot()
sub_period = sorted_means['2009-03-01':]
plt.scatter(sub_period.index, sub_period)
Explanation: Let's test the whole process
End of explanation
from sklearn.metrics import mean_absolute_error
scores = mean_absolute_error(y_val_true_df.T, y_val_pred_df.T, multioutput='raw_values')
print('MAE score = %f +/- %f' % (np.mean(scores), np.std(scores)))
plt.plot(y_val_true_df.index, scores, label='MAE validation scores')
plt.legend(loc='lower left')
pd.DataFrame(scores).describe()
scores = mean_absolute_error(y_val_true_df, y_val_pred_df, multioutput='raw_values')
print('MAE score = %f +/- %f' % (np.mean(scores), np.std(scores)))
plt.plot(scores, label='MAE validation scores')
plt.legend(loc='lower left')
pd.DataFrame(scores).describe()
Explanation: It seems like the weird point at which the model predicts terribly may be the 2008 financial crisis. And the big unpredictability is limited to one symbol. I should implement a way to trace the symbols...
What about the mean absolute error?
End of explanation |
4,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST ML Pros tutorial
This notebook is based on the tutorial found here
This tutorial is very similar to the beginners tutorial except for some incremental improvements added to the end to improve the accuracy
Get the MNIST dataset
Step1: Helper functions
Here we place our helper functions for creating weight & bias variables as well as doing our vanilla 2D convolution and Pooling operations
Step2: First Convolution Layer
Our first layer consists of a convolution layer followed by a max pooling layer. It will compute 32 features for each 5x5 patch.
The weight tensor has the shape
[patch_width, patch_height, num_input_channels, num_output_channels]
Reshape our image to a 4D tensor with the second and third dimension corresponding to image size and the final dimension for the number of colors. The -1 in this case indicates the dimension that will be automatically modified to keep the size of the new tensor the same as the original.
Step3: Second Convolution Layer
We create a similar structure except now we have 32 inputs and 64 feature outputs for each 5x5 patch.
Step4: Densely Connected Layer
We have now done 2 2x2 convolutions which have reduced our image size to 7x7 since every 2x2 convolution effectively produces a new image that is half the size of the input image.
But for each 7x7 image, we now have 64 features. So we will add a layer with 1024 neurons to allow processing on the entire image.
Step5: Dropout Layer
The dropout layer helps to reduce overfitting by dropping connections between neurons in the densely connected layers. This paper has a nice discussion on the matter.
Step6: Readout Layer
We add a layer that takes the output of our fully connected layer and does a softmax regression into our classes.
Step7: Training
It should not that depending on the CPU available this could take some time to complete. | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/data/MNIST/",one_hot=True)
sess = tf.InteractiveSession()
Explanation: MNIST ML Pros tutorial
This notebook is based on the tutorial found here
This tutorial is very similar to the beginners tutorial except for some incremental improvements added to the end to improve the accuracy
Get the MNIST dataset
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape,stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x,W):
return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1], padding='SAME')
# Setup our Input placeholder
x = tf.placeholder(tf.float32, [None, 784])
# Define loss and optimizer
y_ = tf.placeholder(tf.float32,[None,10])
Explanation: Helper functions
Here we place our helper functions for creating weight & bias variables as well as doing our vanilla 2D convolution and Pooling operations
End of explanation
W_conv1 = weight_variable([5,5,1,32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x,[-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
Explanation: First Convolution Layer
Our first layer consists of a convolution layer followed by a max pooling layer. It will compute 32 features for each 5x5 patch.
The weight tensor has the shape
[patch_width, patch_height, num_input_channels, num_output_channels]
Reshape our image to a 4D tensor with the second and third dimension corresponding to image size and the final dimension for the number of colors. The -1 in this case indicates the dimension that will be automatically modified to keep the size of the new tensor the same as the original.
End of explanation
W_conv2 = weight_variable([5,5,32,64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1,W_conv2)+b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
Explanation: Second Convolution Layer
We create a similar structure except now we have 32 inputs and 64 feature outputs for each 5x5 patch.
End of explanation
W_fc1 = weight_variable([7*7*64,1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1,7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1)+b_fc1)
Explanation: Densely Connected Layer
We have now gone through two rounds of convolution followed by 2x2 max pooling, which have reduced our image size to 7x7, since each 2x2 max-pooling step produces an output that is half the size of its input.
But for each 7x7 image, we now have 64 features. So we will add a layer with 1024 neurons to allow processing on the entire image.
End of explanation
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob)
Explanation: Dropout Layer
The dropout layer helps to reduce overfitting by dropping connections between neurons in the densely connected layers. This paper has a nice discussion on the matter.
End of explanation
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
Explanation: Readout Layer
We add a layer that takes the output of our fully connected layer and does a softmax regression into our classes.
End of explanation
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.global_variables_initializer())
for i in range(10000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
Explanation: Training
It should be noted that, depending on the CPU available, this could take some time to complete.
End of explanation |
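If the full 10000-iteration loop is too slow on a CPU, a shorter timed run can give a feel for the cost per step before committing to full training; this is only a sketch that reuses the session, graph and mnist objects defined above.
import time
start = time.time()
for i in range(100):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("100 training steps took %.1f seconds" % (time.time() - start))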
4,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Consume native Keras model served by TF-Serving
This notebook shows client code needed to consume a native Keras model served by Tensorflow serving. The Tensorflow serving model needs to be started using the following command
Step1: Load Test Data
Step2: Make Predictions | Python Code:
from __future__ import division, print_function
from google.protobuf import json_format
from grpc.beta import implementations
from sklearn.preprocessing import OneHotEncoder
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2
from sklearn.metrics import accuracy_score, confusion_matrix
import json
import os
import sys
import threading
import time
import numpy as np
import tensorflow as tf
SERVER_HOST = "localhost"
SERVER_PORT = 9000
DATA_DIR = "../../data"
TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv")
IMG_SIZE = 28
MODEL_NAME = "keras-mnist-fcn"
Explanation: Consume native Keras model served by TF-Serving
This notebook shows client code needed to consume a native Keras model served by Tensorflow serving. The Tensorflow serving model needs to be started using the following command:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
--port=9000 --model_name=keras-mnist-fcn \
--model_base_path=/home/sujit/Projects/polydlot/data/tf-export/keras-mnist-fcn
End of explanation
def parse_file(filename):
xdata, ydata = [], []
fin = open(filename, "rb")
i = 0
for line in fin:
if i % 10000 == 0:
print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
cols = line.strip().split(",")
ydata.append(int(cols[0]))
xdata.append(np.reshape(np.array([float(x) / 255. for x in cols[1:]]),
(IMG_SIZE * IMG_SIZE, )))
i += 1
fin.close()
print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
X = np.array(xdata, dtype="float32")
y = np.array(ydata, dtype="int32")
return X, y
Xtest, ytest = parse_file(TEST_FILE)
print(Xtest.shape, ytest.shape)
Explanation: Load Test Data
End of explanation
channel = implementations.insecure_channel(SERVER_HOST, SERVER_PORT)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
labels, predictions = [], []
for i in range(Xtest.shape[0]):
request = predict_pb2.PredictRequest()
request.model_spec.name = MODEL_NAME
request.model_spec.signature_name = "predict"
Xbatch, ybatch = Xtest[i], ytest[i]
request.inputs["images"].CopyFrom(
tf.contrib.util.make_tensor_proto(Xbatch, shape=[1, Xbatch.size]))
result = stub.Predict(request, 10.0)
result_json = json.loads(json_format.MessageToJson(result))
y_ = np.array(result_json["outputs"]["scores"]["floatVal"], dtype="float32")
labels.append(ybatch)
predictions.append(np.argmax(y_))
print("Test accuracy: {:.3f}".format(accuracy_score(labels, predictions)))
print("Confusion Matrix")
print(confusion_matrix(labels, predictions))
Explanation: Make Predictions
End of explanation |
4,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep learning
Driven by practicality as we are for the purpose of this course, we will dwelve directly into an example of using DL. We will gradually learn more things as we do things.
Most developed deep learning APIs
Step1: Multi-layered perceptron (feed forward network)
Each hiden layer is formed by neurons called perceptrons
A perceptron is a binary linear classifier
inputs
Step2: input layer
Step3: Learning process
NNs are supervised learning structures!
- forward propagation
Step4: Gradient descent (main optimization technique)
The weights in small increments with the help of the calculation of the derivative (or gradient) of the loss function, which allows us to see in which direction “to descend” towards the global minimum. Most optimizers are based on gradient descent, an algorithm that is very eficient on GPUs today, but gives local optima.
Epochs and batches. The optimization is done in general in batches of data in the successive iterations (epochs) of all the dataset that we pass to the network in each iteration. "epochs" are complete runs through the dataset. Batches are used because the whole dataset is hard to be passed through the network at once.
- 469 number of batches
128 * 469 ~= 60000 images (number of samples)
Step5: Observations
Step6: Historical essentials
Deep learning, from an algorithmic perspective, is the application of advanced multi-layered filters to learn hidden features in data representation.
Many of the methods that are used today in DL, such as most neural network types (and not only), went through a 20 years long pause due to the fact that the computing machines avalable at the era were too slow to produce wanted results.
It was several things that precipitated their return in 2010 | Python Code:
import tensorflow as tf
from tensorflow.keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.shape)
print(train_labels.shape)
# reshape (flatten) and scale images
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
%matplotlib inline
import matplotlib.pyplot as plt
image=train_images[0].reshape(28, 28)
plt.imshow(image)
plt.show()
print("Label:", train_labels[0])
# convert labels to one hot encoding
from tensorflow.keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
train_labels[0]
Explanation: Deep learning
Driven by practicality as we are for the purpose of this course, we will delve directly into an example of using DL. We will gradually learn more things as we go.
Most developed deep learning APIs:
- Tensorflow
- Keras
- PyTorch
NN essentials
task: classification, handwritten
method: multi-layered perceptron
concepts: NN architecture and training loop
python libraries: native, keras, tensorflow, pytorch
task: text classification
End of explanation
from IPython.display import Image
Image(url= "../img/perceptron.png", width=400, height=400)
Explanation: Multi-layered perceptron (feed forward network)
Each hidden layer is formed by neurons called perceptrons
A perceptron is a binary linear classifier
inputs: a flat array $x_i$
one output per neuron j: $y_j$
a transformation of input into output (activation function):
linear separator
sigmoid function
$z_j= \sum_i {w_{ij} x_i} + b_j$
$y_j = f(z_j) = \frac{1}{1 + e^{-z_j}}$
End of explanation
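A minimal NumPy sketch of that forward computation for one layer of sigmoid neurons; the sizes below are arbitrary and chosen only for illustration.
import numpy as np
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
x_in = np.random.rand(4)        # input vector x_i
W = np.random.randn(4, 3)       # weights w_ij for 3 neurons
b = np.zeros(3)                 # biases b_j
z = x_in @ W + b                # z_j = sum_i w_ij * x_i + b_j
y_out = sigmoid(z)              # y_j = 1 / (1 + exp(-z_j))
print(y_out)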
from IPython.display import Image
Image(url= "../img/ffn.png", width=400, height=400)
from tensorflow.keras import models
from tensorflow.keras import layers
# defining the NN structure
network = models.Sequential()
network.add(layers.Dense(512, activation='sigmoid', input_shape=(28 * 28,)))
network.add(layers.Dense(512, activation='sigmoid', input_shape=(512,)))
network.add(layers.Dense(10, activation='softmax'))
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
network.summary()
Explanation: input layer: sequential (flattened) image
hidden layers: perceptrons
output layer: softmax
End of explanation
from IPython.display import Image
Image(url= "../img/NN_learning.png", width=400, height=400)
Explanation: Learning process
NNs are supervised learning structures!
- forward propagation: all training data is fed to the network and y is predicted
- estimate the loss: difference between prediction and label
- backpropagation: the loss information is propagated backwards layer by layer, and the neuron weights are adjusted
- global optimization: the parameters (weights and biases) must be adjusted in such a way that the loss function presented above is minimized.
End of explanation
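To make the forward/loss/backward/update cycle concrete, here is a toy single-neuron example with a squared-error loss; it is only an illustration and not the exact procedure Keras runs for the network above.
import numpy as np
x_s = np.array([0.5, -1.0, 2.0])
w_s = np.zeros(3)
b_s = 0.0
target = 1.0
lr = 0.1
y_s = 1.0 / (1.0 + np.exp(-(w_s @ x_s + b_s)))   # forward propagation
loss = 0.5 * (y_s - target) ** 2                 # estimate the loss
grad_z = (y_s - target) * y_s * (1.0 - y_s)      # backpropagate through the sigmoid
w_s = w_s - lr * grad_z * x_s                    # gradient-descent update of the weights
b_s = b_s - lr * grad_z
print(loss, w_s, b_s)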
network.fit(train_images, train_labels, epochs=5, batch_size=128)
test_loss, test_acc = network.evaluate(test_images, test_labels)
print(test_loss, test_acc)
Explanation: Gradient descent (main optimization technique)
The weights are adjusted in small increments with the help of the derivative (or gradient) of the loss function, which tells us in which direction to "descend" towards the global minimum. Most optimizers are based on gradient descent, an algorithm that is very efficient on GPUs today but that may settle in local optima.
Epochs and batches. The optimization is generally done in batches of data over successive iterations (epochs) through the whole dataset. "Epochs" are complete runs through the dataset; batches are used because the whole dataset is hard to pass through the network at once.
- 469 batches per epoch
128 * 469 ~= 60000 images (number of samples)
End of explanation
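The batch arithmetic quoted above can be verified directly; this assumes 60000 training samples and a batch size of 128, as used in the fit call.
import math
num_samples = 60000
batch_size = 128
batches_per_epoch = math.ceil(num_samples / batch_size)
print(batches_per_epoch, batches_per_epoch * batch_size)  # 469 batches, covering ~60000 images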
import matplotlib.pyplot as plt
import numpy as np
prediction=network.predict(test_images[0:9])
y_true_cls = np.argmax(test_labels[0:9], axis=1)
y_pred_cls = np.argmax(prediction, axis=1)
fig, axes = plt.subplots(3, 3, figsize=(8,8))
fig.subplots_adjust(hspace=0.5, wspace=0.5)
for i, ax in enumerate(axes.flat):
ax.imshow(test_images[i].reshape(28,28), cmap = 'BuGn')
xlabel = "True: {0}, Pred: {1}".format(y_true_cls[i], y_pred_cls[i])
ax.set_xlabel(xlabel)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
Explanation: Observations:
- slightly smaller accuracy on the test data compared to training data (model overfits on the training data)
Questions:
- Why do we need several epochs?
- What is the main computer limitation when it comes to batches?
- How many epochs are needed, and what is the danger associated with using too many or too few?
Reading:
- https://medium.com/onfido-tech/machine-learning-101-be2e0a86c96a
Run a prediction:
End of explanation
import numpy as np
from keras.datasets import imdb
from keras import models
from keras import layers
from keras import optimizers
from keras import losses
from keras import metrics
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
print(max([max(sequence) for sequence in train_data]))
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
history = model.fit(partial_x_train,
partial_y_train,
epochs=5,
batch_size=512,
validation_data=(x_val, y_val))
p = model.predict(x_test)
print(history.history)
Explanation: Historical essentials
Deep learning, from an algorithmic perspective, is the application of advanced multi-layered filters to learn hidden features in data representation.
Many of the methods that are used today in DL, such as most neural network types (and not only), went through a roughly 20-year-long pause because the computing machines available at the time were too slow to produce the desired results.
Several things precipitated their return around 2010:
- Graphical processors. A GPU has thousands of cores that are specialized in concomitant linear operations. This provided the infrastructure on which "deep" algorithms perform the best.
- The maturity of cloud computing. This enables third parties to use DL methodologies at scale, and with small operating costs.
- Big data. Most AI needs models to be trained on a lot of data, thus AI needs a sufficient level of data availability. The massive acumulation of data (not only in biology) is a very recent phenomenon.
Book recommendation:
- http://www.deeplearningbook.org/ (free to read)
Text classification
The purpose is to categorize films as good or bad based on their reviews. The data is vectorized into binary form.
layer activation
What happens during layer activation? Basically, a set of tensor operations is performed. A simplistic way to understand this is as operations on arrays of matrices, where the atomic operation would be:
output = relu(dot(W, input) + b)
Here the weight matrix W has shape (input_dim (10000), 16) and b is a bias term. In linear algebra terms, this projects the input data onto a 16-dimensional space. More dimensions mean more features, more potential confusion and more computing cost, BUT also more complex representations.
Task:
- Perform sentiment analysis using the code below!
- Plot the accuracy vs loss in both the training and validation data, on the history.history dictionary. Use more epochs. What do you notice? How many epochs do you think you need? What if you monitor for 100000 epochs?
- We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.
- Adjust the learning rate.
- Try to use layers with more hidden units or less hidden units: 32 units, 64 units...
- Try to use the mse loss function instead of binary_crossentropy.
- Try to use the tanh activation (an activation that was popular in the early days of neural networks) instead of relu.
End of explanation |
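As a starting point for the first plotting task in the list above, a minimal sketch over history.history; the metric key names assume the binary_accuracy metric configured above and may need adjusting.
import matplotlib.pyplot as plt
hist = history.history
epochs_range = range(1, len(hist['loss']) + 1)
plt.plot(epochs_range, hist['loss'], label='training loss')
plt.plot(epochs_range, hist['val_loss'], label='validation loss')
plt.plot(epochs_range, hist['binary_accuracy'], label='training accuracy')
plt.plot(epochs_range, hist['val_binary_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()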
4,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minería de Texto
La minería de texto es el proceso de obtener información de alta calidad a partir del texto.
¿Qué clase de información?
-Palabras Clave
Step1: Segundo paso, obtener el contenido.
De los datos de Facebok, solo tenemos los titulos y los urls. Necesitamos los articulos. Para esto, necesitamos acceder a los urls y extraer los datos de la página web. Esto es Web Scraping.
Nada que ver aquí, Pedro presenta el Web Scraping en R.
Tercer paso
Step2: Ya casi podemos comenzar a analizar. Vamos a utilizar el modelo de bolsa de palabras (bag of words). En este modelo contamos la ocurrencia de cada palabra en cada texto.
Pero para lograr esto de la manera más efectiva hay que limpiar el texto
Step3: Una alternativa para estandarizar las palabras es stemming. Esto devuelve a una palabra a la raíz de su familia
Step4: Cuenta de Palabras
Step5: Para verlo de manera más facil para los ojos
Step6: Un metodo muy util para medir la importancia de las palabras es TF-IDF.
Step7: Word Clouds
Los word clouds o nubes de palabras nos ayudan a visualizar el texto de manera más intuitiva. Las palabras más grandes son las más frecuentes. | Python Code:
def testFacebookPageFeedData(page_id, access_token):
# construct the URL string
base = "https://graph.facebook.com/v2.10"
node = "/" + page_id + "/feed" # changed
parameters = "/?fields=message,created_time,reactions.type(LOVE).limit(0).summary(total_count).as(reactions_love),reactions.type(WOW).limit(0).summary(total_count).as(reactions_wow),reactions.type(HAHA).limit(0).summary(total_count).as(reactions_haha),reactions.type(ANGRY).limit(0).summary(total_count).as(reactions_angry),reactions.type(SAD).limit(0).summary(total_count).as(reactions_sad),reactions.type(LIKE).limit(0).summary(total_count).as(reactions_like)&limit={}&access_token={}".format(100, access_token) # changed
url = base + node + parameters
# retrieve data
data = json.loads(request_until_succeed(url))
return data
def Get_News(limit = 10):
result = {}
nex = None
for i in range(limit):
range_dates = []
range_messages = []
range_ids= []
if i == 0:
data = testFacebookPageFeedData(page_id,access_token)
nex = data['paging']['next']
for d in data['data']:
range_dates.append(d['created_time'])
range_messages.append(d['message'])
range_ids.append(d['id'])
result['dates'] = range_dates
result['messages'] = range_messages
result['angry'] = range_angry
result['id'] = range_ids
else:
data = json.loads(request_until_succeed(nex))
try:
nex = data['paging']['next']
except:
break
for d in data['data']:
try:
range_messages.append(d['message'])
range_dates.append(d['created_time'])
range_ids.append(d['id'])
except:
print(d)
result['dates'].extend(range_dates)
result['messages'].extend(range_messages)
result['id'].extend(range_ids)
result_df = pd.DataFrame(result)
return result_df
import pandas as pd
pd.set_option('chained_assignment',None)
diario_libre_fb = pd.read_csv('diario_libre_fb.csv',encoding='latin1')
def get_url(url):
urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', url)
try:
result = urls[0]
except:
result = 'Not found'
return result
diario_libre_fb.head()
Explanation: Text Mining
Text mining is the process of obtaining high-quality information from text.
What kind of information?
- Keywords: "Soplan los vientos, Leonel 2020."
- Sentiment: "El Iphone X es un disparate" (the iPhone X is nonsense).
- Clusters: "all those tweets look alike."
And much more.
Text is the most abundant kind of data, as it is generated every millisecond on a site we all visit: the internet.
We have endless data to play with. But how do we get it?
Texts are UNSTRUCTURED data.
Conventional data-analysis methods do not work here. So what do we do?
The text mining process.
Use case: text mining of Dominican newspapers
First step: get the data
Since newspaper websites are not very friendly for browsing back in time, and they also have different front-page layouts, we turn to a link they have in common: Facebook.
End of explanation
import os
path = os.getcwd()
csv_files =[]
for file in os.listdir(path):
if file.endswith(".csv") and 'diario_libre_fb' not in file:
csv_files.append(os.path.join(path, file))
from matplotlib import rcParams
rcParams['figure.figsize'] = (8, 4) # Size of plot
rcParams['figure.dpi'] = 100 #Dots per inch of plot
rcParams['lines.linewidth'] = 2 # Width of lines of the plot
rcParams['axes.facecolor'] = 'white' #Color of the axes
rcParams['font.size'] = 12 # Size of the text.
rcParams['patch.edgecolor'] = 'white' #Patch edge color.
rcParams['font.family'] = 'StixGeneral' #Font of the plot text.
diarios = ['Diario Libre','El Dia','Hoy','Listin Diario','El Nacional']
noticias_df_all = None
for i,periodico in enumerate(csv_files):
noticias_df = pd.read_csv(csv_files[0],encoding = 'latin1').iloc[:,1:]
noticias_df['Diario'] = diarios[i]
if noticias_df_all is None:
noticias_df_all = noticias_df
else:
noticias_df_all = noticias_df_all.append(noticias_df)
noticias_df_all.reset_index(drop = True,inplace = True)
noticias_df_all.describe()
noticias_df_completas = noticias_df_all.loc[pd.notnull(noticias_df_all.contenidos)]
noticias_df_completas.shape
Explanation: Second step: get the content.
From the Facebook data we only have the headlines and the URLs. We need the articles themselves, so we have to access the URLs and extract the data from the web pages. This is web scraping.
Nothing to see here: Pedro presents web scraping in R.
Third step: analyse the text
With the text saved and structured, all that is left is to analyse it.
End of explanation
pd.options.mode.chained_assignment = None
import nltk
spanish_stops = set(nltk.corpus.stopwords.words('Spanish'))
list(spanish_stops)[:10]
import unicodedata
import re
def strip_accents(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def Clean_Text(text):
words = text.lower().split()
removed_stops = [strip_accents(w) for w in words if w not in spanish_stops and len(w)!=1]
stops_together = " ".join(removed_stops)
letters_only = re.sub("[^a-zA-Z]"," ", stops_together)
return letters_only
noticias_df_completas['contenido limpio'] = noticias_df_completas.contenidos.apply(Clean_Text)
noticias_df_completas[['contenidos','contenido limpio']].head()
Explanation: We can almost start analysing. We will use the bag-of-words model, in which we count the occurrences of each word in each text.
But to do this effectively, the text first has to be cleaned:
Convert to lowercase: Santiago -> santiago
Remove non-alphabetic characters: No pararon. -> No pararon
Remove accents: República Dominicana -> Republica Dominicana
Remove words with no analytical value (stopwords): Falleció la mañana de este sábado -> Falleció mañana sabado
To make this easier we will use the Natural Language Toolkit (NLTK) text library. It contains a huge number of features, such as:
Text corpora
Conversion of sentences into parts of speech (POS).
Tokenization of words and sentences.
And much more...
End of explanation
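As a small illustration of the NLTK tokenization features mentioned above (a sketch; the 'punkt' resource may need to be downloaded once with nltk.download('punkt')).
texto = "El Iphone X es un disparate. Soplan los vientos."
print(nltk.sent_tokenize(texto, language='spanish'))
print(nltk.word_tokenize(texto, language='spanish'))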
from nltk.stem.snowball import SnowballStemmer
spanish_stemmer = SnowballStemmer("spanish")
print(spanish_stemmer.stem("corriendo"))
print(spanish_stemmer.stem("correr"))
def stem_text(text):
stemmed_text = [spanish_stemmer.stem(word) for word in text.split()]
return " ".join(stemmed_text)
noticias_df_completas['contenido stemmed'] = noticias_df_completas['contenido limpio'].apply(stem_text)
noticias_df_completas.head()
Explanation: An alternative for standardizing words is stemming, which reduces a word to the root of its family.
End of explanation
import itertools
def Create_ngrams(all_text,number=1):
result = {}
for text in all_text:
text = [w for w in text.split() if len(w) != 1]
for comb in list(itertools.combinations(text, number)):
found = False
temp_dict = {}
i =0
while not found and i < len(comb):
if comb[i] not in temp_dict:
temp_dict[comb[i]] = "Found"
else:
found = True
i += 1
if not found:
if comb not in result:
result[comb]= 1
else:
result[comb]+=1
df = pd.DataFrame({ str(number) + "-Combinations": list(result.keys()),"Count":list(result.values())})
return df.sort_values(by="Count",ascending=False)
one_ngrams = Create_ngrams(noticias_df_completas['contenido limpio'])
one_ngrams.head()
Explanation: Word Counts
End of explanation
from matplotlib import rcParams
rcParams['figure.figsize'] = (8, 4) # Size of plot
rcParams['figure.dpi'] = 100 #Dots per inch of plot
rcParams['lines.linewidth'] = 2 # Width of lines of the plot
rcParams['axes.facecolor'] = 'white' #Color of the axes
rcParams['font.size'] = 12 # Size of the text.
rcParams['patch.edgecolor'] = 'white' #Patch edge color.
rcParams['font.family'] = 'StixGeneral' #Font of the plot text.
import seaborn as sns
import matplotlib.pyplot as plt
def Plot_nCombination(comb_df,n,title):
sns.barplot(x=str(n) + "-Combinations",y = "Count",data = comb_df.head(10))
plt.title(title)
plt.xlabel("Combination")
plt.ylabel("Count")
plt.xticks(rotation = "75")
plt.show()
Plot_nCombination(one_ngrams,1,"Top 10 palabras más comunes, noticias.")
two_ngrams = Create_ngrams(noticias_df_completas['contenido limpio'],2)
Plot_nCombination(two_ngrams,2,"Top 10 pares de palabras más comunes.")
Explanation: To view it in a way that is easier on the eyes
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
def Calculate_tfidf(text):
corpus = text
vectorizer = TfidfVectorizer( min_df = 0.025, max_df = 0.25)
vector_weights = vectorizer.fit_transform(corpus)
weights= list(np.asarray(vector_weights.mean(axis=0)).ravel())
df = pd.DataFrame({"Word":vectorizer.get_feature_names(),"Score":weights})
df = df.sort_values(by = "Score" ,ascending = False)
return df,vector_weights.toarray()
def Plot_Score(data,title):
sns.barplot(x="Word",y = "Score",data = data.head(10))
plt.title(title)
plt.xlabel("Palabra")
plt.ylabel("Score")
plt.xticks(rotation = "75")
plt.show()
Text_TfIdf,Text_Vector = Calculate_tfidf(noticias_df_completas['contenido limpio'])
Plot_Score(Text_TfIdf,"TF-IDF Top 10 palabras")
Explanation: A very useful method for measuring the importance of words is TF-IDF.
End of explanation
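A toy illustration of the TF-IDF idea on a three-document corpus; this uses scikit-learn's default settings, whereas the analysis above applies min_df/max_df cutoffs.
toy_corpus = ["leonel habla de leonel", "el presidente habla", "el pais escucha"]
toy_vec = TfidfVectorizer()
toy_weights = toy_vec.fit_transform(toy_corpus)
print(dict(zip(toy_vec.get_feature_names(), toy_weights.toarray()[0].round(2))))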
noticias_df_completas = noticias_df_completas.loc[pd.notnull(noticias_df_completas.fechas)]
noticias_df_completas.fechas = pd.to_datetime(noticias_df_completas.fechas)
noticias_df_completas['Mes'] = noticias_df_completas.fechas.dt.month
noticias_df_completas['Año'] = noticias_df_completas.fechas.dt.year
noticias_df_completas.head()
from wordcloud import WordCloud
rcParams['figure.dpi'] = 600
def crear_wordcloud_mes_anio(data,mes,anio):
data = data.loc[(data.Mes == mes) & (data.Año == anio)]
print("Existen {} articulos en los datos para el mes {} del año {}.".format(data.shape[0],mes,anio))
wordcloud = WordCloud(background_color='white',max_words=200,
max_font_size=40,random_state=42).generate(str(data['contenido limpio']))
fig = plt.figure(1)
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
crear_wordcloud_mes_anio(noticias_df_completas,9,2017)
Explanation: Word Clouds
Word clouds help us visualize the text in a more intuitive way. The biggest words are the most frequent ones.
End of explanation |
4,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
------------- User's settings -------------
Step1: ------------- (semi)-Automatic -------------
Step2: Configure GPU/CPU devices
Step3: Load data
Step4: Stack single-channel images (maximum 3 channels) and store to PNG files
Step5: Load trained model
Step6: Evaluate testing set
Step7: Extract the most crucial layer
Step8: Look for the densely/fully connected layer nearest to the classier, which is the one that has the shape of (None, number-of-classes)
==================================================================
Example 1
Step9: Metadata for embeddings
Step10: Predicted values in .TXT
To be uploaded and viewed on http
Step11: Note
Step12: Note
Step13: Confusion matrix | Python Code:
# Location of digested data
input_directory = '/digested/'
# Desired location to save temporary PNG outputs:
png_directory = '/digested_png/'
# Location of saved trained model
model_directory = '/model_directory/'
# Desired location for outputs
output_directory = '/output_directory_transferred/'
# Define native shape of the transferred model (refer to model documentation)
shape=(197,197,3)
Explanation: ------------- User's settings -------------
End of explanation
%matplotlib inline
import keras
import keras.preprocessing.image
import pickle
from keras.layers import *
from keras.models import Sequential
import numpy
import os
import os.path
import matplotlib.pyplot
import pandas
import seaborn
import sklearn.metrics
import tensorflow
from tensorflow.contrib.tensorboard.plugins import projector
Explanation: ------------- (semi)-Automatic -------------
End of explanation
# -------- If using Tensorflow-GPU: -------- #
configuration = tensorflow.ConfigProto()
configuration.gpu_options.allow_growth = True
configuration.gpu_options.visible_device_list = "0"
session = tensorflow.Session(config=configuration)
keras.backend.set_session(session)
# -------- If using Tensorflow (CPU) : -------- #
# configuration = tensorflow.ConfigProto()
# session = tensorflow.Session(config=configuration)
# keras.backend.set_session(session)
if not os.path.exists(output_directory):
os.makedirs(output_directory)
Explanation: Configure GPU/CPU devices:
End of explanation
testing_x = numpy.load(os.path.join(input_directory, "testing_x.npy"))
testing_y = numpy.load(os.path.join(input_directory, "testing_y.npy"))
Explanation: Load data:
End of explanation
%%capture
digest.save_png(testing_x, os.path.join(png_directory,"Testing") )
testing_generator = keras.preprocessing.image.ImageDataGenerator()
testing_generator = testing_generator.flow_from_directory(
batch_size=32,
color_mode="rgb",
directory= os.path.join(png_directory,"Testing"),
target_size=(shape[0], shape[1])
)
Explanation: Stack single-channel images (maximum 3 channels) and store to PNG files
End of explanation
model = keras.models.load_model( os.path.join(model_directory, 'model.h5') )
model.load_weights(os.path.join(model_directory, 'model.h5'))
Explanation: Load trained model:
(can also load checkpoints)
End of explanation
model.evaluate_generator(
generator=testing_generator,
steps=256
)
Explanation: Evaluate testing set
End of explanation
layers = model.layers
model.summary()
Explanation: Extract the most crucial layer
End of explanation
print(layers[-3])
abstract_model = None # Clear cached abstract_model
abstract_model = Sequential([layers[-3]])
abstract_model.summary()
extracted_features = abstract_model.predict_generator(
generator=testing_generator,
steps=256)
Explanation: Look for the densely/fully connected layer nearest to the classifier, the classifier being the layer whose shape is (None, number-of-classes)
==================================================================
Example 1: in case of classification of 7 classes, the last few layers are:
dense_1 (Dense) (None, 1024) 943820
dropout_1 (Dropout) (None, 1024) 0
dense_2 (Dense) (None, 7) 7175
activation_1 (Activation) (None, 7) 0
then look for the layer dense_1 , which has a shape of (None, 1024)
==================================================================
Example 2: in case of classification of 5 classes, the last few layers are:
activation_49 (Activation) (None, 8, 8, 2048) 0
avg_pool (AveragePooling2D) (None, 1, 1, 2048) 0
global_average_pooling2d_1 (Glob (None, 2048) 0
dense_2 (Dense) (None, 5) 10245
then look for the layer global_average_pooling2d_1 , which has a shape of (None, 2048)
End of explanation
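If the printed summary is long, the same information can be inspected programmatically; a minimal sketch (the slice [-6:] is an arbitrary choice just to show the last few layers):
# Print name and output shape of the last few layers to spot the feature layer.
for candidate_layer in model.layers[-6:]:
    print(candidate_layer.name, candidate_layer.output_shape)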
print('Converting numeric labels into class names...')
class_names = pickle.load(open(os.path.join(input_directory, "class_names.sav"), 'rb'))
def save_metadata(file):
with open(file, 'w') as f:
        for i in range(testing_y.shape[0]):
            # testing_y is one-hot encoded (cf. the argmax in the confusion-matrix cell below)
            f.write('{}\n'.format(class_names[numpy.argmax(testing_y[i])]))
save_metadata( os.path.join(output_directory, 'metadata.tsv') )
print('Done.')
Explanation: Metadata for embeddings
End of explanation
numpy.savetxt( os.path.join(output_directory, 'table_of_features.txt' ), extracted_features, delimiter='\t')
Explanation: Predicted values in .TXT
To be uploaded and viewed on http://projector.tensorflow.org
End of explanation
numpy.save( os.path.join(output_directory, 'table_of_features.npy' ), extracted_features )
extracted_features = numpy.load(os.path.join(output_directory, 'table_of_features.npy'))
embedding_var = tensorflow.Variable(extracted_features)
embedSess = tensorflow.Session()
# save variable in session
embedSess.run(embedding_var.initializer)
# save session (only used variable) to file
saver = tensorflow.train.Saver([embedding_var])
saver.save(embedSess, 'tf.ckpt')
summary_writer = tensorflow.summary.FileWriter('./')
config = tensorflow.contrib.tensorboard.plugins.projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = embedding_var.name
embedding.metadata_path = 'metadata.tsv' # this metadata_path needs to be modified later. See note.
tensorflow.contrib.tensorboard.plugins.projector.visualize_embeddings(summary_writer, config)
embedSess.close()
Explanation: Note:
Once finished, open http://projector.tensorflow.org on web-browser.
Click "Load data" on the left panel.
Step 1: Load a TSV file of vectors >> Choose file: 'table_of_features.txt'
Step 2: Load a TSV file of metadata >> Choose file: 'metadata.tsv'
Hit ESC or click outside the load data window to dismiss.
Predicted values in .NPY
Used for generating Tensorboard embeddings to be viewed locally on http://localhost:6006
End of explanation
metrics = pandas.read_csv(os.path.join(model_directory, 'training.csv') )
print(metrics)
matplotlib.pyplot.plot(metrics["acc"])
matplotlib.pyplot.plot(metrics["val_acc"])
matplotlib.pyplot.plot(metrics["loss"])
matplotlib.pyplot.plot(metrics["val_loss"])
Explanation: Note:
Tensorboard embeddings files will be saved in the same location as this script.
Collect the following files into one folder:
metadata.tsv
checkpoint
projector_config.pbtxt
tf.ckpt.index
tf.ckpt.meta
tf.ckpt.data-00000-of-00001
Open with any text editor : "projector_config.pbtxt"
"/path/to/logdir/metadata.tsv" has to be specified, CANNOT be relative path "./metadata.tsv", nor "~/metadata.tsv"
Then run the following command in a terminal: tensorboard --logdir="/path/to/logdir"
Next, open web-browser, connect to http://localhost:6006
Plot categorical accuracy and loss
End of explanation
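For easier reading, the four metric curves can be labelled; an optional sketch (column names assumed to match the training.csv used above):
matplotlib.pyplot.figure(figsize=(8, 5))
for column in ["acc", "val_acc", "loss", "val_loss"]:
    matplotlib.pyplot.plot(metrics[column], label=column)
matplotlib.pyplot.xlabel("epoch")
matplotlib.pyplot.legend()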
predicted = model.predict(
batch_size=50,
x=testing_x
)
predicted = numpy.argmax(predicted, -1)
expected = numpy.argmax(testing_y[:, :], -1)
confusion = sklearn.metrics.confusion_matrix(expected, predicted)
confusion = pandas.DataFrame(confusion)
matplotlib.pyplot.figure(figsize=(12, 8))
seaborn.heatmap(confusion, annot=True)
matplotlib.pyplot.savefig( os.path.join(output_directory, 'confusion_matrix.eps') , format='eps', dpi=600)
Explanation: Confusion matrix
End of explanation |
4,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Image Augmentations with BaseImageAugmentationLayer
Author
Step1: First, let's implement some helper functions to visualize intermediate results
Step2: BaseImageAugmentationLayer Introduction
Image augmentation should operate on a sample-wise basis; not batch-wise.
This is a common mistake many machine learning practitioners make when implementing
custom techniques.
BaseImageAugmentation offers a set of clean abstractions to make implementing image
augmentation techniques on a sample wise basis much easier.
This is done by allowing the end user to override an augment_image() method and then
performing automatic vectorization under the hood.
Most augmentation techniques also must sample from one or more random distributions.
KerasCV offers an abstraction to make random sampling end user configurable
Step3: Our layer overrides BaseImageAugmentationLayer.augment_image(). This method is
used to augment images given to the layer. By default, using
BaseImageAugmentationLayer gives you a few nice features for free
Step4: Next, let's augment it and visualize the result
Step5: Looks great! We can also call our layer on batched inputs
Step7: Adding Random Behavior with the FactorSampler API.
Usually an image augmentation technique should not do the same thing on every
invocation of the layer's __call__ method.
KerasCV offers the FactorSampler API to allow users to provide configurable random
distributions.
Step8: Now, we can configure the random behavior of ou RandomBlueTint layer.
We can give it a range of values to sample from
Step9: Each image is augmented differently with a random factor sampled from the range
(0, 0.5).
We can also configure the layer to draw from a normal distribution
Step11: As you can see, the augmentations now are drawn from a normal distributions.
There are various types of FactorSamplers including UniformFactorSampler,
NormalFactorSampler, and ConstantFactorSampler. You can also implement you own.
Overridding get_random_transformation()
Now, suppose that your layer impacts the prediction targets
Step12: To make use of these new methods, you will need to feed your inputs in with a
dictionary maintaining a mapping from images to targets.
As of now, KerasCV supports the following label types
Step13: Now if we call our layer on the inputs
Step14: Both the inputs and labels are augmented.
Note how when transformation is > 100 the label is modified to contain 2.0 as
specified in the layer above.
value_range support
Imagine you are using your new augmentation layer in many pipelines.
Some pipelines have values in the range [0, 255], some pipelines have normalized their
images to the range [-1, 1], and some use a value range of [0, 1].
If a user calls your layer with an image in value range [0, 1], the outputs will be
nonsense!
Step16: Note that this is an incredibly weak augmentation!
Factor is only set to 0.1.
Let's resolve this issue with KerasCV's value_range API.
Step17: Now our elephants are only slgihtly blue tinted. This is the expected behavior when
using a factor of 0.1. Great!
Now users can configure the layer to support any value range they may need. Note that
only layers that interact with color information should use the value range API.
Many augmentation techniques, such as RandomRotation will not need this.
Auto vectorization performance
If you are wondering | Python Code:
import tensorflow as tf
from tensorflow import keras
import keras_cv
from tensorflow.keras import layers
from keras_cv import utils
from keras_cv.layers import BaseImageAugmentationLayer
import matplotlib.pyplot as plt
tf.autograph.set_verbosity(0)
Explanation: Custom Image Augmentations with BaseImageAugmentationLayer
Author: lukewood<br>
Date created: 2022/04/26<br>
Last modified: 2022/04/26<br>
Description: Use BaseImageAugmentationLayer to implement custom data augmentations.
Overview
Data augmentation is an integral part of training any robust computer vision model.
While KerasCV offers a plethora of prebuilt high-quality data augmentation techniques,
you may still want to implement your own custom technique.
KerasCV offers a helpful base class for writing data augmentation layers:
BaseImageAugmentationLayer.
Any augmentation layer built with BaseImageAugmentationLayer will automatically be
compatible with the KerasCV RandomAugmentationPipeline class.
This guide will show you how to implement your own custom augmentation layers using
BaseImageAugmentationLayer. As an example, we will implement a layer that tints all
images blue.
End of explanation
def imshow(img):
img = img.astype(int)
plt.axis("off")
plt.imshow(img)
plt.show()
def gallery_show(images):
images = images.astype(int)
for i in range(9):
image = images[i]
plt.subplot(3, 3, i + 1)
plt.imshow(image.astype("uint8"))
plt.axis("off")
plt.show()
Explanation: First, let's implement some helper functions to visualize intermediate results
End of explanation
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
def augment_image(self, image, transformation=None):
# image is of shape (height, width, channels)
[*others, blue] = tf.unstack(image, axis=-1)
blue = tf.clip_by_value(blue + 100, 0.0, 255.0)
return tf.stack([*others, blue], axis=-1)
Explanation: BaseImageAugmentationLayer Introduction
Image augmentation should operate on a sample-wise basis; not batch-wise.
This is a common mistake many machine learning practitioners make when implementing
custom techniques.
BaseImageAugmentationLayer offers a set of clean abstractions to make implementing image
augmentation techniques on a sample-wise basis much easier.
This is done by allowing the end user to override an augment_image() method and then
performing automatic vectorization under the hood.
Most augmentation techniques also must sample from one or more random distributions.
KerasCV offers an abstraction to make random sampling end user configurable: the
FactorSampler API.
Finally, many augmentation techniques require some information about the pixel values
present in the input images. KerasCV offers the value_range API to simplify the handling of this.
In our example, we will use the FactorSampler API, the value_range API, and
BaseImageAugmentationLayer to implement a robust, configurable, and correct RandomBlueTint layer.
Overriding augment_image()
Let's start off with the minimum:
End of explanation
SIZE = (300, 300)
elephants = tf.keras.utils.get_file(
"african_elephant.jpg", "https://i.imgur.com/Bvro0YD.png"
)
elephants = tf.keras.utils.load_img(elephants, target_size=SIZE)
elephants = tf.keras.utils.img_to_array(elephants)
imshow(elephants)
Explanation: Our layer overrides BaseImageAugmentationLayer.augment_image(). This method is
used to augment images given to the layer. By default, using
BaseImageAugmentationLayer gives you a few nice features for free:
support for unbatched inputs (HWC Tensor)
support for batched inputs (BHWC Tensor)
automatic vectorization on batched inputs (more information on this in automatic
vectorization performance)
Let's check out the result. First, let's download a sample image:
End of explanation
layer = RandomBlueTint()
augmented = layer(elephants)
imshow(augmented.numpy())
Explanation: Next, let's augment it and visualize the result:
End of explanation
layer = RandomBlueTint()
augmented = layer(tf.expand_dims(elephants, axis=0))
imshow(augmented.numpy()[0])
Explanation: Looks great! We can also call our layer on batched inputs:
End of explanation
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
    """RandomBlueTint randomly applies a blue tint to images.
Args:
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
floats: `(0.5, 0.5)`.
    """

    def __init__(self, factor, **kwargs):
super().__init__(**kwargs)
self.factor = utils.parse_factor(factor)
def augment_image(self, image, transformation=None):
[*others, blue] = tf.unstack(image, axis=-1)
blue_shift = self.factor() * 255
blue = tf.clip_by_value(blue + blue_shift, 0.0, 255.0)
return tf.stack([*others, blue], axis=-1)
Explanation: Adding Random Behavior with the FactorSampler API.
Usually an image augmentation technique should not do the same thing on every
invocation of the layer's __call__ method.
KerasCV offers the FactorSampler API to allow users to provide configurable random
distributions.
End of explanation
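For reference, a fixed (non-random) shift can be expressed through the same API; a small sketch, assuming ConstantFactorSampler is exposed the same way as the NormalFactorSampler used below:
# Assumed API, mirroring keras_cv.NormalFactorSampler used later in this guide.
constant_factor = keras_cv.ConstantFactorSampler(0.4)
layer = RandomBlueTint(factor=constant_factor)
imshow(layer(elephants).numpy())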
many_elephants = tf.repeat(tf.expand_dims(elephants, axis=0), 9, axis=0)
layer = RandomBlueTint(factor=0.5)
augmented = layer(many_elephants)
gallery_show(augmented.numpy())
Explanation: Now, we can configure the random behavior of our RandomBlueTint layer.
We can give it a range of values to sample from:
End of explanation
many_elephants = tf.repeat(tf.expand_dims(elephants, axis=0), 9, axis=0)
factor = keras_cv.NormalFactorSampler(
mean=0.3, stddev=0.1, min_value=0.0, max_value=1.0
)
layer = RandomBlueTint(factor=factor)
augmented = layer(many_elephants)
gallery_show(augmented.numpy())
Explanation: Each image is augmented differently with a random factor sampled from the range
(0, 0.5).
We can also configure the layer to draw from a normal distribution:
End of explanation
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
    """RandomBlueTint randomly applies a blue tint to images.
Args:
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
floats: `(0.5, 0.5)`.
    """

    def __init__(self, factor, **kwargs):
super().__init__(**kwargs)
self.factor = utils.parse_factor(factor)
def get_random_transformation(self, **kwargs):
# kwargs holds {"images": image, "labels": label, etc...}
return self.factor() * 255
def augment_image(self, image, transformation=None, **kwargs):
[*others, blue] = tf.unstack(image, axis=-1)
blue = tf.clip_by_value(blue + transformation, 0.0, 255.0)
return tf.stack([*others, blue], axis=-1)
def augment_label(self, label, transformation=None, **kwargs):
# you can use transformation somehow if you want
if transformation > 100:
# i.e. maybe class 2 corresponds to blue images
return 2.0
return label
def augment_bounding_boxes(self, bounding_boxes, transformation=None, **kwargs):
# you can also perform no-op augmentations on label types to support them in
# your pipeline.
return bounding_boxes
Explanation: As you can see, the augmentations now are drawn from a normal distribution.
There are various types of FactorSamplers including UniformFactorSampler,
NormalFactorSampler, and ConstantFactorSampler. You can also implement your own, as sketched below.
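A possible sketch of a custom sampler (illustrative only; it assumes the FactorSampler base class is importable from keras_cv.core, which may differ between KerasCV versions):
class SquaredUniformFactorSampler(keras_cv.core.FactorSampler):
    # Illustrative: draws uniform noise and squares it, biasing toward small factors.
    def __call__(self, shape=(), dtype="float32"):
        return tf.random.uniform(shape=shape, minval=0.0, maxval=1.0, dtype=dtype) ** 2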
Overriding get_random_transformation()
Now, suppose that your layer impacts the prediction targets: whether they are bounding
boxes, classification labels, or regression targets.
Your layer will need to have information about what augmentations are taken on the image
when augmenting the label.
Luckily, BaseImageAugmentationLayer was designed with this in mind.
To handle this issue, BaseImageAugmentationLayer has an overrideable
get_random_transformation() method alongside with augment_label(),
augment_target() and augment_bounding_boxes().
augment_segmentation_map() and others will be added in the future.
Let's add this to our layer.
End of explanation
labels = tf.constant([[1, 0]])
inputs = {"images": elephants, "labels": labels}
Explanation: To make use of these new methods, you will need to feed your inputs in with a
dictionary maintaining a mapping from images to targets.
As of now, KerasCV supports the following label types:
labels via augment_label().
bounding_boxes via augment_bounding_boxes().
In order to use augmentation layers alongside your prediction targets, you must package
your inputs as follows:
End of explanation
layer = RandomBlueTint(factor=(0.6, 0.6))
augmented = layer(inputs)
print(augmented["labels"])
Explanation: Now if we call our layer on the inputs:
End of explanation
layer = RandomBlueTint(factor=(0.1, 0.1))
elephants_0_1 = elephants / 255
print("min and max before augmentation:", elephants_0_1.min(), elephants_0_1.max())
augmented = layer(elephants_0_1)
print(
"min and max after augmentation:",
(augmented.numpy()).min(),
augmented.numpy().max(),
)
imshow((augmented * 255).numpy().astype(int))
Explanation: Both the inputs and labels are augmented.
Note how when transformation is > 100 the label is modified to contain 2.0 as
specified in the layer above.
value_range support
Imagine you are using your new augmentation layer in many pipelines.
Some pipelines have values in the range [0, 255], some pipelines have normalized their
images to the range [-1, 1], and some use a value range of [0, 1].
If a user calls your layer with an image in value range [0, 1], the outputs will be
nonsense!
End of explanation
class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
    """RandomBlueTint randomly applies a blue tint to images.
Args:
value_range: value_range: a tuple or a list of two elements. The first value
represents the lower bound for values in passed images, the second represents
the upper bound. Images passed to the layer should have values within
`value_range`.
factor: A tuple of two floats, a single float or a
`keras_cv.FactorSampler`. `factor` controls the extent to which the
image is blue shifted. `factor=0.0` makes this layer perform a no-op
operation, while a value of 1.0 uses the degenerated result entirely.
Values between 0 and 1 result in linear interpolation between the original
image and a fully blue image.
Values should be between `0.0` and `1.0`. If a tuple is used, a `factor` is
sampled between the two values for every image augmented. If a single float
is used, a value between `0.0` and the passed float is sampled. In order to
ensure the value is always the same, please pass a tuple with two identical
floats: `(0.5, 0.5)`.
    """

    def __init__(self, value_range, factor, **kwargs):
super().__init__(**kwargs)
self.value_range = value_range
self.factor = utils.parse_factor(factor)
def get_random_transformation(self, **kwargs):
# kwargs holds {"images": image, "labels": label, etc...}
return self.factor() * 255
def augment_image(self, image, transformation=None, **kwargs):
image = utils.transform_value_range(image, self.value_range, (0, 255))
[*others, blue] = tf.unstack(image, axis=-1)
blue = tf.clip_by_value(blue + transformation, 0.0, 255.0)
result = tf.stack([*others, blue], axis=-1)
result = utils.transform_value_range(result, (0, 255), self.value_range)
return result
def augment_label(self, label, transformation=None, **kwargs):
# you can use transformation somehow if you want
if transformation > 100:
# i.e. maybe class 2 corresponds to blue images
return 2.0
return label
def augment_bounding_boxes(self, bounding_boxes, transformation=None, **kwargs):
# you can also perform no-op augmentations on label types to support them in
# your pipeline.
return bounding_boxes
layer = RandomBlueTint(value_range=(0, 1), factor=(0.1, 0.1))
elephants_0_1 = elephants / 255
print("min and max before augmentation:", elephants_0_1.min(), elephants_0_1.max())
augmented = layer(elephants_0_1)
print(
"min and max after augmentation:",
augmented.numpy().min(),
augmented.numpy().max(),
)
imshow((augmented * 255).numpy().astype(int))
Explanation: Note that this is an incredibly weak augmentation!
Factor is only set to 0.1.
Let's resolve this issue with KerasCV's value_range API.
End of explanation
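Once the layer handles value ranges, it can be dropped into a preprocessing pipeline; a minimal sketch (illustrative only, assuming float32 images in [0, 255] with dummy integer labels):
images_ds = tf.data.Dataset.from_tensor_slices((many_elephants, tf.zeros(9, dtype=tf.int32)))
pipeline_layer = RandomBlueTint(value_range=(0, 255), factor=(0.0, 0.5))
augmented_ds = images_ds.batch(3).map(lambda x, y: (pipeline_layer(x), y))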
class UnVectorizable(keras_cv.layers.BaseImageAugmentationLayer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# this disables BaseImageAugmentationLayer's Auto Vectorization
self.auto_vectorize = False
Explanation: Now our elephants are only slightly blue tinted. This is the expected behavior when
using a factor of 0.1. Great!
Now users can configure the layer to support any value range they may need. Note that
only layers that interact with color information should use the value range API.
Many augmentation techniques, such as RandomRotation will not need this.
Auto vectorization performance
If you are wondering:
Does implementing my augmentations on a sample-wise basis carry performance
implications?
You are not alone!
Luckily, I have performed extensive analysis on the performance of automatic
vectorization, manual vectorization, and unvectorized implementations.
In this benchmark, I implemented a RandomCutout layer using auto vectorization, no auto
vectorization and manual vectorization.
All of these were benchmarked inside of an @tf.function annotation.
They were also each benchmarked with the jit_compile argument.
The following chart shows the results of this benchmark:
The primary takeaway should be that the difference between manual vectorization and
automatic vectorization is marginal!
Please note that Eager mode performance will be drastically different.
Common gotchas
Some layers are not able to be automatically vectorized.
An example of this is GridMask.
If you receive an error when invoking your layer, try adding the following to your
constructor:
End of explanation |
4,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Secondary structure prediction
Proteins have four levels of structure
Step1: Training GOR III
We can now create our GOR3 instance and train it with the extracted training data
Step2: Predicting structures
Now that it's trained, our object can be used to issue structure predictions. Here are the results for the sequences from the validation data (real structure on top, predicted structure at the bottom)
Step3: Checking prediction quality
Here are the quality measures for our predictions, as explained in the "Prediction quality" section | Python Code:
datasetPath = joinpath("resources", "dataset")
with open(joinpath(datasetPath, "CATH_info.txt")) as infoFile:
with open(joinpath(datasetPath, "CATH_info-PARSED.txt"), 'w') as outFile:
dsspPath = joinpath(datasetPath, "dssp", "")
for line in infoFile.readlines():
d = DSSP(dsspPath + line[0:4] + ".dssp")
description = "> " + d.identifier + "|" + d.protein + "|" + d.organism
seq, struct = d.getSequenceStructure(line[4])
outFile.writelines(l + "\n" for l in [description,seq,struct])
with open(joinpath(datasetPath, "CATH_info_test.txt")) as infoFile:
with open(joinpath(datasetPath, "CATH_info_test-PARSED.txt"), 'w') as outFile:
dsspTestPath = joinpath(datasetPath, "dssp_test", "")
for line in infoFile.readlines():
d = DSSP(dsspTestPath + line[0:4] + ".dssp")
description = "> " + d.identifier + "|" + d.protein + "|" + d.organism
seq, struct = d.getSequenceStructure(line[4])
outFile.writelines(l + "\n" for l in [description,seq,struct])
Explanation: Secondary structure prediction
Proteins have four levels of structure:
* The primary structure is the sequence of amino acids that makes up the protein.
* The secondary structure refers to particular shapes that sub-sequences of the protein tend to form, due to hydrogen bonds. The most common among these are alpha helices and beta sheets.
* The tertiary structure is how the whole protein is "folded" (i.e. its 3D structure). The folding is due to hydrophobic interactions, and stops when the shape is stabilized by other interactions.
* The quartenary structure is particular to multimers, proteins that are made up of multiple subunits. It describes how these subunits are arranged together.
The goal of this project is to predict the secondary structure of a protein, based on its primary structure. This is useful in the context of multiple sequence alignment, since proteins that exerce the same function are likely to have similar secondary structure as well as related primary structures. Fortunately for us, secondary structures can be observed experimentally via multiple techniques, granting us the possibility to train and verify our prediction system with real-world data. Furthermore, the folding of proteins (into their secondary and tertiary stable structures) is highly deterministic, which means it can be predicted based on the primary structure alone.
DSSP definition
DSSP stands for Define Secondary Structure of Proteins and is a standard for how the atomic 3D arrangement of a protein is translated into secondary structures. DSSP admits eight types of secondary structures and assigns one to each amino acid from a protein by examining their spacial coordinates. We won't be implementing DSSP, however we will need to parse .dssp files in order to extract secondary structure information to train and verify our prediction system. Here is a class that parses such files:
Note that we won't be using all eight structures, but rather regroup them into four classes:
* Helix (H) regroups 3,4 and 5-turn helixes
* Sheet (E) regroups parallel/antiparallel $\beta$-sheets and isolated $\beta$-bridges
* Turn (T) is the hydrogen bonded turn
* Coil (C) regroups coils (no structure) and bends
GOR prediction
GOR stands for Garnier-Osguthorpe-Robson and is a secondary structure prediction method based on information theory. It has had several releases, each increasing the prediction accuracy, but we will only focus on the GOR III version here. This version uses two kinds of information to issue a prediction, all based on known protein-structure pairs parsed from a training dataset. In the following formulas, $R_j$ is the residue (amino acid) at index $j$ whose structure is being predicted, $S_j$ is one of the structures, $n-S_j$ represents all of the structures except for $S_j$, $f_{c_1,...c_k}$ is the frequency with which all conditions $c_1$ through $c_k$ are met within the training dataset and $I(\Delta S, ...) = I(S, ...) - I(n-S, ...)$ is the information difference between the predictions concerning $S$ and $n-S$.
* Individual information concerns only the amino acid at position $j$: $$I(\Delta S_j, R_j) = \log{\left( \frac{f_{S_j,R_j}}{f_{n-S_j,R_j}} \right)} + \log{\left( \frac{f_{n-S_j}}{f_{S_j}} \right)}$$
* Directional information was introduced in version 2 and concerns the amino acids surrounding position $j$, from $j-n$ to $j+n$: $$I(\Delta S_j, R_{j+m}) = \log{\left( \frac{f_{S_j,R_{j+m}}}{f_{n-S_j,R_{j+m}}} \right)} + \log{\left( \frac{f_{n-S_j}}{f_{S_j}} \right)}$$
* Pair-wise information has replaced directional information since version 3 and concerns the pairs $(R_j, R_{j+m}) \forall m \in [-n, -1] \cup [1, n]$: $$I(\Delta S_j, R_{j+m} | R_j) = \log{\left( \frac{f_{S_j,R_{j+m},R_j}}{f_{n-S_j,R_{j+m},R_j}} \right)} + \log{\left( \frac{f_{n-S},R_j}{f_{S},R_j} \right)}$$
Overall, the formula applied for the GOR III prediction is: $$I(\Delta S_j, R_{j-n} ... R_{j+n}) = I(\Delta S_j, R_j) + \sum_{m=-n, m \neq 0}^{m=n}{I(\Delta S_j, R_{j+m} | R_j)}$$
Here is an implementation of the algorithm, that we can train with new sequences then use to predict the structure of other sequences:
Prediction quality
Our model can be seen as a set of binary classifiers, each predicting if a given amino acid belongs or not to one structure. There is one classifier per structure, each has its own values for $TP$ (true positives), $TN$ (true negatives), $FP$ (false positives) and $FN$ (false negatives).
Q3
$Q_3$ is equal to the number of correctly predicted residues, divided by the total number of residues. In our case, since we know of four different structures, anything over $\frac{1}{4} = 0.25$ is better than completely random predictions.
MCC
Matthews Correlation Coefficient (MCC)
$$\frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$$
ROC Curve
Receiver Operating Characteristic (ROC) curve is a graphical representation of a binary classifier's results. The curve is obtained by plotting together the true positive rate or TPR along the $y$ axis, and the false positive rate or FPR along the $x$ axis. The data is generated for different threshold values which discriminate the positive and negative results. The curve is usually displayed along with a diagonal line joining the dots $(0,0)$ and $(1,1)$, representing the random classifier's ROC (which has a 50% chance of classifying an instance as positive or negative). If our classifier's curve is on top of that line, it means it performs better. Furthermore, the point from our curve closest to $(0,1)$ has the best threshold for efficiently classifying data.
AUC
Area Under Curve (AUC) is the area under a curve. The AUC of a ROC Curve is equal to the probability that its binary classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.
Results
Time to run that bytecode.
Parsing DSSP files
The first step towards predicting structures is to gather training and validation data. Fortunately, our dear TA has provided us with all the .dssp files we need. The only miscellaneous stuff here is that we're not interested in the whole data, just some chains from some files (specified in info files). The output goes in the PARSED files in a format similar to fasta, where the structure follows the sequence.
End of explanation
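Before relying on the model's own quality report, the two headline metrics can be written down directly from their definitions in the "Prediction quality" section; a small self-contained sketch with toy numbers, independent of the GOR3 class used below:
import math

def mcc_from_counts(tp, tn, fp, fn):
    # Matthews Correlation Coefficient for one per-structure binary classifier.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def q3(predicted, real):
    # Q3: correctly predicted residues divided by the total number of residues.
    return sum(p == r for p, r in zip(predicted, real)) / len(real)

print(q3("HHEEC", "HHECC"), mcc_from_counts(tp=40, tn=120, fp=10, fn=8))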
gor3Pred = GOR3()
with open(joinpath(datasetPath, "CATH_info-PARSED.txt")) as inFile:
index = 0
sequence = ""
for line in inFile.readlines():
line = line.strip().upper()
if not (line=="" or line[0]==">"):
#Line is a sequence
if index % 2 == 0:
sequence = line
#Line is a structure
else:
gor3Pred.train(sequence, line)
index += 1
Explanation: Training GOR III
We can now create our GOR3 instance and train it with the extracted training data:
End of explanation
with open(joinpath(datasetPath, "CATH_info_test-PARSED.txt")) as inFile:
index = 0
sequence = ""
for line in inFile.readlines():
line = line.strip().upper()
if not (line=="" or line[0]==">"):
#Line is a sequence
if index % 2 == 0:
sequence = line
#Line is a structure
else:
structure = line
prediction = gor3Pred.predict(sequence, structure)
inter = "".join([":" if s1==s2 else " " for s1,s2 in zip(structure, prediction)])
print("-------- STRUCTURE (top) vs PREDICTION (bottom) --------")
print("True Positive Rate:", round(inter.count(":")/len(inter), 2))
print()
chunk = 80
for start in range(0, len(structure), chunk):
stop = start+chunk+1 if start+chunk+1<=len(structure) else len(structure)
print(structure[start:stop])
print(inter[start:stop])
print(prediction[start:stop])
print()
print()
index += 1
Explanation: Predicting structures
Now that it's trained, our object can be used to issue structure predictions. Here are the results for the sequences from the validation data (real structure on top, predicted structure at the bottom):
End of explanation
q3, mcc = gor3Pred.getQuality()
print("----- Overall quality -----")
print("Q3:", round(q3, 2))
print("MCC:", round(mcc, 2))
print()
gor3Pred.plotROC()
Explanation: Checking prediction quality
Here are the quality measures for our predictions, as explained in the "Prediction quality" section:
End of explanation |
4,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: End to end example of model training and streaming/(non streaming) inference with TF/TFlite
We will train a simple conv model on artificially generated data and run inference in non streaming and striming modes with TF/TFLite
Imports
Step2: Prepare artificial train data
Step3: Prepare non streaming batched model
Step4: Train non streaming batched model
Step5: Run inference with TF
TF Run non streaming inference
Step6: TF Run streaming inference with internal state
Step7: TF Run streaming inference with external state
Step8: Run inference with TFlite
Run non streaming inference with TFLite
Step9: Run streaming inference with TFLite | Python Code:
!git clone https://github.com/google-research/google-research.git
# install tensorflow_model_optimization
!pip install tensorflow_model_optimization
import sys
import os
import tarfile
import urllib
import zipfile
sys.path.append('./google-research')
Explanation: Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
import tensorflow as tf
import numpy as np
import tensorflow.compat.v1 as tf1
import logging
from kws_streaming.models import model_params
from kws_streaming.models import model_flags
from kws_streaming.train import test
from kws_streaming import data
tf1.disable_eager_execution()
from kws_streaming.models import models
from kws_streaming.layers import modes
from kws_streaming.layers.modes import Modes
from kws_streaming.layers import speech_features
from kws_streaming.layers.stream import Stream
from kws_streaming.models import utils
config = tf1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf1.Session(config=config)
# general imports
import matplotlib.pyplot as plt
import os
import json
import numpy as np
import scipy as scipy
import scipy.io.wavfile as wav
import scipy.signal
tf.__version__
tf1.reset_default_graph()
sess = tf1.Session()
tf1.keras.backend.set_session(sess)
tf1.keras.backend.set_learning_phase(0)
Explanation: End to end example of model training and streaming/(non streaming) inference with TF/TFlite
We will train a simple conv model on artificially generated data and run inference in non-streaming and streaming modes with TF/TFLite
Imports
End of explanation
samplerate = 16000
singnal_len = samplerate  # equivalent to 1 second of audio
label_count = 4
train_data = []
train_label = []
data_size = 1024
for b in range(data_size):
noise = np.random.normal(size = singnal_len, scale = 0.2)
label = np.mod(b, label_count)
frequency = (label+1)*2
signal = np.cos(2.0*np.pi*frequency*np.arange(samplerate)/samplerate) + noise
train_data.append(signal)
train_label.append(label)
train_data = np.array(train_data)
train_label = np.array(train_label)
ind = 0
plt.plot(train_data[ind])
print("label " + str(train_label[ind]))
Explanation: Prepare artificial train data
End of explanation
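A quick sanity check on the generated labels (purely illustrative):
# Each class should appear data_size / label_count times with this construction.
print(np.unique(train_label, return_counts=True))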
FLAGS = model_params.Params()
flags = model_flags.update_flags(FLAGS)
flags.desired_samples = singnal_len
epsilon = 0.0001
batch_size = 16
input_audio = tf.keras.layers.Input(shape=(singnal_len,), batch_size=batch_size)
net = input_audio
net = speech_features.SpeechFeatures(speech_features.SpeechFeatures.get_params(flags))(net)
net = tf.keras.backend.expand_dims(net)
net = Stream(cell=tf.keras.layers.Conv2D( filters=5, kernel_size=(3,3), activation='linear'))(net)
net = tf.keras.layers.BatchNormalization(epsilon=epsilon)(net)
net = tf.keras.layers.ReLU(6.)(net)
net = Stream(cell=tf.keras.layers.Flatten())(net)
net = tf.keras.layers.Dense(units=label_count)(net)
model_non_stream_batch = tf.keras.Model(input_audio, net)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(epsilon=flags.optimizer_epsilon)
model_non_stream_batch.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
Explanation: Prepare non streaming batched model
End of explanation
# we just overfit the model on artificial data
for i in range(data_size//batch_size):
ind = i * batch_size
train_data_batch = train_data[ind:ind+batch_size,]
train_label_batch = train_label[ind:ind+batch_size,]
result = model_non_stream_batch.train_on_batch(train_data_batch, train_label_batch)
if not (i % 5):
print("accuracy on training batch " + str(result[1] * 100))
tf.keras.utils.plot_model(
model_non_stream_batch,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
Explanation: Train non streaming batched model
End of explanation
# convert model to inference mode with batch one
inference_batch_size = 1
tf.keras.backend.set_learning_phase(0)
flags.batch_size = inference_batch_size # set batch size
model_non_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE)
#model_non_stream.summary()
tf.keras.utils.plot_model(
model_non_stream,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
predictions = model_non_stream.predict(train_data)
predicted_labels = np.argmax(predictions, axis=1)
predicted_labels
accuracy = np.sum(predicted_labels==train_label)/len(train_label)
print("accuracy " + str(accuracy * 100))
Explanation: Run inference with TF
TF Run non streaming inference
End of explanation
# convert model to streaming mode
flags.batch_size = inference_batch_size # set batch size
model_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_INTERNAL_STATE_INFERENCE)
#model_stream.summary()
tf.keras.utils.plot_model(
model_stream,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
sream_predicted_labels = []
for input_data in train_data: # loop over all audio sequences
# add batch dim - it is always 1 for streaming inference mode
input_data = np.expand_dims(input_data, axis=0)
# output_predictions = []
# output_ids = []
# run streaming inference on one audio sequence
start = 0
end = flags.window_stride_samples
while end <= input_data.shape[1]: # loop over one audio sequence sample by sample
stream_update = input_data[:, start:end]
# get new frame from stream of data
stream_output_prediction = model_stream.predict(stream_update)
stream_output_arg = np.argmax(stream_output_prediction)
# output_predictions.append(stream_output_prediction[0][stream_output_arg])
# output_ids.append(stream_output_arg)
# update indexes of streamed updates
start = end
end = start + flags.window_stride_samples
sream_predicted_labels.append(stream_output_arg)
# validate that accuracy in streaming mode is the same with accuracy in non streaming mode
stream_accuracy_internal_state = np.sum(sream_predicted_labels==train_label)/len(train_label)
print("accuracy " + str(stream_accuracy_internal_state * 100))
Explanation: TF Run streaming inference with internal state
End of explanation
# convert model to streaming mode
flags.batch_size = inference_batch_size # set batch size
model_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)
#model_stream.summary()
tf.keras.utils.plot_model(
model_stream,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
sream_external_state_predicted_labels = []
for input_data in train_data: # loop over all audio sequences
# add batch dim - it is always 1 for streaming inference mode
input_data = np.expand_dims(input_data, axis=0)
# output_predictions = []
# output_ids = []
inputs = []
for s in range(len(model_stream.inputs)):
inputs.append(np.zeros(model_stream.inputs[s].shape, dtype=np.float32))
reset_state = True
if reset_state:
for s in range(len(model_stream.inputs)):
inputs[s] = np.zeros(model_stream.inputs[s].shape, dtype=np.float32)
start = 0
end = flags.window_stride_samples
while end <= input_data.shape[1]:
# get new frame from stream of data
stream_update = input_data[:, start:end]
# update indexes of streamed updates
start = end
end = start + flags.window_stride_samples
# set input audio data (by default input data at index 0)
inputs[0] = stream_update
# run inference
outputs = model_stream.predict(inputs)
# get output states and set it back to input states
# which will be fed in the next inference cycle
for s in range(1, len(model_stream.inputs)):
inputs[s] = outputs[s]
stream_output_arg = np.argmax(outputs[0])
# output_predictions.append(outputs[0][0][stream_output_arg])
# output_ids.append(stream_output_arg)
sream_external_state_predicted_labels.append(stream_output_arg)
# validate that accuracy in streaming mode with external states is the same with accuracy in non streaming mode
stream_accuracy_external_state = np.sum(sream_external_state_predicted_labels==train_label)/len(train_label)
print("accuracy " + str(stream_accuracy_external_state * 100))
Explanation: TF Run streaming inference with external state
End of explanation
# path = os.path.join(train_dir, 'tflite_non_stream')
# tflite_model_name = 'non_stream.tflite'
tflite_non_streaming_model = utils.model_to_tflite(sess, model_non_stream, flags, Modes.NON_STREAM_INFERENCE)
# prepare TFLite interpreter
# with tf.io.gfile.Open(os.path.join(path, tflite_model_name), 'rb') as f:
# model_content = f.read()
interpreter = tf.lite.Interpreter(model_content=tflite_non_streaming_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
inputs = []
for s in range(len(input_details)):
inputs.append(np.zeros(input_details[s]['shape'], dtype=np.float32))
padded_input = np.zeros((1, 16000), dtype=np.float32)
padded_input[:, :input_data.shape[1]] = input_data
non_sream_tflite_predicted_labels = []
for input_data in train_data: # loop over all audio sequences
# add batch dim - it is always 1 for streaming inference mode
input_data = np.expand_dims(input_data, axis=0)
# set input audio data (by default input data at index 0)
interpreter.set_tensor(input_details[0]['index'], input_data.astype(np.float32))
# run inference
interpreter.invoke()
# get output: classification
out_tflite = interpreter.get_tensor(output_details[0]['index'])
out_tflite_argmax = np.argmax(out_tflite)
non_sream_tflite_predicted_labels.append(out_tflite_argmax)
# validate that accuracy in TFLite is the same with TF
non_stream_accuracy_tflite = np.sum(non_sream_tflite_predicted_labels==train_label)/len(train_label)
print("accuracy " + str(non_stream_accuracy_tflite * 100))
Explanation: Run inference with TFlite
Run non streaming inference with TFLite
End of explanation
# path = os.path.join(train_dir, 'tflite_stream_state_external')
# tflite_model_name = 'stream_state_external.tflite'
tflite_streaming_model = utils.model_to_tflite(sess, model_non_stream, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)
# with tf.io.gfile.Open(os.path.join(path, tflite_model_name), 'rb') as f:
# model_content = f.read()
interpreter = tf.lite.Interpreter(model_content=tflite_streaming_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
inputs = []
for s in range(len(input_details)):
inputs.append(np.zeros(input_details[s]['shape'], dtype=np.float32))
input_details[0]['shape']
sream_external_state_tflite_predicted_labels = []
for input_data in train_data: # loop over all audio sequences
# add batch dim - it is always 1 for streaming inference mode
input_data = np.expand_dims(input_data, axis=0)
reset_state = True
# before processing new test sequence we can reset model state
# if we reset model state then it is not real streaming mode
if reset_state:
for s in range(len(input_details)):
# print(input_details[s]['shape'])
inputs[s] = np.zeros(input_details[s]['shape'], dtype=np.float32)
start = 0
end = flags.window_stride_samples
while end <= input_data.shape[1]:
stream_update = input_data[:, start:end]
stream_update = stream_update.astype(np.float32)
# update indexes of streamed updates
start = end
end = start + flags.window_stride_samples
# set input audio data (by default input data at index 0)
interpreter.set_tensor(input_details[0]['index'], stream_update)
# set input states (index 1...)
for s in range(1, len(input_details)):
interpreter.set_tensor(input_details[s]['index'], inputs[s])
# run inference
interpreter.invoke()
# get output: classification
out_tflite = interpreter.get_tensor(output_details[0]['index'])
#print(start / 16000.0, np.argmax(out_tflite), np.max(out_tflite))
# get output states and set it back to input states
# which will be fed in the next inference cycle
for s in range(1, len(input_details)):
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
inputs[s] = interpreter.get_tensor(output_details[s]['index'])
out_tflite_argmax = np.argmax(out_tflite)
sream_external_state_tflite_predicted_labels.append(out_tflite_argmax)
# validate that accuracy in streaming mode with external states is the same with accuracy in non streaming mode
stream_accuracy_tflite = np.sum(sream_external_state_tflite_predicted_labels==train_label)/len(train_label)
print("accuracy " + str(stream_accuracy_tflite * 100))
Explanation: Run streaming inference with TFLite
End of explanation |
4,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
M-Estimators for Robust Linear Modeling
Step1: An M-estimator minimizes the function
$$Q(e_i, \rho) = \sum_i~\rho \left (\frac{e_i}{s}\right )$$
where $\rho$ is a symmetric function of the residuals
The effect of $\rho$ is to reduce the influence of outliers
$s$ is an estimate of scale.
The robust estimates $\hat{\beta}$ are computed by the iteratively re-weighted least squares algorithm
We have several choices available for the weighting functions to be used
Step2: Andrew's Wave
Step3: Hampel's 17A
Step4: Huber's t
Step5: Least Squares
Step6: Ramsay's Ea
Step7: Trimmed Mean
Step8: Tukey's Biweight
Step9: Scale Estimators
Robust estimates of the location
Step10: The mean is not a robust estimator of location
Step11: The median, on the other hand, is a robust estimator with a breakdown point of 50%
Step12: Analagously for the scale
The standard deviation is not robust
Step13: Median Absolute Deviation
$$ median_i |X_i - median_j(X_j)|) $$
Standardized Median Absolute Deviation is a consistent estimator for $\hat{\sigma}$
$$\hat{\sigma}=K \cdot MAD$$
where $K$ depends on the distribution. For the normal distribution for example,
$$K = \Phi^{-1}(.75)$$
Step14: The default for Robust Linear Models is MAD
another popular choice is Huber's proposal 2
Step15: Duncan's Occupational Prestige data - M-estimation for outliers
Step16: Hertzprung Russell data for Star Cluster CYG 0B1 - Leverage Points
Data is on the luminosity and temperature of 47 stars in the direction of Cygnus.
Step17: Why? Because M-estimators are not robust to leverage points.
Step18: Let's delete that line
Step19: MM estimators are good for this type of problem, unfortunately, we don't yet have these yet.
It's being worked on, but it gives a good excuse to look at the R cell magics in the notebook.
Step20: Exercise
Step21: Squared error loss | Python Code:
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import lmap
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
Explanation: M-Estimators for Robust Linear Modeling
End of explanation
norms = sm.robust.norms
def plot_weights(support, weights_func, xlabels, xticks):
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(support, weights_func(support))
ax.set_xticks(xticks)
ax.set_xticklabels(xlabels, fontsize=16)
ax.set_ylim(-.1, 1.1)
return ax
Explanation: An M-estimator minimizes the function
$$Q(e_i, \rho) = \sum_i~\rho \left (\frac{e_i}{s}\right )$$
where $\rho$ is a symmetric function of the residuals
The effect of $\rho$ is to reduce the influence of outliers
$s$ is an estimate of scale.
The robust estimates $\hat{\beta}$ are computed by the iteratively re-weighted least squares algorithm
We have several choices available for the weighting functions to be used
End of explanation
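To make the iteratively re-weighted least squares idea concrete, here is a minimal sketch of the loop using the norms above (illustrative only; RLM later in this notebook does this properly, including scale updates and convergence checks):
def irls_sketch(X, y, norm=norms.HuberT(), n_iter=20):
    # Start from the ordinary least squares solution.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        resid = y - X @ beta
        scale = sm.robust.mad(resid)              # robust scale estimate (assumed > 0)
        w = norm.weights(resid / scale)           # weights from the chosen rho function
        WX = X * w[:, None]
        beta = np.linalg.lstsq(WX.T @ X, WX.T @ y, rcond=None)[0]
    return beta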
help(norms.AndrewWave.weights)
a = 1.339
support = np.linspace(-np.pi*a, np.pi*a, 100)
andrew = norms.AndrewWave(a=a)
plot_weights(support, andrew.weights, ['$-\pi*a$', '0', '$\pi*a$'], [-np.pi*a, 0, np.pi*a]);
Explanation: Andrew's Wave
End of explanation
help(norms.Hampel.weights)
c = 8
support = np.linspace(-3*c, 3*c, 1000)
hampel = norms.Hampel(a=2., b=4., c=c)
plot_weights(support, hampel.weights, ['3*c', '0', '3*c'], [-3*c, 0, 3*c]);
Explanation: Hampel's 17A
End of explanation
help(norms.HuberT.weights)
t = 1.345
support = np.linspace(-3*t, 3*t, 1000)
huber = norms.HuberT(t=t)
plot_weights(support, huber.weights, ['-3*t', '0', '3*t'], [-3*t, 0, 3*t]);
Explanation: Huber's t
End of explanation
help(norms.LeastSquares.weights)
support = np.linspace(-3, 3, 1000)
lst_sq = norms.LeastSquares()
plot_weights(support, lst_sq.weights, ['-3', '0', '3'], [-3, 0, 3]);
Explanation: Least Squares
End of explanation
help(norms.RamsayE.weights)
a = .3
support = np.linspace(-3*a, 3*a, 1000)
ramsay = norms.RamsayE(a=a)
plot_weights(support, ramsay.weights, ['-3*a', '0', '3*a'], [-3*a, 0, 3*a]);
Explanation: Ramsay's Ea
End of explanation
help(norms.TrimmedMean.weights)
c = 2
support = np.linspace(-3*c, 3*c, 1000)
trimmed = norms.TrimmedMean(c=c)
plot_weights(support, trimmed.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
Explanation: Trimmed Mean
End of explanation
help(norms.TukeyBiweight.weights)
c = 4.685
support = np.linspace(-3*c, 3*c, 1000)
tukey = norms.TukeyBiweight(c=c)
plot_weights(support, tukey.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
Explanation: Tukey's Biweight
End of explanation
x = np.array([1, 2, 3, 4, 500])
Explanation: Scale Estimators
Robust estimates of the location
End of explanation
x.mean()
Explanation: The mean is not a robust estimator of location
End of explanation
np.median(x)
Explanation: The median, on the other hand, is a robust estimator with a breakdown point of 50%
End of explanation
x.std()
Explanation: Analogously for the scale
The standard deviation is not robust
End of explanation
stats.norm.ppf(.75)
print(x)
sm.robust.scale.mad(x)
np.array([1,2,3,4,5.]).std()
Explanation: Median Absolute Deviation
$$ MAD = median_i\, |X_i - median_j(X_j)| $$
Standardized Median Absolute Deviation is a consistent estimator for $\hat{\sigma}$
$$\hat{\sigma}=K \cdot MAD$$
where $K$ depends on the distribution. For the normal distribution, for example,
$$K = 1/\Phi^{-1}(.75) \approx 1.4826$$
End of explanation
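The same rescaling can be written out by hand as a check against sm.robust.scale.mad above (illustrative):
raw_mad = np.median(np.abs(x - np.median(x)))
sigma_hat = raw_mad / stats.norm.ppf(0.75)   # K * MAD with K = 1 / Phi^{-1}(0.75)
print(raw_mad, sigma_hat)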
np.random.seed(12345)
fat_tails = stats.t(6).rvs(40)
kde = sm.nonparametric.KDEUnivariate(fat_tails)
kde.fit()
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.density);
print(fat_tails.mean(), fat_tails.std())
print(stats.norm.fit(fat_tails))
print(stats.t.fit(fat_tails, f0=6))
huber = sm.robust.scale.Huber()
loc, scale = huber(fat_tails)
print(loc, scale)
sm.robust.mad(fat_tails)
sm.robust.mad(fat_tails, c=stats.t(6).ppf(.75))
sm.robust.scale.mad(fat_tails)
Explanation: The default for Robust Linear Models is MAD
another popular choice is Huber's proposal 2
End of explanation
from statsmodels.graphics.api import abline_plot
from statsmodels.formula.api import ols, rlm
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
print(prestige.head(10))
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(211, xlabel='Income', ylabel='Prestige')
ax1.scatter(prestige.income, prestige.prestige)
xy_outlier = prestige.loc['minister', ['income','prestige']]
ax1.annotate('Minister', xy_outlier, xy_outlier+1, fontsize=16)
ax2 = fig.add_subplot(212, xlabel='Education',
ylabel='Prestige')
ax2.scatter(prestige.education, prestige.prestige);
ols_model = ols('prestige ~ income + education', prestige).fit()
print(ols_model.summary())
infl = ols_model.get_influence()
student = infl.summary_frame()['student_resid']
print(student)
print(student.loc[np.abs(student) > 2])
print(infl.summary_frame().loc['minister'])
sidak = ols_model.outlier_test('sidak')
sidak.sort_values('unadj_p', inplace=True)
print(sidak)
fdr = ols_model.outlier_test('fdr_bh')
fdr.sort_values('unadj_p', inplace=True)
print(fdr)
rlm_model = rlm('prestige ~ income + education', prestige).fit()
print(rlm_model.summary())
print(rlm_model.weights)
Explanation: Duncan's Occupational Prestige data - M-estimation for outliers
End of explanation
dta = sm.datasets.get_rdataset("starsCYG", "robustbase", cache=True).data
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, xlabel='log(Temp)', ylabel='log(Light)', title='Hertzsprung-Russell Diagram of Star Cluster CYG OB1')
ax.scatter(*dta.values.T)
# highlight outliers
e = Ellipse((3.5, 6), .2, 1, alpha=.25, color='r')
ax.add_patch(e);
ax.annotate('Red giants', xy=(3.6, 6), xytext=(3.8, 6),
arrowprops=dict(facecolor='black', shrink=0.05, width=2),
horizontalalignment='left', verticalalignment='bottom',
clip_on=True, # clip to the axes bounding box
fontsize=16,
)
# annotate these with their index
for i,row in dta.loc[dta['log.Te'] < 3.8].iterrows():
ax.annotate(i, row, row + .01, fontsize=14)
xlim, ylim = ax.get_xlim(), ax.get_ylim()
from IPython.display import Image
Image(filename='star_diagram.png')
y = dta['log.light']
X = sm.add_constant(dta['log.Te'], prepend=True)
ols_model = sm.OLS(y, X).fit()
abline_plot(model_results=ols_model, ax=ax)
rlm_mod = sm.RLM(y, X, sm.robust.norms.TrimmedMean(.5)).fit()
abline_plot(model_results=rlm_mod, ax=ax, color='red')
Explanation: Hertzprung Russell data for Star Cluster CYG 0B1 - Leverage Points
Data is on the luminosity and temperature of 47 stars in the direction of Cygnus.
End of explanation
infl = ols_model.get_influence()
h_bar = 2*(ols_model.df_model + 1 )/ols_model.nobs
hat_diag = infl.summary_frame()['hat_diag']
hat_diag.loc[hat_diag > h_bar]
sidak2 = ols_model.outlier_test('sidak')
sidak2.sort_values('unadj_p', inplace=True)
print(sidak2)
fdr2 = ols_model.outlier_test('fdr_bh')
fdr2.sort_values('unadj_p', inplace=True)
print(fdr2)
Explanation: Why? Because M-estimators are not robust to leverage points.
End of explanation
l = ax.lines[-1]
l.remove()
del l
weights = np.ones(len(X))
weights[X[X['log.Te'] < 3.8].index.values - 1] = 0
wls_model = sm.WLS(y, X, weights=weights).fit()
abline_plot(model_results=wls_model, ax=ax, color='green')
Explanation: Let's delete that line
End of explanation
yy = y.values[:,None]
xx = X['log.Te'].values[:,None]
%load_ext rpy2.ipython
%R library(robustbase)
%Rpush yy xx
%R mod <- lmrob(yy ~ xx);
%R params <- mod$coefficients;
%Rpull params
%R print(mod)
print(params)
abline_plot(intercept=params[0], slope=params[1], ax=ax, color='red')
Explanation: MM estimators are good for this type of problem; unfortunately, we don't have these yet.
It's being worked on, but it gives a good excuse to look at the R cell magics in the notebook.
End of explanation
np.random.seed(12345)
nobs = 200
beta_true = np.array([3, 1, 2.5, 3, -4])
X = np.random.uniform(-20,20, size=(nobs, len(beta_true)-1))
# stack a constant in front
X = sm.add_constant(X, prepend=True) # np.c_[np.ones(nobs), X]
mc_iter = 500
contaminate = .25 # percentage of response variables to contaminate
all_betas = []
for i in range(mc_iter):
y = np.dot(X, beta_true) + np.random.normal(size=200)
random_idx = np.random.randint(0, nobs, size=int(contaminate * nobs))
y[random_idx] = np.random.uniform(-750, 750)
beta_hat = sm.RLM(y, X).fit().params
all_betas.append(beta_hat)
all_betas = np.asarray(all_betas)
se_loss = lambda x : np.linalg.norm(x, ord=2)**2
se_beta = lmap(se_loss, all_betas - beta_true)
Explanation: Exercise: Breakdown points of M-estimator
End of explanation
np.array(se_beta).mean()
all_betas.mean(0)
beta_true
se_loss(all_betas.mean(0) - beta_true)
Explanation: Squared error loss
End of explanation |
4,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intrusive Galerkin
When talking about polynomial chaos expansions, there are typically two
categories methods that are used
Step1: Here the parameters are positional defined as $\alpha$ and $\beta$
respectively.
First step of intrusive Galerkin's method, we will first assume that the
solution $u(t)$ can be expressed as the sum
Step2: Note again, that the variables are here defined positional. $\alpha$ and
$\beta$ corresponds to positions 0 and 1, which again corresponds to the
polynomial variables q0 and q1 respectively.
The second step of the method is to fill the assumed solution into the
equations we are trying to solve, giving the following two equations
Step3: As above, these two variables are defined positionally to correspond to both
the distribution and the polynomial.
From the simplified equation above, it can be observed that the fraction of
expected values depends on neither $c$ nor $t$, and can therefore be
pre-computed.
For the numerator $\mathbb E[\beta\Phi_n\Phi_k]$, since there are both $\Phi_k$
and $\Phi_n$ terms, the full expression can be defined as a two-dimensional
tensor
Step4: This allows us to calculate the full expression
Step5: For the denominator $\mbox E(\Phi_k\Phi_k)$, it is worth noting that these are
the squares of the norms $\|\Phi_k\|^2$. We could calculate them the same way,
but choose not to. Calculating the norms directly is often numerically unstable, and
it is better to retrieve them from the three-terms recursion process. In
chaospy they can be extracted during the creation of the orthogonal
polynomials
Step6: Having all terms in place, we can create a function for the right-hand-side
of the equation
Step7: Initial conditions
The equation associated with the initial condition can be reformulated as
follows
Step8: Equation solving
With the right-hand-side for both the main set of equations and the initial
conditions, it should be straightforward to solve the equations numerically.
For example using scipy.integrate.odeint
Step9: These coefficients can then be used to construct the approximation for $u$
using the assumption about the solution's form
Step10: Finally, this can be used to calculate statistical properties
Step11: Using the true mean and variance as reference, we can also calculate the mean
absolute error | Python Code:
from problem_formulation import joint
joint
Explanation: Intrusive Galerkin
When talking about polynomial chaos expansions, there are typically two
categories of methods that are used: non-intrusive and intrusive methods. The
distinction between the two categories lies in how one tries to solve the
problem at hand. In the intrusive methods, the core problem formulation,
often in the form of some governing equations to solve is reformulated to
target a polynomial chaos expansion. In the case of the non-intrusive methods
a solver for deterministic case is used in combination of some form of
collocation method to fit to the expansion.
The chaospy toolbox caters for the most part to the non-intrusive
methods. However it is still possible to use the toolbox to solve intrusive
formulation. It just requires that the user to do more of the mathematics
them selves.
Problem revisited
This section uses the same example as the problem
formulation. To reiterate the problem
formulation:
$$
\frac{d}{dt} u(t) = -\beta\ u(t) \qquad u(0) = \alpha \qquad t \in [0, 10]
$$
Here $\alpha$ is initial condition and $\beta$ is the exponential growth
rate. They are both unknown hyper parameters which can be described through a
joint probability distribution:
End of explanation
import chaospy
polynomial_expansion = chaospy.generate_expansion(3, joint)
polynomial_expansion[:4].round(10)
Explanation: Here the parameters are positionally defined as $\alpha$ and $\beta$
respectively.
In the first step of the intrusive Galerkin method, we assume that the
solution $u(t)$ can be expressed as the sum:
$$
u(t; \alpha, \beta) = \sum_{n=0}^N c_n(t)\ \Phi_n(\alpha, \beta)
$$
Here $\Phi_n$ are orthogonal polynomials and $c_n$ Fourier coefficients. We
do not know what the latter is yet, but the former we can construct from
distribution alone.
End of explanation
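As a quick sanity check (a sketch, not part of the original tutorial), the orthogonality of the expansion can be verified numerically against the joint distribution:
# Sketch: cross moments should vanish, diagonal moments equal the squared norms.
cross = chaospy.E(polynomial_expansion[1]*polynomial_expansion[2], joint)
diag = chaospy.E(polynomial_expansion[2]*polynomial_expansion[2], joint)
print(round(float(cross), 10), round(float(diag), 10))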
alpha, beta = chaospy.variable(2)
Explanation: Note again that the variables here are defined positionally. $\alpha$ and
$\beta$ correspond to positions 0 and 1, which again correspond to the
polynomial variables q0 and q1 respectively.
The second step of the method is to fill the assumed solution into the
equations we are trying to solve, giving the following two equations:
$$
\frac{d}{dt} \sum_{n=0}^N c_n\ \Phi_n = -\beta \sum_{n=0}^N c_n\ \Phi_n \qquad
\sum_{n=0}^N c_n(0)\ \Phi_n = \alpha
$$
The third step is to take the inner product of each side of both equations
against the polynomial $\Phi_k$ for $k=0,\cdots,N$. For the first equation,
this will have the following form:
$$
\begin{align}
\left\langle \frac{d}{dt} \sum_{n=0}^N c_n \Phi_n, \Phi_k \right\rangle &=
\left\langle -\beta \sum_{n=0}^N c_n\Phi_n, \Phi_k \right\rangle \\
\left\langle \sum_{n=0}^N c_n(0)\ \Phi_n, \Phi_k \right\rangle &=
\left\langle \alpha, \Phi_k \right\rangle
\end{align}
$$
Let us define the first equation as the main equation, and the latter as the
initial condition equation.
Main equation
We start by simplifying the equation. A lot of collapsing of the sums is
possible because of the orthogonality property of the polynomials $\langle
\Phi_i, \Phi_j\rangle$ for $i \neq j$.
$$
\begin{align}
\left\langle \frac{d}{dt} \sum_{n=0}^N c_n \Phi_n, \Phi_k \right\rangle &=
\left\langle -\beta \sum_{n=0}^N c_n\Phi_n, \Phi_k \right\rangle \\
\sum_{n=0}^N \frac{d}{dt} c_n \left\langle \Phi_n, \Phi_k \right\rangle &=
-\sum_{n=0}^N c_n \left\langle \beta\ \Phi_n, \Phi_k \right\rangle \\
\frac{d}{dt} c_k \left\langle \Phi_k, \Phi_k \right\rangle &=
-\sum_{n=0}^N c_n \left\langle \beta\ \Phi_n, \Phi_k \right\rangle \\
\frac{d}{dt} c_k &=
-\sum_{n=0}^N c_n
\frac{
\left\langle \beta\ \Phi_n, \Phi_k \right\rangle
}{
\left\langle \Phi_k, \Phi_k \right\rangle
}
\end{align}
$$
Or equivalently, using probability notation:
$$
\frac{d}{dt} c_k =
-\sum_{n=0}^N c_n
\frac{
\mbox E\left( \beta\ \Phi_n \Phi_k \right)
}{
\mbox E\left( \Phi_k \Phi_k \right)
}
$$
This is a set of linear equations. To solve them in practice, we need to
formulate the right-hand-side as a function. To start we create variables to
deal with the fact that $\alpha$ and $\beta$ are part of the equation.
End of explanation
phi_phi = chaospy.outer(
polynomial_expansion, polynomial_expansion)
[polynomial_expansion.shape, phi_phi.shape]
Explanation: As above, these two variables are defined positionally to correspond to both
the distribution and the polynomial.
From the simplified equation above, it can be observed that the fraction of
expected values depends on neither $c$ nor $t$, and can therefore be
pre-computed.
For the numerator $\mathbb E[\beta\Phi_n\Phi_k]$, since there are both $\Phi_k$
and $\Phi_n$ terms, the full expression can be defined as a two-dimensional
tensor:
End of explanation
e_beta_phi_phi = chaospy.E(beta*phi_phi, joint)
e_beta_phi_phi[:3, :3].round(6)
Explanation: This allows us to calculate the full expression:
End of explanation
_, norms = chaospy.generate_expansion(3, joint, retall=True)
norms[:4].round(6)
Explanation: For the denominator $\mbox E(\Phi_k\Phi_k)$, it is worth noting that these are
the squares of the norms $\|\Phi_k\|^2$. We could calculate them the same way,
but choose not to. Calculating the norms directly is often numerically unstable, and
it is better to retrieve them from the three-terms recursion process. In
chaospy they can be extracted during the creation of the orthogonal
polynomials:
End of explanation
import numpy
def right_hand_side(c, t):
return -numpy.sum(c*e_beta_phi_phi, -1)/norms
Explanation: Having all terms in place, we can create a function for the right-hand-side
of the equation:
End of explanation
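Before integrating, a small hedged sanity check on the pre-computed arrays (not in the original tutorial) can save debugging time:
# Sketch: the tensor should be (N, N) and the norms (N,), with N the expansion length.
print(e_beta_phi_phi.shape, norms.shape, len(polynomial_expansion))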
e_alpha_phi = chaospy.E(alpha*polynomial_expansion, joint)
initial_condition = e_alpha_phi/norms
Explanation: Initial conditions
The equation associated with the initial condition can be reformulated as
follows:
$$
\begin{align}
\left\langle \sum_{n=0}^N c_n(0)\ \Phi_n, \Phi_k \right\rangle &=
\left\langle \alpha, \Phi_k \right\rangle \\
\sum_{n=0}^N c_n(0) \left\langle \Phi_n, \Phi_k \right\rangle &=
\left\langle \alpha, \Phi_k \right\rangle \\
c_k(0) \left\langle \Phi_k, \Phi_k \right\rangle &=
\left\langle \alpha, \Phi_k \right\rangle \\
c_k(0) &=
\frac{
\left\langle \alpha, \Phi_k \right\rangle
}{
\left\langle \Phi_k, \Phi_k \right\rangle
}
\end{align}
$$
Or equivalently:
$$
c_k(0) =
\frac{
\mbox E\left( \alpha\ \Phi_k \right)
}{
\mbox E\left( \Phi_k \Phi_k \right)
}
$$
Using the same logic as for the first equation we get:
End of explanation
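A small hedged check (not in the original tutorial): plugging the initial coefficients back into the expansion should recover the initial condition $u(0)=\alpha$, i.e. the polynomial q0:
# Sketch: the expansion evaluated with the initial coefficients should reduce to q0 (= alpha).
u0 = chaospy.sum(polynomial_expansion*initial_condition, -1)
print(u0.round(8))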
from scipy.integrate import odeint
coordinates = numpy.linspace(0, 10, 1000)
coefficients = odeint(func=right_hand_side,
y0=initial_condition, t=coordinates)
coefficients.shape
Explanation: Equation solving
With the right-hand-side for both the main set of equations and the initial
conditions, it should be straightforward to solve the equations numerically.
For example using scipy.integrate.odeint:
End of explanation
u_approx = chaospy.sum(polynomial_expansion*coefficients, -1)
u_approx[:4].round(2)
Explanation: These coefficients can then be used to construct the approximation for $u$
using the assumption about the solution's form:
End of explanation
mean = chaospy.E(u_approx, joint)
variance = chaospy.Var(u_approx, joint)
mean[:5].round(6), variance[:5].round(6)
from matplotlib import pyplot
pyplot.rc("figure", figsize=[6, 4])
pyplot.xlabel("coordinates")
pyplot.ylabel("model approximation")
pyplot.axis([0, 10, 0, 2])
sigma = numpy.sqrt(variance)
pyplot.fill_between(coordinates, mean-sigma, mean+sigma, alpha=0.3)
pyplot.plot(coordinates, mean)
pyplot.show()
Explanation: Finally, this can be used to calculate statistical properties:
End of explanation
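As a hedged aside (not part of the original tutorial), the same expansion also supports variance-based sensitivity analysis, for example first-order Sobol indices for $\alpha$ and $\beta$ over time:
# Sketch: first-order Sobol indices; one row per input variable, one column per time point.
sobol_first = chaospy.Sens_m(u_approx, joint)
print(sobol_first.shape)
print(sobol_first[:, :3].round(4))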
from problem_formulation import error_in_mean, error_in_variance
error_in_mean(mean).round(16), error_in_variance(variance).round(12)
Explanation: Using the true mean and variance as reference, we can also calculate the mean
absolute error:
End of explanation |
4,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of online analysis using OnACID
Complete pipeline for online processing using CaImAn Online (OnACID).
The demo demonstrates the analysis of a sequence of files using the CaImAn online
algorithm. The steps include i) motion correction, ii) tracking current
components, iii) detecting new components, iv) updating of spatial footprints.
The script demonstrates how to construct and use the params and online_cnmf
objects required for the analysis, and presents the various parameters that
can be passed as options. A plot of the processing time for the various steps
of the algorithm is also included.
@author
Step1: First download the data
The function download_demo will look for the datasets Tolias_mesoscope_*.hdf5 in your caiman_data folder inside the subfolder specified by the variable fld_name and will download the files if they do not exist.
Step2: Set up some parameters
Here we set up some parameters for running OnACID. We use the same params object as in batch processing with CNMF.
Step3: Now run the CaImAn online algorithm (OnACID).
The first init_batch frames are used for initialization purposes. The initialization method chosen here, bare, will only search for a small number of neurons and is mostly used to initialize the background components. Initialization with the full CNMF can also be used by choosing cnmf.
We first create an OnACID object located in the module online_cnmf and we pass the parameters similarly to the case of batch processing. We then run the algorithm using the fit_online method.
Step4: Optionally save results and do some plotting
Step5: View components
Now inspect the components extracted by OnACID. Note that if a single pass was used then several components would be non-zero only for part of the time interval, indicating that they were detected online by OnACID.
Note that if you get a data rate error you can start Jupyter notebooks using
Step6: Plot timing
The plot below shows the time spent on each part of the algorithm (motion correction, tracking of current components, detect new components, update shapes) for each frame. Note that if you displayed a movie while processing the data (show_movie=True) the time required to generate this movie will be included here. | Python Code:
try:
if __IPYTHON__:
# this is used for debugging purposes only. allows to reload classes when changed
get_ipython().magic('load_ext autoreload')
get_ipython().magic('autoreload 2')
except NameError:
pass
import logging
import numpy as np
logging.basicConfig(format=
"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s",
# filename="/tmp/caiman.log",
level=logging.INFO)
import caiman as cm
from caiman.source_extraction import cnmf
from caiman.utils.utils import download_demo
import matplotlib.pyplot as plt
import bokeh.plotting as bpl
bpl.output_notebook()
Explanation: Example of online analysis using OnACID
Complete pipeline for online processing using CaImAn Online (OnACID).
The demo demonstrates the analysis of a sequence of files using the CaImAn online
algorithm. The steps include i) motion correction, ii) tracking current
components, iii) detecting new components, iv) updating of spatial footprints.
The script demonstrates how to construct and use the params and online_cnmf
objects required for the analysis, and presents the various parameters that
can be passed as options. A plot of the processing time for the various steps
of the algorithm is also included.
@author: Eftychios Pnevmatikakis @epnev
Special thanks to Andreas Tolias and his lab at Baylor College of Medicine
for sharing the data used in this demo.
End of explanation
fld_name = 'Mesoscope' # folder inside ./example_movies where files will be saved
fnames = []
fnames.append(download_demo('Tolias_mesoscope_1.hdf5',fld_name))
fnames.append(download_demo('Tolias_mesoscope_2.hdf5',fld_name))
fnames.append(download_demo('Tolias_mesoscope_3.hdf5',fld_name))
print(fnames) # your list of files should look something like this
Explanation: First download the data
The function download_demo will look for the datasets Tolias_mesoscope_*.hdf5 in your caiman_data folder inside the subfolder specified by the variable fld_name and will download the files if they do not exist.
End of explanation
fr = 15 # frame rate (Hz)
decay_time = 0.5 # approximate length of transient event in seconds
gSig = (4,4) # expected half size of neurons
p = 1 # order of AR indicator dynamics
min_SNR = 1 # minimum SNR for accepting new components
rval_thr = 0.90 # correlation threshold for new component inclusion
ds_factor = 1 # spatial downsampling factor (increases speed but may lose some fine structure)
gnb = 2 # number of background components
gSig = tuple(np.ceil(np.array(gSig)/ds_factor).astype('int')) # recompute gSig if downsampling is involved
mot_corr = True # flag for online motion correction
pw_rigid = False # flag for pw-rigid motion correction (slower but potentially more accurate)
max_shifts_online = np.ceil(10./ds_factor).astype('int') # maximum allowed shift during motion correction
sniper_mode = True # flag using a CNN to detect new neurons (o/w space correlation is used)
init_batch = 200 # number of frames for initialization (presumably from the first file)
expected_comps = 500 # maximum number of expected components used for memory pre-allocation (exaggerate here)
dist_shape_update = True # flag for updating shapes in a distributed way
min_num_trial = 10 # number of candidate components per frame
K = 2 # initial number of components
epochs = 2 # number of passes over the data
show_movie = False # show the movie with the results as the data gets processed
params_dict = {'fnames': fnames,
'fr': fr,
'decay_time': decay_time,
'gSig': gSig,
'p': p,
'min_SNR': min_SNR,
'rval_thr': rval_thr,
'ds_factor': ds_factor,
'nb': gnb,
'motion_correct': mot_corr,
'init_batch': init_batch,
'init_method': 'bare',
'normalize': True,
'expected_comps': expected_comps,
'sniper_mode': sniper_mode,
'dist_shape_update' : dist_shape_update,
'min_num_trial': min_num_trial,
'K': K,
'epochs': epochs,
'max_shifts_online': max_shifts_online,
'pw_rigid': pw_rigid,
'show_movie': show_movie}
opts = cnmf.params.CNMFParams(params_dict=params_dict)
Explanation: Set up some parameters
Here we set up some parameters for running OnACID. We use the same params object as in batch processing with CNMF.
End of explanation
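If a setting needs tweaking later, the params object can be updated in place instead of rebuilding params_dict; a hedged sketch (the values here are arbitrary examples):
# Sketch: update individual parameters on the existing CNMFParams object.
opts.change_params(params_dict={'min_SNR': 1.5, 'rval_thr': 0.85})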
cnm = cnmf.online_cnmf.OnACID(params=opts)
cnm.fit_online()
Explanation: Now run the CaImAn online algorithm (OnACID).
The first init_batch frames are used for initialization purposes. The initialization method chosen here, bare, will only search for a small number of neurons and is mostly used to initialize the background components. Initialization with the full CNMF can also be used by choosing cnmf.
We first create an OnACID object located in the module online_cnmf and we pass the parameters similarly to the case of batch processing. We then run the algorithm using the fit_online method.
End of explanation
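The fitted object can be written to disk so the estimates can be reloaded later without re-running the pipeline; a hedged sketch (the file name is an arbitrary example):
# Sketch: persist the results to HDF5 (path is illustrative).
cnm.save('onacid_results.hdf5')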
logging.info('Number of components: ' + str(cnm.estimates.A.shape[-1]))
Cn = cm.load(fnames[0], subindices=slice(0,500)).local_correlations(swap_dim=False)
cnm.estimates.plot_contours(img=Cn)
Explanation: Optionally save results and do some plotting
End of explanation
cnm.estimates.nb_view_components(img=Cn, denoised_color='red');
Explanation: View components
Now inspect the components extracted by OnACID. Note that if a single pass was used then several components would be non-zero only for part of the time interval, indicating that they were detected online by OnACID.
Note that if you get a data rate error you can start Jupyter notebooks using:
'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10'
End of explanation
T_motion = 1e3*np.array(cnm.t_motion)
T_detect = 1e3*np.array(cnm.t_detect)
T_shapes = 1e3*np.array(cnm.t_shapes)
T_online = 1e3*np.array(cnm.t_online) - T_motion - T_detect - T_shapes
plt.figure()
plt.stackplot(np.arange(len(T_motion)), T_motion, T_online, T_detect, T_shapes)
plt.legend(labels=['motion', 'process', 'detect', 'shapes'], loc=2)
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [ms]')
plt.ylim([0,140])
Explanation: Plot timing
The plot below shows the time spent on each part of the algorithm (motion correction, tracking of current components, detect new components, update shapes) for each frame. Note that if you displayed a movie while processing the data (show_movie=True) the time required to generate this movie will be included here.
End of explanation |
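To put a single throughput number on the run, a short hedged follow-up (not part of the original demo) summarizes the total per-frame cost:
# Sketch: per-frame processing time summary in milliseconds.
T_total = T_motion + T_online + T_detect + T_shapes
print('median %.1f ms/frame, 95th pct %.1f ms, ~%.1f frames/s' % (
    np.median(T_total), np.percentile(T_total, 95), 1e3/np.median(T_total)))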
4,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading Excel files
This notebook demonstrates how to read and manipulate data from
Excel using Pandas
Step1: Get IRS data on businesses
The IRS website has some aggregated statistics on business returns in Excel files. We will use the Selected Income and Tax Items for Selected Years.
The original data is from the file linked here
Step2: Look at the last 3 rows
The function pd.read_excel returns an object called a 'Data Frame', that is defined inside of the Pandas library. It has associated functions that access and manipulate the data inside. For example
Step3: Split out the 'Current dollars' and 'Constant 1990 dollars'
There are two sets of data — for the actual dollars for each variable, and also for constant dollars (accounting for inflation). We will split the raw dataset into two and then index the rows by the units (whether they're number of returns or amount paid/claimed).
The columns we care about are Variable, Units, and the current or constant dollars from each year. (You can view them all with raw.columns.)
We can subset the dataset with the columns we want using raw.ix[
Step4: Statistics
Pandas provides methods for statistical summaries. The describe method gives an overall summary. dropna(axis=1) deletes columns containing null values. If it were axis=0 it would be deleting rows.
Step5: Plot
The library that provides plot functions is called Matplotlib. To show the plots in this notebook you need to use the "magic method" %matplotlib inline. It should be used at the beginning of the notebook for clarity.
Step6: The per-entry data
The data are (I think) for every form filed, not really per capita, but since we're not interpreting it for anything important we can conflate the two.
Per capita income (Blue line) rose a lot with the tech bubble, then sunk with its crash, and then followed the housing bubble and crash. It also looks like small business income (Red dashed line) hasn't really come back since the crash, but that unemployment (Magenta dots) has gone down.
Step7: Also with log-y
We can see the total social security benefits payout (Green dot dash) increase as the baby boomers come of age, and we see the unemployment compensation (Magenta dots) spike after the 2008 crisis and then fall off. | Python Code:
# The library for handling tabular data is called 'pandas'
# Everyone shortens this to 'pd' for convenience.
import pandas as pd
Explanation: Reading Excel files
This notebook demonstrates how to read and manipulate data from
Excel using Pandas:
Input / Output
summaries
plotting
First, import the Pandas library:
End of explanation
raw = pd.read_excel('data/14intaba_cleaned.xls', skiprows=2)
Explanation: Get IRS data on businesses
The IRS website has some aggregated statistics on business returns in Excel files. We will use the Selected Income and Tax Items for Selected Years.
The original data is from the file linked here:
https://www.irs.gov/pub/irs-soi/14intaba.xls,
but I cleaned it up by hand to remove footnotes and reformat the column and row headers. You can get the cleaned file in this repository data/14intaba_cleaned.xls.
It looks like this:
<img src="img/screenshot-14intaba.png" width="100%"/>
Read the data!
We will use the read_excel function inside of the Pandas library (accessed using pd.read_excel) to get the data. We need to:
skip the first 2 rows
Split out the 'Current dollars' and 'Constant 1990 dollars' subsets
use the left two columns to split out the number of returns and their dollar amounts
When referring to files on your computer from Jupyer, the path you use is relative to the current Jupyter notebook. My directory looks like this:
.
|-- notebooks
|-- input_output.ipynb
|-- data
|- 14intaba_cleaned.xls
so, the relative path from the notebook input_output.ipynb to the dataset 14intaba_cleaned.xls is:
data/14intaba_cleaned.xls
End of explanation
# Look at the last 3 rows
raw.tail(3)
Explanation: Look at the last 3 rows
The function pd.read_excel returns an object called a 'Data Frame', that is defined inside of the Pandas library. It has associated functions that access and manipulate the data inside. For example:
End of explanation
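read_excel takes a number of optional arguments worth knowing about; a hedged sketch (the sheet and row limits below are illustrative, not requirements of this workbook):
# Sketch: commonly useful read_excel options (values are illustrative).
raw_subset = pd.read_excel('data/14intaba_cleaned.xls',
                           skiprows=2,    # same header handling as above
                           sheet_name=0,  # first sheet, or pass a sheet name string
                           nrows=50)      # read only the first 50 data rows
raw_subset.shape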
index_cols = ['Units', 'Variable']
current_dollars_cols = index_cols + [
c for c in raw.columns if c.startswith('Current')
]
constant_dollars_cols = index_cols + [
c for c in raw.columns if c.startswith('Constant')
]
current_dollars_data = raw[current_dollars_cols][9:]
current_dollars_data.set_index(keys=index_cols, inplace=True)
constant_dollars_data = raw[constant_dollars_cols][9:]
constant_dollars_data.set_index(keys=index_cols, inplace=True)
years = [int(c[-4:]) for c in constant_dollars_data.columns]
constant_dollars_data.columns = years
Explanation: Split out the 'Current dollars' and 'Constant 1990 dollars'
There are two sets of data — for the actual dollars for each variable, and also for constant dollars (accounting for inflation). We will split the raw dataset into two and then index the rows by the units (whether they're number of returns or amount paid/claimed).
The columns we care about are Variable, Units, and the current or constant dollars from each year. (You can view them all with raw.columns.)
We can subset the dataset to the columns we want with raw[<desired_cols>] (older tutorials use raw.ix[:, <desired_cols>], but .ix is deprecated).
There are a lot of commands in this section...we will do a better job explaining later. For now, ['braces', 'denote', 'a list'], you can add lists, and you can write a shorthand for loop inside of a list (that's called a "list comprehension").
End of explanation
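Since list comprehensions are only gestured at above, here is a tiny standalone illustration (not tied to the IRS data):
# Sketch: a list comprehension is shorthand for building a list in a loop.
squares_loop = []
for x in range(5):
    squares_loop.append(x**2)
squares_comp = [x**2 for x in range(5)]  # same result, one line
print(squares_loop == squares_comp)      # True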
per_entry = (
constant_dollars_data.transpose()['Amount (thousand USD)'] * 1000 /
constant_dollars_data.transpose()['Number of returns']
)
per_entry.dropna(axis=1).describe().round()
Explanation: Statistics
Pandas provides methods for statistical summaries. The describe method gives an overall summary. dropna(axis=1) deletes columns containing null values. If it were axis=0 it would be deleting rows.
End of explanation
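Other one-line summaries follow the same pattern; a quick hedged example on the same table:
# Sketch: a couple more aggregations on the per-entry table.
print(per_entry['Total income'].median())
print(per_entry['Total income'].pct_change().describe().round(3))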
# This should always be at the beginning of the notebook,
# like all magic statements and import statements.
# It's only here because I didn't want to describe it earlier.
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 12)
Explanation: Plot
The library that provides plot functions is called Matplotlib. To show the plots in this notebook you need to use the "magic method" %matplotlib inline. It should be used at the beginning of the notebook for clarity.
End of explanation
styles = ['b-', 'g-.', 'r--', 'c-', 'm:']
axes = per_entry[[
'Total income',
'Total social security benefits (not in income)',
'Business or profession net income less loss',
'Total payments',
'Unemployment compensation']].plot(style=styles)
plt.suptitle('Average USD per return (when stated)')
Explanation: The per-entry data
The data are (I think) for every form filed, not really per capita, but since we're not interpreting it for anything important we can conflate the two.
Per capita income (Blue line) rose a lot with the tech bubble, then sunk with its crash, and then followed the housing bubble and crash. It also looks like small business income (Red dashed line) hasn't really come back since the crash, but that unemployment (Magenta dots) has gone down.
End of explanation
styles = ['b-', 'r--', 'g-.', 'c-', 'm:']
axes = constant_dollars_data.transpose()['Amount (thousand USD)'][[
'Total income',
'Total payments',
'Total social security benefits (not in income)',
'Business or profession net income less loss',
'Unemployment compensation']].plot(logy=True, style=styles)
plt.legend(bbox_to_anchor=(1, 1),
bbox_transform=plt.gcf().transFigure)
plt.suptitle('Total USD (constant 1990 basis)')
Explanation: Also with log-y
We can see the total social security benefits payout (Green dot dash) increase as the baby boomers come of age, and we see the unemployment compensation (Magenta dots) spike after the 2008 crisis and then fall off.
End of explanation |
4,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-g3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-G3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
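For illustration only (the string below is a placeholder, not a real description of FGOALS-G3), a filled-in property cell would look like this:
# Illustrative sketch only -- the value is a placeholder showing the call pattern.
# DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# DOC.set_value("<one-paragraph overview of the coupled model goes here>")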
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
4,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
Step2: Convolution
Step4: Aside
Step5: Convolution
Step6: Max pooling
Step7: Max pooling
Step8: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step9: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
Step10: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug
Step11: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer. Note
Step12: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
Step13: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step14: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set
Step15: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following
Step16: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different imagesand different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization
Step17: Spatial batch normalization | Python Code:
# As usual, a bit of setup
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""Returns relative error."""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around 2e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
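For orientation, one possible naive implementation (a sketch, not the assignment's reference solution) simply loops over images, filters, and output positions, using the stride and zero-padding passed in conv_param:
```python
# Sketch of a naive conv forward pass: x (N, C, H, W), w (F, C, HH, WW), b (F,)
def conv_forward_naive_sketch(x, w, b, conv_param):
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                 # loop over images
        for f in range(F):             # loop over filters
            for i in range(H_out):     # loop over output rows
                for j in range(W_out): # loop over output columns
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out, (x, w, b, conv_param)  # cache mirrors the assignment's convention
```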
You can test your implementation by running the following:
End of explanation
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
"""Tiny helper to show images as uint8 and remove axis labels."""
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
np.random.seed(231)
# x = np.random.randn(2, 2, 5, 5)
# w = np.random.randn(2, 2, 3, 3)
# b = np.random.randn(2,)
# dout = np.random.randn(2, 2, 5, 5)
# conv_param = {'stride': 1, 'pad': 1}
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-8'
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
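As a mental model before writing your own version: every upstream gradient dout[n, f, i, j] adds to dw through the corresponding input window and to dx through the filter weights. A purely illustrative sketch (the real function unpacks x, w, b, conv_param from its cache argument instead):
```python
# Illustrative naive conv backward pass (not the official solution).
def conv_backward_naive_sketch(dout, x, w, b, conv_param):
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    stride, pad = conv_param['stride'], conv_param['pad']
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))   # bias gradient: sum over images and spatial positions
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:pad+H, pad:pad+W]  # strip the padding to recover dx
    return dx, dw, db
```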
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
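One illustrative way to write it loops only over the output grid and lets NumPy take the max over each window's height and width axes for all images and channels at once (a sketch, not the official solution):
```python
# Illustrative naive max-pool forward pass.
def max_pool_forward_naive_sketch(x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            out[:, :, i, j] = window.max(axis=(2, 3))  # max over each window's H and W
    return out, (x, pool_param)
```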
Check your implementation by running the following:
End of explanation
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
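The key idea is that the gradient flows only to the element that achieved the max in each pooling window; everything else receives zero. A rough, illustrative way to route it:
```python
# Illustrative naive max-pool backward pass; exact ties in a window (rare with
# real-valued inputs) would each receive the full gradient in this sketch.
def max_pool_backward_naive_sketch(dout, x, pool_param):
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    window = x[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw]
                    mask = (window == window.max())   # 1 where the max lives, 0 elsewhere
                    dx[n, c, i*stride:i*stride+ph, j*stride:j*stride+pw] += mask * dout[n, c, i, j]
    return dx
```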
Check your implementation with numeric gradient checking by running the following:
End of explanation
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
# TODO: speed up the naive backward pass; it is currently too slow to run to completion here.
# t0 = time()
# dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
# t1 = time()
# dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
# t2 = time()
# print('\nTesting conv_backward_fast:')
# print('Naive: %fs' % (t1 - t0))
# print('Fast: %fs' % (t2 - t1))
# print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
# print('dx difference: ', rel_error(dx_naive, dx_fast))
# print('dw difference: ', rel_error(dw_naive, dw_fast))
# print('db difference: ', rel_error(db_naive, db_fast))
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
```bash
python setup.py build_ext --inplace
```
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
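The pattern is plain function composition with a combined cache; for example, a conv + ReLU pair can be assembled roughly like the sketch below (the helpers in layer_utils.py may differ in detail):
```python
# Rough shape of a "sandwich" layer (illustrative sketch).
def conv_relu_forward_sketch(x, w, b, conv_param):
    a, conv_cache = conv_forward_fast(x, w, b, conv_param)
    out, relu_cache = relu_forward(a)
    return out, (conv_cache, relu_cache)

def conv_relu_backward_sketch(dout, cache):
    conv_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)        # undo the ReLU first
    dx, dw, db = conv_backward_fast(da, conv_cache)
    return dx, dw, db
```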
End of explanation
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
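For CIFAR-10 with C = 10 classes, that expected initial value is simply:
```python
# Expected initial loss for a 10-way softmax with random weights and no regularization.
print(np.log(10))  # ~2.3026; adding L2 regularization should push the reported loss slightly above this
```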
End of explanation
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.
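As a reminder, the numeric gradient is just a centered difference evaluated one parameter at a time; conceptually it looks like the sketch below (a simplified illustration, not the exact helper imported from cs231n.gradient_check):
```python
# Conceptual sketch of a centered-difference gradient for a scalar-valued f(x).
def numeric_gradient_sketch(f, x, h=1e-6):
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + h
        fxph = f(x)                      # f(x + h) at this coordinate
        x[idx] = old - h
        fxmh = f(x)                      # f(x - h)
        x[idx] = old                     # restore the original value
        grad[idx] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad
```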
End of explanation
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=30, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 6e-5, #1e-3
},
verbose=True, print_every=1)
solver.train()
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(axis=(0, 2, 3)))
print(' Stds: ', x.std(axis=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=(0, 2, 3)))
print(' stds: ', a_norm.std(axis=(0, 2, 3)))
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
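Because the statistics are shared across N, H, and W, one common trick is to fold those axes together and reuse the vanilla batch normalization from the previous assignment; a sketch of that idea (assuming a batchnorm_forward like the one implemented earlier):
```python
# Sketch: spatial batchnorm as a reshape around vanilla batchnorm (illustrative only).
def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)          # (N*H*W, C): one row per pixel
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)  # back to (N, C, H, W)
    return out, cache
```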
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
End of explanation
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation |
4,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning
Machine Learning is a set of algorithms to enable computers to make and improve predictions or behaviors based on some data. This ability is not explicitly programmed. It involves models with tuneable parameters, that can adapt their values based on available data. Thence, these models can generalize this knowledge and make predictions about new (and unseen) data.
Fitting lines through data. Any middle schooler could eyeball this data and draw a reasonable line through it...however, this task is not simple for a machine.
And when we move to more complicated datasets and multiple dimensions, your middle schooler will give up.
Step1: Scikit-Learn
Scikit-Learn (http
Step2: Steps in the K-means algorithm
Step3: The arguments to the algorithm
Step4: Exercise 2
Clustering the iris dataset based on sepal and petal lengths and widths.
Step5: Regression
Step6: Exercise 3
Linear Regression over a multi-dimensional data set. The data exhibits the advertising expenditure over TV, radio and the print media, versus the change in sales of the product. | Python Code:
from IPython.core.display import Image, display
display(Image(filename='Reg1.png'))
display(Image(filename='Reg2.png'))
from IPython.core.display import Image, display
display(Image(filename='Cluster0.png'))
display(Image(filename='Cluster1.png'))
Explanation: Machine Learning
Machine learning is a set of algorithms that enable computers to make and improve predictions or behaviors based on data, without this ability being explicitly programmed. It involves models with tunable parameters that can adapt their values based on available data. These models can then generalize that knowledge and make predictions about new (and unseen) data.
Fitting lines through data: any middle schooler could eyeball this data and draw a reasonable line through it; this task, however, is not so simple for a machine.
And when we move to more complicated datasets and multiple dimensions, your middle schooler will give up.
End of explanation
%matplotlib inline
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import numpy as np
X, y = make_blobs(n_samples=200,n_features=2,centers=6,cluster_std=0.8, shuffle=True,random_state=0)
plt.scatter(X[:,0],X[:,1])
Explanation: Scikit-Learn
Scikit-Learn (http://scikit-learn.org) is a Python package that uses NumPy and SciPy to enable the application of popular machine learning algorithms on small to medium datasets.
Referring back to the machine learning models above, every model in scikit-learn is a Python class with a uniform interface. Every instance of such a class is an object, and the general method of application is very similar:
a. Import class from module. (Here "abc" is an arbitrary algorithm.)
* from sklearn.ABC import abc
b. Instantiate estimator object
* abc_model=abc(arguments)
c. Fit model to training data
* abc_model.fit(data)
d. Use fitted model to predict
* abc_model.predict(new_data)
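For instance, the same four steps with a real estimator look like this (the array names are just placeholders for your own data):
```python
from sklearn.cluster import KMeans    # a. import the class from its module
km = KMeans(n_clusters=3)             # b. instantiate an estimator object
km.fit(X_train)                       # c. fit the model to (placeholder) training data
labels = km.predict(X_new)            # d. use the fitted model to predict on new data
```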
Now, we'll move from this (seemingly) abstract overview to actual application.
To motivate this discussion, let's start with a concrete problem: that of the infinite scroll.
The goal of Clustering is to find an arrangement in the data such that items in the same group (or cluster) are more similar to each other than those from different clusters.
The prototype-based K-Means algorithm is quite popular. In prototype-based clustering, each group is represented (exemplified) by a prototype. In K-Means, the prototype is the mean (or centroid).
Exercise 1
Name another parameter that we could have chosen as a prototype?
When would this parameter be more suited than the centroid?
End of explanation
#import Kmeans class for the cluster module
from sklearn.cluster import KMeans
#instantiate the model
km = KMeans(n_clusters=3, init='random', n_init=10, max_iter=300, tol=1e-04, random_state=0)
Explanation: Steps in the K-means algorithm:
Choose k centroids from the sample points as initial cluster centers.
Assign each data point to the nearest centroid (based on Euclidean distance).
Update the centroid locations to the mean of the samples that were assigned to it.
Repeat steps 2 and 3 till the cluster assignments do not change, or, a pre-defined tolerance, or, a maximum number of iterations is reached.
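Written out in NumPy, the loop is only a few lines; the sketch below is bare-bones and ignores empty clusters and multiple restarts:
```python
import numpy as np

def kmeans_sketch(X, k, n_iter=300, seed=0):
    rng = np.random.RandomState(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]       # step 1: initial centers
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                          # step 2: nearest centroid
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])  # step 3
        if np.allclose(new_centroids, centroids):              # step 4: assignments settled
            break
        centroids = new_centroids
    return centroids, labels
```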
End of explanation
#fitting the model to the data
y_km = km.fit_predict(X)
plt.scatter(X[y_km==0,0], X[y_km ==0,1], s=50, c='lightgreen', marker='o', label='Group A')
plt.scatter(X[y_km ==1,0], X[y_km ==1,1], s=50, c='orange', marker='o', label='Group B')
plt.scatter(X[y_km ==2,0], X[y_km ==2,1], s=50, c='white', marker='o', label='Group C')
plt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1], s=50, marker='o', c='black', label='Centers')
plt.legend()
Explanation: The arguments to the algorithm:
* n_clusters: The number of groups to be divided in.
* n_init: The number of different initial random centroids to be run.
* max_iter: The maximum number of iterations for each single run.
* tol: Cut-off for the changes in the within-cluster sum-squared-error.
End of explanation
display(Image(filename='1.png'))
from sklearn.datasets import load_iris
iris = load_iris()
n_samples, n_features = iris.data.shape
X, y = iris.data, iris.target
f, axarr = plt.subplots(2, 2)
axarr[0, 0].scatter(iris.data[:, 0], iris.data[:, 1],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[0, 0].set_title('Sepal length versus width')
axarr[0, 1].scatter(iris.data[:, 1], iris.data[:, 2],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[0, 1].set_title('Sepal width versus Petal Length')
axarr[1, 0].scatter(iris.data[:, 2], iris.data[:, 3],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[1, 0].set_title('Petal length versus width')
axarr[1, 1].scatter(iris.data[:, 0], iris.data[:, 2],c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
axarr[1, 1].set_title('Sepal length versus Petal length')
plt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False);
plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False);
#Instantiate and fit the model here
Explanation: Exercise 2
Clustering the iris dataset based on sepal and petal lengths and widths.
End of explanation
x=np.arange(100)
eps=50*np.random.randn(100)
y=2*x+eps
plt.scatter(x,y)
plt.xlabel("X")
plt.ylabel("Y")
from sklearn.linear_model import LinearRegression
model=LinearRegression(normalize=True)
X=x[:,np.newaxis]
model.fit(X,y)
X_fit=x[:,np.newaxis]
y_pred=model.predict(X_fit)
plt.scatter(x,y)
plt.plot(X_fit,y_pred,linewidth=2)
plt.xlabel("X")
plt.ylabel("Y")
print model.coef_
print model.intercept_
# So a unit change in X is associated with a ___ change in Y.
Explanation: Regression
End of explanation
import pandas as pd
data=pd.read_csv('addata.csv', index_col=0)
data.head(5)
#from sklearn.linear_model import LinearRegression
from sklearn import linear_model
clf=linear_model.LinearRegression()
feature_cols=["TV","Radio","Newspaper"]
X=data[feature_cols]
y=data["Sales"]
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
#Fit the model and print the coefficients here
#Make predictions for the test dataset here
from sklearn import metrics
print np.sqrt(metrics.mean_squared_error(y_test,y_pred)) #RMSE
Explanation: Exercise 3
Linear Regression over a multi-dimensional data set. The data exhibits the advertising expenditure over TV, radio and the print media, versus the change in sales of the product.
End of explanation |
4,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'> </a>
Author
Step1: Cosmic-ray composition spectrum analysis
Table of contents
Define analysis free parameters
Data preprocessing
Fitting random forest
Fraction correctly identified
Spectrum
Unfolding
Feature importance
Step2: Define analysis free parameters
[ back to top ]
Step3: Whether or not to train on 'light' and 'heavy' composition classes, or the individual compositions
Step4: Get composition classifier pipeline
Step5: Define energy binning for this analysis
Step6: Data preprocessing
[ back to top ]
1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
4. Feature transformation
Step7: Run classifier over training and testing sets to get an idea of the degree of overfitting
Step8: Fraction correctly identified
[ back to top ]
Step10: Calculate classifier generalization error via 10-fold CV
Step11: Spectrum
[ back to top ]
Step12: Unfolding
[ back to top ] | Python Code:
%load_ext watermark
%watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend
Explanation: <a id='top'> </a>
Author: James Bourbeau
End of explanation
%matplotlib inline
from __future__ import division, print_function
from collections import defaultdict
import itertools
import numpy as np
from scipy import interp
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn.apionly as sns
import matplotlib as mpl
from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc, classification_report
from sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, KFold, StratifiedKFold
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
import composition as comp
import composition.analysis.plotting as plotting
color_dict = comp.analysis.get_color_dict()
Explanation: Cosmic-ray composition spectrum analysis
Table of contents
Define analysis free parameters
Data preprocessing
Fitting random forest
Fraction correctly identified
Spectrum
Unfolding
Feature importance
End of explanation
bin_midpoints, _, counts, counts_err = comp.get1d('/home/jbourbeau/PyUnfold/unfolded_output_h3a.root', 'NC', 'Unf_ks_ACM/bin0')
Explanation: Define analysis free parameters
[ back to top ]
End of explanation
comp_class = True
comp_list = ['light', 'heavy'] if comp_class else ['P', 'He', 'O', 'Fe']
Explanation: Whether or not to train on 'light' and 'heavy' composition classes, or the individual compositions
End of explanation
pipeline_str = 'GBDT'
pipeline = comp.get_pipeline(pipeline_str)
Explanation: Get composition classifier pipeline
End of explanation
energybins = comp.analysis.get_energybins()
Explanation: Define energy binning for this analysis
End of explanation
sim_train, sim_test = comp.preprocess_sim(comp_class=comp_class, return_energy=True)
# Compute the correlation matrix
df_sim = comp.load_dataframe(datatype='sim', config='IC79')
feature_list, feature_labels = comp.analysis.get_training_features()
fig, ax = plt.subplots()
df_sim[df_sim.MC_comp_class == 'light'].avg_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)
df_sim[df_sim.MC_comp_class == 'heavy'].avg_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)
ax.grid()
plt.show()
fig, ax = plt.subplots()
df_sim[df_sim.MC_comp_class == 'light'].invcharge_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)
df_sim[df_sim.MC_comp_class == 'heavy'].invcharge_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)
ax.grid()
plt.show()
fig, ax = plt.subplots()
df_sim[df_sim.MC_comp_class == 'light'].max_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)
df_sim[df_sim.MC_comp_class == 'heavy'].max_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)
ax.grid()
plt.show()
corr = df_sim[feature_list].corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
fig, ax = plt.subplots()
sns.heatmap(corr, mask=mask, cmap='RdBu_r', center=0,
square=True, xticklabels=feature_labels, yticklabels=feature_labels,
linewidths=.5, cbar_kws={'label': 'Covariance'}, annot=True, ax=ax)
# outfile = args.outdir + '/feature_covariance.png'
# plt.savefig(outfile)
plt.show()
label_replacement = {feature: labels for feature, labels in zip(feature_list, feature_labels)}
with plt.rc_context({'text.usetex': False}):
g = sns.pairplot(df_sim.sample(frac=1)[:1000], vars=feature_list, hue='MC_comp_class',
plot_kws={'alpha': 0.5, 'linewidth': 0},
diag_kws={'histtype': 'step', 'linewidth': 2, 'fill': True, 'alpha': 0.75, 'bins': 15})
for i in range(len(feature_list)):
for j in range(len(feature_list)):
xlabel = g.axes[i][j].get_xlabel()
ylabel = g.axes[i][j].get_ylabel()
if xlabel in label_replacement.keys():
g.axes[i][j].set_xlabel(label_replacement[xlabel])
if ylabel in label_replacement.keys():
g.axes[i][j].set_ylabel(label_replacement[ylabel])
g.fig.get_children()[-1].set_title('Comp class')
# g.fig.get_children()[-1].set_bbox_to_anchor((1.1, 0.5, 0, 0))
data = comp.preprocess_data(comp_class=comp_class, return_energy=True)
is_finite_mask = np.isfinite(data.X)
not_finite_mask = np.logical_not(is_finite_mask)
finite_data_mask = np.logical_not(np.any(not_finite_mask, axis=1))
data = data[finite_data_mask]
Explanation: Data preprocessing
[ back to top ]
1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
4. Feature transformation
End of explanation
clf_name = pipeline.named_steps['classifier'].__class__.__name__
print('=' * 30)
print(clf_name)
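# Optional spectral weights (an E^-1.7 re-weighting); only used by the commented-out weighted fit below.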
weights = sim_train.energy**-1.7
pipeline.fit(sim_train.X, sim_train.y)
# pipeline.fit(sim_train.X, sim_train.y, classifier__sample_weight=weights)
train_pred = pipeline.predict(sim_train.X)
train_acc = accuracy_score(sim_train.y, train_pred)
print('Training accuracy = {:.2%}'.format(train_acc))
test_pred = pipeline.predict(sim_test.X)
test_acc = accuracy_score(sim_test.y, test_pred)
print('Testing accuracy = {:.2%}'.format(test_acc))
print('=' * 30)
num_features = len(feature_list)
importances = pipeline.named_steps['classifier'].feature_importances_
indices = np.argsort(importances)[::-1]
fig, ax = plt.subplots()
for f in range(num_features):
print('{}) {}'.format(f + 1, importances[indices[f]]))
plt.ylabel('Feature Importances')
plt.bar(range(num_features),
importances[indices],
align='center')
plt.xticks(range(num_features),
feature_labels[indices], rotation=90)
plt.xlim([-1, len(feature_list)])
plt.show()
Explanation: Run classifier over training and testing sets to get an idea of the degree of overfitting
End of explanation
def get_frac_correct(train, test, pipeline, comp_list):
assert isinstance(train, comp.analysis.DataSet), 'train dataset must be a DataSet'
assert isinstance(test, comp.analysis.DataSet), 'test dataset must be a DataSet'
assert train.y is not None, 'train must have true y values'
assert test.y is not None, 'test must have true y values'
pipeline.fit(train.X, train.y)
test_predictions = pipeline.predict(test.X)
correctly_identified_mask = (test_predictions == test.y)
# Construct MC composition masks
MC_comp_mask = {}
for composition in comp_list:
MC_comp_mask[composition] = (test.le.inverse_transform(test.y) == composition)
MC_comp_mask['total'] = np.array([True]*len(test))
reco_frac, reco_frac_err = {}, {}
for composition in comp_list+['total']:
comp_mask = MC_comp_mask[composition]
# Get number of MC comp in each reco energy bin
num_MC_energy = np.histogram(test.log_energy[comp_mask],
bins=energybins.log_energy_bins)[0]
num_MC_energy_err = np.sqrt(num_MC_energy)
# Get number of correctly identified comp in each reco energy bin
num_reco_energy = np.histogram(test.log_energy[comp_mask & correctly_identified_mask],
bins=energybins.log_energy_bins)[0]
num_reco_energy_err = np.sqrt(num_reco_energy)
# Calculate correctly identified fractions as a function of MC energy
reco_frac[composition], reco_frac_err[composition] = comp.ratio_error(
num_reco_energy, num_reco_energy_err,
num_MC_energy, num_MC_energy_err)
return reco_frac, reco_frac_err
Explanation: Fraction correctly identified
[ back to top ]
End of explanation
# Split training data into CV training and testing folds
kf = KFold(n_splits=10)
frac_correct_folds = defaultdict(list)
fold_num = 0
print('Fold ', end='')
for train_index, test_index in kf.split(sim_train.X):
fold_num += 1
print('{}...'.format(fold_num), end='')
reco_frac, reco_frac_err = get_frac_correct(sim_train[train_index],
sim_train[test_index],
pipeline, comp_list)
for composition in comp_list:
frac_correct_folds[composition].append(reco_frac[composition])
frac_correct_folds['total'].append(reco_frac['total'])
frac_correct_gen_err = {key: np.std(frac_correct_folds[key], axis=0) for key in frac_correct_folds}
# scores = np.array(frac_correct_folds['total'])
# score = scores.mean(axis=1).mean()
# score_std = scores.mean(axis=1).std()
avg_frac_correct_data = {'values': np.mean(frac_correct_folds['total'], axis=0), 'errors': np.std(frac_correct_folds['total'], axis=0)}
avg_frac_correct, avg_frac_correct_err = comp.analysis.averaging_error(**avg_frac_correct_data)
reco_frac, reco_frac_stat_err = get_frac_correct(sim_train, sim_test, pipeline, comp_list)
# Plot fraction of events correctly classified vs energy
fig, ax = plt.subplots()
for composition in comp_list + ['total']:
err = np.sqrt(frac_correct_gen_err[composition]**2 + reco_frac_stat_err[composition]**2)
plotting.plot_steps(energybins.log_energy_midpoints, reco_frac[composition], err, ax,
color_dict[composition], composition)
plt.xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$')
ax.set_ylabel('Fraction correctly identified')
ax.set_ylim([0.0, 1.0])
ax.set_xlim([energybins.log_energy_min, energybins.log_energy_max])
ax.grid()
leg = plt.legend(loc='upper center', frameon=False,
bbox_to_anchor=(0.5, # horizontal
1.1),# vertical
ncol=len(comp_list)+1, fancybox=False)
# set the linewidth of each legend object
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
cv_str = 'Accuracy: {:0.2f}\% (+/- {:0.1f}\%)'.format(avg_frac_correct*100, avg_frac_correct_err*100)
ax.text(7.4, 0.2, cv_str,
ha="center", va="center", size=10,
bbox=dict(boxstyle='round', fc="white", ec="gray", lw=0.8))
plt.savefig('/home/jbourbeau/public_html/figures/frac-correct-{}.png'.format(pipeline_str))
plt.show()
# Plot the two-class decision scores
classifier_score = pipeline.decision_function(sim_train.X)
light_mask = sim_train.le.inverse_transform(sim_train.y) == 'light'
heavy_mask = sim_train.le.inverse_transform(sim_train.y) == 'heavy'
fig, ax = plt.subplots()
score_bins = np.linspace(-1, 1, 50)
ax.hist(classifier_score[light_mask], bins=score_bins, label='light', alpha=0.75)
ax.hist(classifier_score[heavy_mask], bins=score_bins, label='heavy', alpha=0.75)
ax.grid()
ax.legend()
plt.show()
import multiprocessing as mp
import random
import string
kf = KFold(n_splits=10)
frac_correct_folds = defaultdict(list)
# Define an output queue
output = mp.Queue()
# define an example function
def rand_string(length, output):
    """Generates a random string of numbers, lower- and uppercase chars."""
rand_str = ''.join(random.choice(
string.ascii_lowercase
+ string.ascii_uppercase
+ string.digits)
for i in range(length))
output.put(rand_str)
# Setup a list of processes that we want to run
processes = [mp.Process(target=get_frac_correct,
args=(sim_train[train_index],
sim_train[test_index],
pipeline, comp_list)) for train_index, test_index in kf.split(sim_train.X)]
# Run processes
for p in processes:
p.start()
# Exit the completed processes
for p in processes:
p.join()
# Get process results from the output queue
results = [output.get() for p in processes]
print(results)
Explanation: Calculate classifier generalization error via 10-fold CV
End of explanation
def get_num_comp_reco(train, test, pipeline, comp_list):
assert isinstance(train, comp.analysis.DataSet), 'train dataset must be a DataSet'
assert isinstance(test, comp.analysis.DataSet), 'test dataset must be a DataSet'
assert train.y is not None, 'train must have true y values'
pipeline.fit(train.X, train.y)
test_predictions = pipeline.predict(test.X)
# Get number of correctly identified comp in each reco energy bin
num_reco_energy, num_reco_energy_err = {}, {}
for composition in comp_list:
# print('composition = {}'.format(composition))
comp_mask = train.le.inverse_transform(test_predictions) == composition
# print('sum(comp_mask) = {}'.format(np.sum(comp_mask)))
print(test.log_energy[comp_mask])
num_reco_energy[composition] = np.histogram(test.log_energy[comp_mask],
bins=energybins.log_energy_bins)[0]
num_reco_energy_err[composition] = np.sqrt(num_reco_energy[composition])
num_reco_energy['total'] = np.histogram(test.log_energy, bins=energybins.log_energy_bins)[0]
num_reco_energy_err['total'] = np.sqrt(num_reco_energy['total'])
return num_reco_energy, num_reco_energy_err
df_sim = comp.load_dataframe(datatype='sim', config='IC79')
df_sim[['log_dEdX', 'num_millipede_particles']].corr()
max_zenith_rad = df_sim['lap_zenith'].max()
# Get number of events per energy bin
num_reco_energy, num_reco_energy_err = get_num_comp_reco(sim_train, data, pipeline, comp_list)
import pprint
pprint.pprint(num_reco_energy)
pprint.pprint(num_reco_energy_err)
# Solid angle
solid_angle = 2*np.pi*(1-np.cos(max_zenith_rad))
print(num_reco_energy['light'].sum())
print(num_reco_energy['heavy'].sum())
frac_light = num_reco_energy['light'].sum()/num_reco_energy['total'].sum()
print(frac_light)
# Live-time information
goodrunlist = pd.read_table('/data/ana/CosmicRay/IceTop_GRL/IC79_2010_GoodRunInfo_4IceTop.txt', skiprows=[0, 3])
goodrunlist.head()
livetimes = goodrunlist['LiveTime(s)']
livetime = np.sum(livetimes[goodrunlist['Good_it_L2'] == 1])
print('livetime (seconds) = {}'.format(livetime))
print('livetime (days) = {}'.format(livetime/(24*60*60)))
fig, ax = plt.subplots()
for composition in comp_list + ['total']:
# Calculate dN/dE
y = num_reco_energy[composition]
y_err = num_reco_energy_err[composition]
plotting.plot_steps(energybins.log_energy_midpoints, y, y_err,
ax, color_dict[composition], composition)
ax.set_yscale("log", nonposy='clip')
plt.xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$')
ax.set_ylabel('Counts')
# ax.set_xlim([6.3, 8.0])
# ax.set_ylim([10**-6, 10**-1])
ax.grid(linestyle=':')
leg = plt.legend(loc='upper center', frameon=False,
bbox_to_anchor=(0.5, # horizontal
1.1),# vertical
ncol=len(comp_list)+1, fancybox=False)
# set the linewidth of each legend object
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
plt.savefig('/home/jbourbeau/public_html/figures/rate.png')
plt.show()
fig, ax = plt.subplots()
for composition in comp_list + ['total']:
# Calculate dN/dE
y = num_reco_energy[composition]
y_err = num_reco_energy_err[composition]
# Add time duration
# y = y / livetime
# y_err = y / livetime
y, y_err = comp.analysis.ratio_error(y, y_err, livetime, 0.005*livetime)
plotting.plot_steps(energybins.log_energy_midpoints, y, y_err,
ax, color_dict[composition], composition)
ax.set_yscale("log", nonposy='clip')
plt.xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$')
ax.set_ylabel('Rate [s$^{-1}$]')
# ax.set_xlim([6.3, 8.0])
# ax.set_ylim([10**-6, 10**-1])
ax.grid(linestyle=':')
leg = plt.legend(loc='upper center', frameon=False,
bbox_to_anchor=(0.5, # horizontal
1.1),# vertical
ncol=len(comp_list)+1, fancybox=False)
# set the linewidth of each legend object
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
plt.savefig('/home/jbourbeau/public_html/figures/rate.png')
plt.show()
df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config='IC79', return_cut_dict=True)
selection_mask = np.array([True] * len(df_sim))
standard_cut_keys = ['IceTopQualityCuts', 'lap_InIce_containment',
'num_hits_1_60',
# 'num_hits_1_60', 'max_qfrac_1_60',
'InIceQualityCuts']
for key in standard_cut_keys:
selection_mask *= cut_dict_sim[key]
df_sim = df_sim[selection_mask]
def get_energy_res(df_sim, energy_bins):
reco_log_energy = df_sim['lap_log_energy'].values
MC_log_energy = df_sim['MC_log_energy'].values
energy_res = reco_log_energy - MC_log_energy
bin_centers, bin_medians, energy_err = comp.analysis.data_functions.get_medians(reco_log_energy,
energy_res,
energy_bins)
return np.abs(bin_medians)
def counts_to_flux(counts, counts_err, eff_area=156390.673059, livetime=1):
# Calculate dN/dE
y = counts/energybins.energy_bin_widths
y_err = counts_err/energybins.energy_bin_widths
# Add effective area
eff_area = np.array([eff_area]*len(y))
eff_area_error = np.array([0.01 * eff_area]*len(y_err))
y, y_err = comp.analysis.ratio_error(y, y_err, eff_area, eff_area_error)
# Add solid angle
y = y / solid_angle
y_err = y_err / solid_angle
# Add time duration
# y = y / livetime
# y_err = y / livetime
livetime = np.array([livetime]*len(y))
flux, flux_err = comp.analysis.ratio_error(y, y_err, livetime, 0.01*livetime)
# Add energy scaling
scaled_flux = energybins.energy_midpoints**2.7 * flux
scaled_flux_err = energybins.energy_midpoints**2.7 * flux_err
return scaled_flux, scaled_flux_err
# Plot fraction of events vs energy
# fig, ax = plt.subplots(figsize=(8, 6))
fig = plt.figure()
ax = plt.gca()
for composition in comp_list + ['total']:
y, y_err = counts_to_flux(num_reco_energy[composition], num_reco_energy_err[composition], livetime=livetime)
plotting.plot_steps(energybins.log_energy_midpoints, y, y_err, ax, color_dict[composition], composition)
ax.set_yscale("log", nonposy='clip')
plt.xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$')
# ax.set_ylabel('$\mathrm{E}^{2.7} \\frac{\mathrm{dN}}{\mathrm{dE dA d\Omega dt}} \ [\mathrm{GeV}^{1.7} \mathrm{m}^{-2} \mathrm{sr}^{-1} \mathrm{s}^{-1}]$')
ax.set_ylabel('$\mathrm{E}^{2.7} \ J(E) \ [\mathrm{GeV}^{1.7} \mathrm{m}^{-2} \mathrm{sr}^{-1} \mathrm{s}^{-1}]$')
ax.set_xlim([6.4, 9.0])
ax.set_ylim([10**2, 10**5])
ax.grid(linestyle='dotted', which="both")
# Add 3-year scraped flux
df_proton = pd.read_csv('3yearscraped/proton', sep='\t', header=None, names=['energy', 'flux'])
df_helium = pd.read_csv('3yearscraped/helium', sep='\t', header=None, names=['energy', 'flux'])
df_light = pd.DataFrame.from_dict({'energy': df_proton.energy,
'flux': df_proton.flux + df_helium.flux})
df_oxygen = pd.read_csv('3yearscraped/oxygen', sep='\t', header=None, names=['energy', 'flux'])
df_iron = pd.read_csv('3yearscraped/iron', sep='\t', header=None, names=['energy', 'flux'])
df_heavy = pd.DataFrame.from_dict({'energy': df_oxygen.energy,
'flux': df_oxygen.flux + df_iron.flux})
# if comp_class:
# ax.plot(np.log10(df_light.energy), df_light.flux, label='3 yr light',
# marker='.', ls=':')
# ax.plot(np.log10(df_heavy.energy), df_heavy.flux, label='3 yr heavy',
# marker='.', ls=':')
# ax.plot(np.log10(df_heavy.energy), df_heavy.flux+df_light.flux, label='3 yr total',
# marker='.', ls=':')
# else:
# ax.plot(np.log10(df_proton.energy), df_proton.flux, label='3 yr proton',
# marker='.', ls=':')
# ax.plot(np.log10(df_helium.energy), df_helium.flux, label='3 yr helium',
# marker='.', ls=':', color=color_dict['He'])
# ax.plot(np.log10(df_oxygen.energy), df_oxygen.flux, label='3 yr oxygen',
# marker='.', ls=':', color=color_dict['O'])
# ax.plot(np.log10(df_iron.energy), df_iron.flux, label='3 yr iron',
# marker='.', ls=':', color=color_dict['Fe'])
# ax.plot(np.log10(df_iron.energy), df_proton.flux+df_helium.flux+df_oxygen.flux+df_iron.flux, label='3 yr total',
# marker='.', ls=':', color='C2')
leg = plt.legend(loc='upper center', frameon=False,
bbox_to_anchor=(0.5, # horizontal
1.15),# vertical
ncol=len(comp_list)+1, fancybox=False)
# set the linewidth of each legend object
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
plt.savefig('/home/jbourbeau/public_html/figures/spectrum.png')
plt.show()
if not comp_class:
# Add 3-year scraped flux
df_proton = pd.read_csv('3yearscraped/proton', sep='\t', header=None, names=['energy', 'flux'])
df_helium = pd.read_csv('3yearscraped/helium', sep='\t', header=None, names=['energy', 'flux'])
df_oxygen = pd.read_csv('3yearscraped/oxygen', sep='\t', header=None, names=['energy', 'flux'])
df_iron = pd.read_csv('3yearscraped/iron', sep='\t', header=None, names=['energy', 'flux'])
# Plot fraction of events vs energy
fig, axarr = plt.subplots(2, 2, figsize=(8, 6))
for composition, ax in zip(comp_list + ['total'], axarr.flatten()):
# Calculate dN/dE
y = num_reco_energy[composition]/energybins.energy_bin_widths
y_err = num_reco_energy_err[composition]/energybins.energy_bin_widths
# Add effective area
y, y_err = comp.analysis.ratio_error(y, y_err, eff_area, eff_area_error)
# Add solid angle
y = y / solid_angle
y_err = y_err / solid_angle
# Add time duration
y = y / livetime
        y_err = y_err / livetime
y = energybins.energy_midpoints**2.7 * y
y_err = energybins.energy_midpoints**2.7 * y_err
plotting.plot_steps(energybins.log_energy_midpoints, y, y_err, ax, color_dict[composition], composition)
# Load 3-year flux
df_3yr = pd.read_csv('3yearscraped/{}'.format(composition), sep='\t',
header=None, names=['energy', 'flux'])
ax.plot(np.log10(df_3yr.energy), df_3yr.flux, label='3 yr {}'.format(composition),
marker='.', ls=':', color=color_dict[composition])
ax.set_yscale("log", nonposy='clip')
# ax.set_xscale("log", nonposy='clip')
ax.set_xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$')
ax.set_ylabel('$\mathrm{E}^{2.7} \\frac{\mathrm{dN}}{\mathrm{dE dA d\Omega dt}} \ [\mathrm{GeV}^{1.7} \mathrm{m}^{-2} \mathrm{sr}^{-1} \mathrm{s}^{-1}]$')
ax.set_xlim([6.3, 8])
ax.set_ylim([10**3, 10**5])
ax.grid(linestyle='dotted', which="both")
ax.legend()
plt.savefig('/home/jbourbeau/public_html/figures/spectrum.png')
plt.show()
Explanation: Spectrum
[ back to top ]
End of explanation
bin_midpoints, _, counts, counts_err = comp.get1d('/home/jbourbeau/PyUnfold/unfolded_output_h3a.root', 'NC', 'Unf_ks_ACM/bin0')
light_counts = counts[::2]
heavy_counts = counts[1::2]
light_counts, heavy_counts
fig, ax = plt.subplots()
for composition in comp_list + ['total']:
y, y_err = counts_to_flux(num_reco_energy[composition], num_reco_energy_err[composition], livetime=livetime)
plotting.plot_steps(energybins.log_energy_midpoints, y, y_err, ax, color_dict[composition], composition)
h3a_light_flux, h3a_flux_err = counts_to_flux(light_counts, np.sqrt(light_counts), livetime=livetime)
h3a_heavy_flux, h3a_flux_err = counts_to_flux(heavy_counts, np.sqrt(heavy_counts), livetime=livetime)
ax.plot(energybins.log_energy_midpoints, h3a_light_flux, ls=':', label='h3a light unfolded')
ax.plot(energybins.log_energy_midpoints, h3a_heavy_flux, ls=':', label='h3a heavy unfolded')
ax.set_yscale("log", nonposy='clip')
plt.xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$')
# ax.set_ylabel('$\mathrm{E}^{2.7} \\frac{\mathrm{dN}}{\mathrm{dE dA d\Omega dt}} \ [\mathrm{GeV}^{1.7} \mathrm{m}^{-2} \mathrm{sr}^{-1} \mathrm{s}^{-1}]$')
ax.set_ylabel('$\mathrm{E}^{2.7} \ J(E) \ [\mathrm{GeV}^{1.7} \mathrm{m}^{-2} \mathrm{sr}^{-1} \mathrm{s}^{-1}]$')
ax.set_xlim([6.4, 9.0])
ax.set_ylim([10**2, 10**5])
ax.grid(linestyle='dotted', which="both")
leg = plt.legend(loc='upper center', frameon=False,
bbox_to_anchor=(0.5, # horizontal
1.15),# vertical
ncol=len(comp_list)+1, fancybox=False)
# set the linewidth of each legend object
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
plt.savefig('/home/jbourbeau/public_html/figures/spectrum-unfolded.png')
plt.show()
Explanation: Unfolding
[ back to top ]
End of explanation |
4,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding Custom Operator Steps in Integration Schemes
In addition to forces that modify particle accelerations every timestep, we can use REBOUNDx to add operations that happen before and/or after each REBOUND timestep. rebx.add_operator will make reasonable default choices depending on the type of operator you are attaching to a simulation, but you can also manually specify exactly what you'd like to do. We show this here by generating custom splitting integration schemes, which are a powerful integration method for long-term dynamics (particularly symplectic ones). See Tamayo et al. 2019 for details and examples.
We begin by making a two-planet system
Step1: We now consider a first-order Kepler splitting (Wisdom-Holman map)
Step2: We now set sim.integrator to none, so that REBOUND doesn't do anything in addition to the operators that we include, and we add our two operators, specifying the fraction of sim.dt we want each operator to act over (here the full timestep of 1). In this case since we've turned off the REBOUND timestep altogether, it doesn't matter if we add the operator "pre" timestep or "post" timestep, so we could have left it out.
Note that adding operators pushes them onto a linked list, so they will get executed in the opposite order that you add them in. Here, like we wrote above, the interaction step would happen first, followed by the Kepler step
Step3: One can show (see Tamayo et al. 2019) that to leading order this scheme is equivalent to one where one integrates the motion exactly with IAS15, but one includes a half step backward in time before the IAS step, and a half step forward in time after, i.e.
$K(\frac{1}{2})IAS(1)K(-\frac{1}{2})$
Step4: We now integrate the orbits, track the energy errors and plot them | Python Code:
import rebound
import reboundx
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def makesim():
sim = rebound.Simulation()
sim.G = 4*np.pi**2
sim.add(m=1.)
sim.add(m=1.e-4, a=1.)
sim.add(m=1.e-4, a=1.5)
sim.move_to_com()
return sim
Explanation: Adding Custom Operator Steps in Integration Schemes
In addition to forces that modify particle accelerations every timestep, we can use REBOUNDx to add operations that happen before and/or after each REBOUND timestep. rebx.add_operator will make reasonable default choices depending on the type of operator you are attaching to a simulation, but you can also manually specify exactly what you'd like to do. We show this here by generating custom splitting integration schemes, which are a powerful integration method for long-term dynamics (particularly symplectic ones). See Tamayo et al. 2019 for details and examples.
We begin by making a two-planet system:
End of explanation
sim = makesim()
rebx = reboundx.Extras(sim)
kep = rebx.load_operator("kepler")
inter = rebx.load_operator("interaction")
Explanation: We now consider a first-order Kepler splitting (Wisdom-Holman map):
$K(1)I(1)$
i.e., kick particles according to their interparticle forces for a full timestep (I), then evolve particles along a Kepler orbit (K) for a full timestep.
We can build it up from kepler and interaction steps, so we begin by creating those
End of explanation
sim.integrator="none"
rebx.add_operator(kep, dtfraction=1., timing="pre")
rebx.add_operator(inter, dtfraction=1., timing="pre")
Explanation: We now set sim.integrator to none, so that REBOUND doesn't do anything in addition to the operators that we include, and we add our two operators, specifying the fraction of sim.dt we want each operator to act over (here the full timestep of 1). In this case since we've turned off the REBOUND timestep altogether, it doesn't matter if we add the operator "pre" timestep or "post" timestep, so we could have left it out.
Note that adding operators pushes them onto a linked list, so they will get executed in the opposite order that you add them in. Here, like we wrote above, the interaction step would happen first, followed by the Kepler step:
End of explanation
sim2 = makesim()
rebx2 = reboundx.Extras(sim2)
kep = rebx2.load_operator("kepler")
ias = rebx2.load_operator("ias15")
sim2.integrator="none"
rebx2.add_operator(kep, dtfraction=0.5, timing="pre")
rebx2.add_operator(ias, dtfraction=1, timing="pre")
rebx2.add_operator(kep, dtfraction=-0.5, timing="pre")
Explanation: One can show (see Tamayo et al. 2019) that to leading order this scheme is equivalent to one where one integrates the motion exactly with IAS15, but one includes a half step backward in time before the IAS step, and a half step forward in time after, i.e.
$K(\frac{1}{2})IAS(1)K(-\frac{1}{2})$
End of explanation
dt = 0.0037*sim.particles[1].P
sim.dt = dt
sim2.dt = dt
Nout = 1000
E0 = sim.calculate_energy()
Eerr = np.zeros(Nout)
Eerr2 = np.zeros(Nout)
times = np.linspace(0, 10, Nout)
for i, time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
sim2.integrate(time, exact_finish_time=0)
E = sim.calculate_energy()
E2 = sim2.calculate_energy()
Eerr[i] = np.abs((E-E0)/E0)
Eerr2[i] = np.abs((E2-E0)/E0)
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(times, Eerr, '.', label='1st-order Split')
ax.plot(times, Eerr2, '.', label='1st-order Modified IAS')
ax.set_yscale('log')
ax.set_xlabel('Time (Inner Planet Orbits)', fontsize=18)
ax.set_ylabel('Relative Energy Error', fontsize=18)
ax.legend(fontsize=18)
Explanation: We now integrate the orbits, track the energy errors and plot them:
End of explanation |
4,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 4
Imports
Step1: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$
Step2: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
Step4: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step5: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 4
Imports
End of explanation
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
    return (n - 1) * np.eye(n, dtype=int)

D = complete_deg(5)
print(D)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
# YOUR CODE HERE
raise NotImplementedError()
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
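For reference, a minimal sketch of one possible implementation is shown below (illustrative only; the name complete_adj_example is hypothetical and the exercise stub above is left as-is):
def complete_adj_example(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
    return np.ones((n, n), dtype=int) - np.eye(n, dtype=int)

print(complete_adj_example(5))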
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation |
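A minimal sketch of one way to explore the spectrum (illustrative only; D and A are rebuilt inline so the sketch is self-contained):
import numpy as np

for n in range(2, 11):
    D = (n - 1) * np.eye(n, dtype=int)                     # degree matrix of K_n
    A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)  # adjacency matrix of K_n
    L = D - A                                              # graph Laplacian of K_n
    print(n, np.round(np.linalg.eigvalsh(L), 6))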
4,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Weights and Bias
Weight Shape
Step1: Bias Shape
Step2: Convolutional Layers
Step3: Activation Shape
Step4: Activation2 Shape
Step5: Fully Connected Layer
The output of a Convolutional layer must be flattened to a single layer
Reshape | Python Code:
import numpy as np
import tensorflow as tf

# 3 x 3 filter shape
filter1 = [
[.1, .1, .2],
[.1, .1, .2],
[.2, .2, .2],
]
# Each filter only has one input channel (grey scale)
# 3 x 3 x 1
channel_filters1 = [filter1]
# We want to output 2 channels which requires another set of 3 x 3 x 1
filter2 = [
[.9, .5, .9],
[.5, .3, .5],
[.9, .5, .9],
]
channel_filters2 = [filter2]
# Initialized Weights
# 3 x 3 x 1 x 2
convolution_layer1 = [channel_filters1, channel_filters2]
print(convolution_layer1[0][0][2][0])
for filters in convolution_layer1:
for channel_filter in filters:
for row in channel_filter:
print(row)
print()
Explanation: Convolutional Weights and Bias
Weight Shape: 3 x 3 x 1 x 2
* Filter sizes: 3 x 3 (Modeler's choice)
* Input Channels: 1 (Greyscale)
* Output Channels: 2 (Modeler's choice)
End of explanation
biases_1 = [0.1, 0.1]
Explanation: Bias Shape: 2
Matches the number of output channels
End of explanation
# Number of pixels to shift when evaluating a filter
stride_1 = 1
# Transpose to match inputs
W1 = tf.Variable(np.transpose(convolution_layer1), dtype=tf.float32)
B1 = tf.Variable(biases_1, dtype=tf.float32)
print(W1.shape)
Explanation: Convolutional Layers
End of explanation
stride_shape = [1, stride_1, stride_1, 1]
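# X is the input image placeholder; it (and the sample batch input_x used below)
# is assumed to be defined in an earlier cell of the original notebook.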
preactivation = tf.nn.conv2d(X, W1, strides=stride_shape, padding='SAME') + B1
activation_1 = tf.nn.relu(preactivation)
print(activation_1.shape)
# Create a session
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
x = sess.run(W1)
# Transpose to match our model
print(np.transpose(x))
# Transpose to match our model
feed_dict = {X: input_x}
Y1 = activation_1.eval(session=sess, feed_dict=feed_dict)
print(np.round_(np.transpose(Y1), 1))
Explanation: Activation Shape: 4 x 4 x 2
dimension_1 = height / stride
dimension_2 = width / stride
dimension_3 = output_channel
End of explanation
init_2 = tf.truncated_normal([4, 4, 2, 4], stddev=0.1)
W2 = tf.Variable(init_2)
B2 = tf.Variable(tf.ones([4])/10)
stride_2 = 2
strides = [1, stride_2, stride_2, 1]
preactivate = tf.nn.conv2d(activation_1, W2, strides=strides, padding='SAME') + B2
activation_2 = tf.nn.relu(preactivate)
print(activation_2.shape)
Explanation: Activation2 Shape: 2 x 2 x 4
dimension_1 = d1 / stride
dimension_2 = d2 / stride
dimension_3 = output_channel
End of explanation
# reshape the output from the second convolution for the fully connected layer
reduced = int(np.multiply.reduce(list(activation_2.shape[1:])))
re_shape = [-1, reduced]
fully_connected_input = tf.reshape(activation_2, shape=re_shape)
print(fully_connected_input.shape)
fully_connected_nodes = 6
fc_w_init = tf.truncated_normal([reduced, fully_connected_nodes], stddev=0.1)
fully_connected_weights = tf.Variable(fc_w_init)
fc_b_init = tf.ones([fully_connected_nodes])/10
fully_connected_biases = tf.Variable(fc_b_init)
preactivate = tf.matmul(fully_connected_input, fully_connected_weights) + fully_connected_biases
fully_connected_activate = tf.nn.relu(preactivate)
print(fully_connected_activate.shape)
Explanation: Fully Connected Layer
The output of a Convolutional layer must be flattened to a single layer
Reshape: [2, 2, 4] --> [16]
Select a number of nodes to output, like a traditional ANN layer (6 in this toy example; a full-sized network might use 200 or more).
End of explanation |
4,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coal production in mines 2013
by: Jonathan Whitmore
Step1: Cleaned Data
We cleaned this data in the notebook stored in: deliver/Data_cleaning.ipynb
Step2: Predict the Production of coal mines
Step3: Random Forest Regressor | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import explained_variance_score, r2_score, mean_squared_error
sns.set();
Explanation: Coal production in mines 2013
by: Jonathan Whitmore
Abstract: We did a lot of analysis and came to some interesting conclusions.
End of explanation
df = pd.read_csv("../data/cleaned_coalpublic2013.csv", index_col='MSHA ID')
df[['Year', 'Mine_Name']].head()
Explanation: Cleaned Data
We cleaned this data in the notebook stored in: deliver/Data_cleaning.ipynb
End of explanation
features = ['Average_Employees',
'Labor_Hours',
]
categoricals = ['Mine_State',
'Mine_County',
'Mine_Status',
'Mine_Type',
'Company_Type',
'Operation_Type',
'Union_Code',
'Coal_Supply_Region',
]
target = 'log_production'
sns.set_context('poster')
fig = plt.subplots(figsize=(14,8))
sns.violinplot(y="Company_Type", x="log_production", data=df,
split=True, inner="stick");
plt.tight_layout()
plt.savefig("../figures/Coal_prediction_company_type_vs_log_production.png")
dummy_categoricals = []
for categorical in categoricals:
# Avoid the dummy variable trap!
drop_var = sorted(df[categorical].unique())[-1]
temp_df = pd.get_dummies(df[categorical], prefix=categorical)
df = pd.concat([df, temp_df], axis=1)
temp_df.drop('_'.join([categorical, str(drop_var)]), axis=1, inplace=True)
dummy_categoricals += temp_df.columns.tolist()
Explanation: Predict the Production of coal mines
End of explanation
train, test = train_test_split(df, test_size=0.3)
rf = RandomForestRegressor(n_estimators=100, oob_score=True)
rf.fit(train[features + dummy_categoricals], train[target])
fig = plt.subplots(figsize=(8,8))
sns.regplot(test[target], rf.predict(test[features + dummy_categoricals]), color='green')
plt.ylabel("Predicted production")
plt.xlim(0, 22)
plt.ylim(0, 22)
plt.tight_layout()
plt.savefig("../figures/Coal-production-RF-prediction.png")
predicted = rf.predict(test[features + dummy_categoricals])
print "R^2 score:", r2_score(test[target], predicted)
print "MSE:", mean_squared_error(test[target], predicted)
rf_importances = pd.DataFrame({'name':train[features + dummy_categoricals].columns,
'importance':rf.feature_importances_
}).sort_values(by='importance',
ascending=False).reset_index(drop=True)
rf_importances.head(5)
Explanation: Random Forest Regressor
End of explanation |
4,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TLE matching for Lucky-7 among the TLEs for 2019-038 launch using on-board GPS data.
Step2: SpaceTrack latest TLEs for objects 44387 - 44419 retrieved on 2019-07-25.
Step3: Load Lucky-7 GPS data taken on 2019-07-22 from 14:05 to 22:29 UTC.
Step4: Compute and show the 10 TLEs that best match the GPS data.
Step5: It seems there is a discontinuity in GPS measurements between measurement 329 and 330. This is probably caused by some jump in the GPS clock. The measurements after this discontinuity are rejected in GMAT orbit determination.
Step6: More information about the discontinuity. If we look at the fractional parts of the GPS TOW timestamps, we see that they jump from 0.64 to 0.65 precisely when the discontinuity happens.
Step7: Read orbit determination report to compute residuals.
Step8: Read orbit propagation to compute the VNB frame.
Step9: Convert ECEF residuals to ECI using Astropy.
Step10: Interpolate orbit propagation to GPS measurement timestamps.
Step11: Compute rotation matrices from ECI to VNB frames.
Step12: Apply rotation matrices to ECI residuals to get VNB residuals. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from skyfield.sgp4lib import EarthSatellite
from skyfield.constants import AU_KM, DAY_S
from skyfield.functions import length_of
import skyfield.api
import tabulate
from IPython.display import HTML, display
import datetime
import astropy.coordinates
import astropy.units
import astropy.time
import pymap3d
Explanation: TLE matching for Lucky-7 among the TLEs for 2019-038 launch using on-board GPS data.
End of explanation
tle_lines = """1 44387U 19038A 19206.33263458 -.00000022 00000-0 89680-5 0 9990
2 44387 98.5700 168.8669 0002514 51.7553 308.3852 14.23333186 2857
1 44388U 19038B 19206.36606677 .00001117 00000-0 10440-3 0 9999
2 44388 97.6849 168.1317 0022451 173.5235 186.6281 14.95946237 2991
1 44389U 19038C 19206.35937814 -.00000014 00000-0 38734-5 0 9993
2 44389 97.6827 168.1189 0021313 165.8587 194.3236 14.96428939 2996
1 44390U 19038D 19206.35871831 .00000027 00000-0 75041-5 0 9996
2 44390 97.6805 168.1107 0021062 166.1919 193.9883 14.96478435 2997
1 44391U 19038E 19206.35779695 .00000050 00000-0 95274-5 0 9998
2 44391 97.6825 168.1203 0021054 166.3223 193.8574 14.96548603 2911
1 44392U 19038G 19205.88690027 +.00000392 +00000-0 +26353-4 0 9997
2 44392 097.4910 167.4713 0021806 178.9933 181.1345 15.12195063002817
1 44393U 19038H 19206.15303022 .00000320 00000-0 22257-4 0 9995
2 44393 97.4910 167.7292 0022189 177.9175 182.2152 15.12083062 2975
1 44394U 19038J 19205.82309084 .00000380 00000-0 25812-4 0 9999
2 44394 97.4909 167.4032 0022186 179.3942 180.7319 15.12010425 2924
1 44395U 19038K 19206.15896261 .00000219 00000-0 16598-4 0 9997
2 44395 97.4895 167.7238 0025435 174.7112 185.4392 15.11626007 2963
1 44396U 19038L 19206.42148378 .00001396 00000-0 84950-4 0 9991
2 44396 97.4883 167.9801 0024845 173.5362 186.6271 15.11813598 3005
1 44397U 19038M 19206.16041197 .00000424 00000-0 28678-4 0 9990
2 44397 97.4910 167.7190 0025923 172.6862 187.4752 15.11519292 2971
1 44398U 19038N 19206.15963198 .00000636 00000-0 41021-4 0 9996
2 44398 97.4901 167.7249 0025611 174.4923 185.6594 15.11582531 2979
1 44399U 19038P 19206.15922158 .00000271 00000-0 19634-4 0 9998
2 44399 97.4912 167.7222 0024945 179.1368 180.9909 15.11606882 2971
1 44400U 19038Q 19205.89416108 .00000254 00000-0 18674-4 0 9998
2 44400 97.4910 167.4617 0025520 173.7123 186.4433 15.11627714 2932
1 44401U 19038R 19206.15776714 .00000423 00000-0 28468-4 0 9991
2 44401 97.4910 167.7227 0025373 172.4970 187.6645 15.11722205 2978
1 44402U 19038S 19205.89270865 +.00000643 +00000-0 +41242-4 0 9997
2 44402 097.4889 167.4639 0025062 174.9579 185.1909 15.11748403002935
1 44403U 19038T 19205.89267484 +.00000788 +00000-0 +49749-4 0 9991
2 44403 097.4920 167.4613 0023765 178.1468 181.9855 15.11754092002930
1 44404U 19038U 19205.89172683 +.00000447 +00000-0 +29821-4 0 9999
2 44404 097.4909 167.4648 0022470 179.8087 180.3156 15.11819815002935
1 44405U 19038V 19205.89147434 .00000577 00000-0 37298-4 0 9997
2 44405 97.4877 167.4621 0024938 174.5796 185.5710 15.11844133 2935
1 44406U 19038W 19205.82562047 +.00000315 +00000-0 +22120-4 0 9990
2 44406 097.4909 167.3999 0024970 173.8943 186.2596 15.11811566002913
1 44407U 19038X 19205.89232946 +.00000457 +00000-0 +30400-4 0 9997
2 44407 097.4916 167.4598 0023734 178.5508 181.5795 15.11774397002924
1 44408U 19038Y 19205.89120858 +.00000132 +00000-0 +11466-4 0 9997
2 44408 097.4904 167.4577 0023490 178.9201 181.2085 15.11854096002937
1 44409U 19038Z 19205.89059028 +.00000930 +00000-0 +57706-4 0 9991
2 44409 097.4873 167.4616 0024823 174.2194 185.9328 15.11918345002937
1 44410U 19038AA 19205.89061514 +.00000571 +00000-0 +36935-4 0 9998
2 44410 097.4909 167.4663 0022512 179.1061 181.0213 15.11908437002932
1 44411U 19038AB 19206.35696713 .00000567 00000-0 36945-4 0 9992
2 44411 97.4929 167.9161 0023884 177.1778 182.9592 15.11677382 2672
1 44412U 19038AC 19205.89255725 .00000422 00000-0 28384-4 0 9993
2 44412 97.4908 167.4637 0022932 179.0776 181.0501 15.11755112 2628
1 44413U 19038AD 19206.35489650 .00000743 00000-0 47013-4 0 9998
2 44413 97.4910 167.9135 0023659 176.7496 183.3893 15.11839459 3009
1 44414U 19038AE 19205.89167858 +.00000266 +00000-0 +19261-4 0 9994
2 44414 097.4911 167.4659 0023970 179.6795 180.4455 15.11820252002469
1 44419U 19038F 19206.15083009 .00000356 00000-0 24253-4 0 9991
2 44419 97.4911 167.7323 0021800 178.0519 182.0801 15.12252398 2972
"""
tles = [EarthSatellite(*z) for z in zip(tle_lines.split('\n')[::2], tle_lines.split('\n')[1::2])]
Explanation: SpaceTrack latest TLEs for objects 44387 - 44419 retrieved on 2019-07-25.
End of explanation
lucky7_gps = np.loadtxt('lucky7_posW15.csv', delimiter = ',')
# filter out invalid positions (X = Y = Z = 0)
invalid_gps = np.all(lucky7_gps[:,:3] == 0, axis = 1)
lucky7_gps = lucky7_gps[~invalid_gps,:]
lucky7_gps[:, :3] *= 1e-3 # convert to km
ts = skyfield.api.load.timescale()
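# TAI runs a constant 19 s ahead of GPS time, and JD 2458685.5 is the start of
# GPS week 2063 (Sunday 2019-07-21 00:00 GPS), so the GPS time-of-week values
# below can be converted to TAI Julian dates.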
gps_to_tai_offset_days = 19/DAY_S
gps_week2063_jd_tai = 2458685.5 + gps_to_tai_offset_days
t = ts.tai(jd = gps_week2063_jd_tai + lucky7_gps[:,3]/DAY_S)
t[0].utc_datetime(), t[-1].utc_datetime()
Explanation: Load Lucky-7 GPS data taken on 2019-07-22 from 14:05 to 22:29 UTC.
End of explanation
tles_ITRF = np.array([tle.ITRF_position_velocity_error(t)[0] * AU_KM for tle in tles])
rms_error = np.sqrt(np.average(np.sum((tles_ITRF - lucky7_gps[:,:3].transpose().reshape(1,3,-1))**2, axis = 1), axis = 1))
output = [(tles[j].model.satnum, tles[j].epoch.utc_datetime().strftime('%Y-%m-%d %H:%M'), rms_error[j]) for j in np.argsort(rms_error)[:10]]
display(HTML(tabulate.tabulate(output, headers = ('NORAD', 'TLE epoch', 'RMS error (km)'), tablefmt = 'html')))
x, v = tles[np.argmin(rms_error)].ITRF_position_velocity_error(t[0])[:2]
x * AU_KM
v * AU_KM / DAY_S
t[0].tai - 2430000.0
t[-1].tai - 2430000.0
Explanation: Compute and show the 10 TLEs that best match the GPS data.
End of explanation
t[330].tai - 2430000.0
Explanation: It seems there is a discontinuity in GPS measurements between measurement 329 and 330. This is probably caused by some jump in the GPS clock. The measurements after this discontinuity are rejected in GMAT orbit determination.
End of explanation
lucky7_gps[:,3] % 1
np.where(np.diff(lucky7_gps[:,3] % 1))
tai_modjulian = t.tai - 2430000.0
gps_pos = tles_ITRF[np.argmin(rms_error),...]
with open('/tmp/gpsl7_data.gmd', 'w') as f:
for j in range(tai_modjulian.size):
pos = lucky7_gps[j,:3]
f.write(f'{tai_modjulian[j]} GPS_PosVec 9014 0 {pos[0]} {pos[1]} {pos[2]}\n')
Explanation: More information about the discontinuity. If we look at the fractional parts of the GPS TOW timestamps, we see that they jump from 0.64 to 0.65 precisely when the discontinuity happens.
End of explanation
with open('l7_estimation_report') as f:
report = f.readlines()
iterations = [(j, int(l.split()[2].strip(':'))) for j,l in enumerate(report) if l.split()[1:2] == ['ITERATION']]
last_iter = max([i[1] for i in iterations])-1
start_line = [i[0] for i in iterations if i[1] == last_iter][2]
end_line = [i[0] for i in iterations if i[1] == last_iter + 1][0]
residual_lines = [l for l in report[start_line:end_line] if l.split()[0:1] == [str(last_iter)]]
ecef_position = np.array([float(l.split()[-2]) for l in residual_lines]).reshape((-1,3))
ecef_residual = lucky7_gps[:,:3] - ecef_position
plt.figure(figsize = (12,8), facecolor = 'w')
plt.plot(t.utc_datetime(), ecef_residual * 1e3, '.');
plt.ylim((-100,100))
plt.xlabel('UTC time')
plt.ylabel('Residual (m)')
plt.legend(['X', 'Y', 'Z'])
plt.title('GPS residuals with respect to precise orbit (ECEF coordinates)');
Explanation: Read orbit determination report to compute residuals.
End of explanation
with open('l7_eph.oem') as f:
oem_lines = f.readlines()
oem_utc = np.array([np.datetime64(l.split()[0]) for l in oem_lines[18:]])
oem_state = np.array([l.split()[1:] for l in oem_lines[18:]], dtype = 'float')
timestamp = ((oem_utc - np.datetime64('1970-01-01T00:00:00'))/ np.timedelta64(1, 's'))
oem_t = ts.utc([datetime.datetime.utcfromtimestamp(t).replace(tzinfo = skyfield.api.utc) for t in timestamp])
oem_tai_modjulian = oem_t.tai - 2430000.0
Explanation: Read orbit propagation to compute the VNB frame.
End of explanation
r = astropy.coordinates.CartesianRepresentation(ecef_residual, xyz_axis = 1, unit = astropy.units.km)
obs_time = astropy.time.Time(t.utc_datetime())
eci_residual = np.array(astropy.coordinates.ITRS(r, obstime = obs_time).transform_to(astropy.coordinates.GCRS(obstime = obs_time)).cartesian.xyz).transpose()
Explanation: Convert ECEF residuals to ECI using Astropy.
End of explanation
oem_x = np.interp(tai_modjulian, oem_tai_modjulian, oem_state[:,0])
oem_y = np.interp(tai_modjulian, oem_tai_modjulian, oem_state[:,1])
oem_z = np.interp(tai_modjulian, oem_tai_modjulian, oem_state[:,2])
oem_vx = np.interp(tai_modjulian, oem_tai_modjulian, oem_state[:,3])
oem_vy = np.interp(tai_modjulian, oem_tai_modjulian, oem_state[:,4])
oem_vz = np.interp(tai_modjulian, oem_tai_modjulian, oem_state[:,5])
oem_eci = np.array([oem_x,oem_y,oem_z,oem_vx,oem_vy,oem_vz]).transpose()
Explanation: Interpolate orbit propagation to GPS measurement timestamps.
End of explanation
def crossprod(x,y):
return np.stack([x[:,1]*y[:,2] - x[:,2]*y[:,1], x[:,2]*y[:,0]-x[:,0]*y[:,2], x[:,0]*y[:,1]-x[:,1]*y[:,0]], axis = 1)
V = oem_eci[:,3:]
V = V / np.sqrt(np.sum(V**2, axis = 1)).reshape((-1,1))
R = oem_eci[:,:3]
N = crossprod(V,R)
N = N / np.sqrt(np.sum(N**2, axis = 1)).reshape((-1,1))
B = crossprod(V,N)
VNB_rot = np.stack((V,N,B), axis = 1)
Explanation: Compute rotation matrices from ECI to VNB frames.
End of explanation
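# Batched matrix-vector product: rotate each ECI residual into its corresponding VNB frame.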
vnb_residual = np.einsum('ijk,ik->ij', VNB_rot, eci_residual)
plt.figure(figsize = (12,8), facecolor = 'w')
plt.plot(t.utc_datetime(), vnb_residual * 1e3, '.');
plt.ylim((-100,100))
plt.xlabel('UTC time')
plt.ylabel('Residual (m)')
plt.legend(['V', 'N', 'B'])
plt.title('GPS residuals with respect to precise orbit (VNB coordinates)');
plt.figure(figsize = (12,8), facecolor = 'w')
plt.plot(t.utc_datetime(), vnb_residual * 1e3, '.');
plt.ylim((-30,30))
plt.xlabel('UTC time')
plt.ylabel('Residual (m)')
plt.legend(['V', 'N', 'B'])
plt.title('GPS residuals with respect to precise orbit (VNB coordinates)');
plt.figure(figsize = (12,8), facecolor = 'w')
correction = (lucky7_gps[:,3] % 1 - 0.64) * np.sqrt(np.sum(oem_eci[:,3:]**2, axis = 1))
correction = np.array([[1,0,0]]) * correction.reshape((-1,1))
plt.plot(t.utc_datetime(), (vnb_residual + correction) * 1e3, '.');
plt.ylim((-30,30))
plt.xlabel('UTC time')
plt.ylabel('Residual (m)')
plt.legend(['V', 'N', 'B'])
plt.title('GPS residuals with respect to precise orbit, including timestamp correction (VNB coordinates)');
Explanation: Apply rotation matrices to ECI residuals to get VNB residuals.
End of explanation |
4,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Verify producing the sames results.
Step1: General timing
Step2: API testing, making the same method calls and verifying results. Intentionally doing the full matrix in the (very unexpected event) that d(u, v) != d(v, u)
Step3: pw_distances scaling tests.
Step4: Extend to larger sample counts for fast unifrac | Python Code:
import numpy.testing as npt
ids, otu_ids, otu_data, t = get_random_samples(10, tree, True)
fu_mat = make_and_run_pw_distances(unifrac, otu_data, otu_ids=otu_ids, tree=t)
u_mat = pw_distances(unweighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
fwu_mat = make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t)
wu_mat = pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
fwun_mat = make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wun_mat = pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
npt.assert_almost_equal(fu_mat.data, u_mat.data)
npt.assert_almost_equal(fwu_mat.data, wu_mat.data)
npt.assert_almost_equal(fwun_mat.data, wun_mat.data)
Explanation: Verify producing the sames results.
End of explanation
%timeit make_and_run_pw_distances(unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit pw_distances(unweighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t)
%timeit make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
%timeit pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
Explanation: General timing
End of explanation
method_sets = [[unweighted_unifrac, unweighted_unifrac_fast],
[weighted_unifrac, weighted_unifrac_fast]]
ids, otu_ids, otu_data, t = get_random_samples(5, tree, True)
for i in range(len(otu_data)):
for j in range(len(otu_data)):
for method_set in method_sets:
method_results = []
for method in method_set:
method_results.append(method(otu_data[i], otu_data[j], otu_ids, t))
npt.assert_almost_equal(*method_results)
Explanation: API testing, making the same method calls and verifying results. Intentionally doing the full matrix in the (very unexpected event) that d(u, v) != d(v, u)
End of explanation
sample_counts = [2, 4, 8, 16, 32]
uw_times = []
uwf_times = []
w_times = []
wn_times = []
wf_times = []
wnf_times = []
uw_times_p = []
uwf_times_p = []
w_times_p = []
wn_times_p = []
wf_times_p = []
wnf_times_p = []
for n_samples in sample_counts:
ids, otu_ids, otu_data, t = get_random_samples(n_samples, tree, True)
# sheared trees
for times, method in [[uw_times_p, unweighted_unifrac], [w_times_p, weighted_unifrac]]:
result = %timeit -o pw_distances(method, otu_data, otu_ids=otu_ids, tree=t)
times.append(result.best)
result = %timeit -o pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wn_times_p.append(result.best)
for times, method in [[uwf_times_p, unifrac], [wf_times_p, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=t)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wnf_times_p.append(result.best)
# full trees
for times, method in [[uw_times, unweighted_unifrac], [w_times, weighted_unifrac]]:
result = %timeit -o pw_distances(method, otu_data, otu_ids=otu_ids, tree=tree)
times.append(result.best)
result = %timeit -o pw_distances(weighted_unifrac, otu_data, otu_ids=otu_ids, tree=tree, normalized=True)
wn_times.append(result.best)
for times, method in [[uwf_times, unifrac], [wf_times, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=tree)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=tree, normalized=True)
wnf_times.append(result.best)
fig = figure(figsize=(6,6))
plot(sample_counts, uw_times, '--', color='blue')
plot(sample_counts, w_times, '--', color='cyan')
plot(sample_counts, uwf_times, '--', color='red')
plot(sample_counts, wf_times, '--', color='orange')
plot(sample_counts, wn_times, '--', color='green')
plot(sample_counts, wnf_times, '--', color='black')
plot(sample_counts, uw_times_p, color='blue')
plot(sample_counts, w_times_p, color='cyan')
plot(sample_counts, uwf_times_p, color='red')
plot(sample_counts, wf_times_p, color='orange')
plot(sample_counts, wn_times_p, color='green')
plot(sample_counts, wnf_times_p, color='black')
legend_acronyms = [
('u', 'unweighted unifrac'),
('w', 'weighted unifrac'),
('fu', 'unweighted fast unifrac'),
('fw', 'weighted fast unifrac'),
('wn', 'weighted normalized unifrac'),
('fwn', 'weighted normalized fast unifrac'),
('u-p', 'unweighted unifrac pruned tree'),
('w-p', 'weighted unifrac pruned tree'),
('fu-p', 'unweighted fast unifrac pruned tree'),
('fw-p', 'weighted fast unifrac pruned tree'),
('wn-p', 'weighted normalized unifrac pruned tree'),
('fwn-p', 'weighted normalized fast unifrac pruned tree')
]
legend([i[0] for i in legend_acronyms], loc=2)
title("pw_distances scaling, with full trees")
xlabel('number of samples', fontsize=15)
ylabel('time (seconds)', fontsize=15)
xscale('log', basex=2)
yscale('log')
xlim(min(sample_counts), max(sample_counts))
ylim(min(uwf_times_p), max(w_times))
tick_params(axis='both', which='major', labelsize=12)
tick_params(axis='both', which='minor', labelsize=12)
savefig('edu vs fast.png')
for ac, name in legend_acronyms:
print("%s\t: %s" % (ac, name))
Explanation: pw_distances scaling tests.
End of explanation
sample_counts_ext = [64, 128, 256, 512, 1024]
for n_samples in sample_counts_ext:
print("sample count: %d" % n_samples)
ids, otu_ids, otu_data, t = get_random_samples(n_samples, tree, True)
for times, method in [[uwf_times_p, unifrac], [wf_times_p, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=t)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=t, normalized=True)
wnf_times_p.append(result.best)
for times, method in [[uwf_times, unifrac], [wf_times, w_unifrac]]:
result = %timeit -o make_and_run_pw_distances(method, otu_data, otu_ids=otu_ids, tree=tree)
times.append(result.best)
result = %timeit -o make_and_run_pw_distances(w_unifrac, otu_data, otu_ids=otu_ids, tree=tree, normalized=True)
wnf_times.append(result.best)
# at 4GB mem for 1024 set, counts array in this case is ~(1024 x 180000) or approx 1.4GB
# so not _that_ bad given the other resident data structures and notebook state.
sample_counts_ext = [64, 128, 256, 512, 1024]
sample_counts_full = sample_counts[:]
sample_counts_full.extend(sample_counts_ext)
fig = figure(figsize=(6,6))
plot(sample_counts_full, uwf_times, '--', color='red')
plot(sample_counts_full, wf_times, '--', color='orange')
plot(sample_counts_full, wnf_times, '--', color='black')
plot(sample_counts_full, uwf_times_p, color='red')
plot(sample_counts_full, wf_times_p, color='orange')
plot(sample_counts_full, wnf_times_p, color='black')
legend_acronyms = [
('fu', 'unweighted fast unifrac'),
('fw', 'weighted fast unifrac'),
('fwn', 'weighted normalized fast unifrac'),
('fu-p', 'unweighted fast unifrac pruned tree'),
('fw-p', 'weighted fast unifrac pruned tree'),
('fwn-p', 'weighted normalized fast unifrac pruned tree')
]
legend([i[0] for i in legend_acronyms], loc=2)
title("pw_distances scaling, extended fast unifrac")
xlabel('number of samples', fontsize=15)
ylabel('time (seconds)', fontsize=15)
xscale('log', basex=2)
yscale('log')
xlim(min(sample_counts_full), max(sample_counts_full))
ylim(min(uwf_times_p), max([uwf_times[-1], wf_times[-1], wnf_times[-1]]))
tick_params(axis='both', which='major', labelsize=12)
tick_params(axis='both', which='minor', labelsize=12)
savefig('fast extended.png')
for ac, name in legend_acronyms:
print("%s\t: %s" % (ac, name))
n_upper_tri = lambda n: max((n**2 / 2.0) - n, 1)
time_per_calc = lambda times, counts: [(t / n_upper_tri(c)) for t, c in zip(times, counts)]
fig = figure(figsize=(6,6))
plot(sample_counts_full, time_per_calc(uwf_times, sample_counts_full), '--', color='red')
plot(sample_counts_full, time_per_calc(wf_times, sample_counts_full), '--', color='orange')
plot(sample_counts_full, time_per_calc(wnf_times, sample_counts_full), '--', color='black')
plot(sample_counts_full, time_per_calc(uwf_times_p, sample_counts_full), color='red')
plot(sample_counts_full, time_per_calc(wf_times_p, sample_counts_full), color='orange')
plot(sample_counts_full, time_per_calc(wnf_times_p, sample_counts_full), color='black')
legend_acronyms = [
('fu', 'unweighted fast unifrac'),
('fw', 'weighted fast unifrac'),
('fwn', 'weighted normalized fast unifrac'),
('fu-p', 'unweighted fast unifrac pruned tree'),
('fw-p', 'weighted fast unifrac pruned tree'),
('fwn-p', 'weighted normalized fast unifrac pruned tree')
]
legend([i[0] for i in legend_acronyms], loc=2)
title("pw_distances scaling, fast unifrac extended")
xlabel('number of samples', fontsize=15)
ylabel('time (seconds) per pairwise calc', fontsize=15)
xscale('log', basex=2)
yscale('log')
xlim(min(sample_counts_full), max(sample_counts_full))
#ylim(min(uwf_times_p), max([uwf_times[-1], wf_times[-1], wnf_times[-1]]))
tick_params(axis='both', which='major', labelsize=12)
tick_params(axis='both', which='minor', labelsize=12)
savefig('fast extended per calc.png')
for ac, name in legend_acronyms:
print("%s\t: %s" % (ac, name))
Explanation: Extend to larger sample counts for fast unifrac
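For context on the per-calculation plot: with n samples, pw_distances performs one UniFrac computation per unordered pair of samples, i.e. n*(n-1)/2 calculations (the n_upper_tri helper above), so dividing total wall time by that count isolates the cost of a single pairwise comparison from the quadratic growth in the number of comparisons.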
End of explanation |
4,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to PyMC3
Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). This class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need to have specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.
PyMC3 Features
Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.
PyMC3's feature set helps to make Bayesian analysis as painless as possible. Here is a short list of some of its features
Step1: This example will generate 1000 posterior samples.
Step2: Motivating Example
Step3: We represent our conceptual model formally as a statistical model
Step4: We have done two things here. First, we have created a Model object; a Model is a Python object that encapsulates all of the variables that comprise our theoretical model, keeping them in a single container so that they may be used as a unit. After a Model is created, we can populate it with all of the model components that we specified when we wrote the model down.
Notice that the Model above was declared using a with statement. This expression is used to define a Python idiom known as a context manager. Context managers, in general, are used to manage resources of some kind within a program. In this case, our resource is a Model, and we would like to add variables to it so that we can fit our statistical model. The key characteristic of the context manager is that the resources it manages are only defined within the indented block corresponding to the with statement. PyMC uses this idiom to automatically add defined variables to a model. Thus, any variable we define is automatically added to the Model, without having to explicitly add it. This avoids the repetitive syntax of add methods/functions that you see in some machine learning packages
Step5: However, variables can be explicitly added to models without the use of a context manager, via the variable's optional model argument.
python
disaster_model = Model()
switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110, model=disaster_model)
Or, if we just want a discrete uniform distribution, and do not need to use it in a PyMC3 model necessarily, we can create one using the dist classmethod.
Step6: DiscreteUniform is an object that represents uniformly-distributed discrete variables. Use of this distribution
suggests that we have no preference a priori regarding the location of the switchpoint; all values are equally likely.
PyMC3 includes most of the common random variable distributions used for statistical modeling. For example, the following discrete random variables are available.
Step7: By having a library of variables that represent statistical distributions, users are relieved of having to code distributions themselves.
Similarly, we can create the exponentially-distributed variables early_mean and late_mean for the early and late Poisson rates, respectively (also in the context of the model disaster_model)
Step8: In this instance, we are told that the variables are being transformed. In PyMC3, variables with purely positive priors like Exponential are transformed with a log function. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named <variable name>_log) is added to the model for sampling. In this model this happens behind the scenes. Variables with priors that constrain them on two sides, like Beta or Uniform (continuous), are also transformed to be unconstrained but with a log odds transform.
Next, we define the variable rate, which selects the early rate early_mean for times before switchpoint and the late rate late_mean for times after switchpoint. We create rate using the switch function, which returns early_mean when the switchpoint is larger than (or equal to) a particular year, and late_mean otherwise.
Step9: The last step is to define the data likelihood, or sampling distribution. In this case, our measured outcome is the number of disasters in each year, disasters. This is a stochastic variable but unlike early_mean and late_mean we have observed its value. To express this, we set the argument observed to the observed sequence of disasters. This tells PyMC that this distribution's value is fixed, and should not be changed
Step10: The model that we specified at the top of the page has now been fully implemented in PyMC3. Let's have a look at the model's attributes to see what we have.
The stochastic nodes in the model are identified in the vars (i.e. variables) attribute
Step11: The last two variables are the log-transformed versions of the early and late rate parameters. The original variables have become deterministic nodes in the model, since they only represent values that have been back-transformed from the transformed variable, which has been subject to fitting or sampling.
Step12: You might wonder why rate, which is a deterministic component of the model, is not in this list. This is because, unlike the other components of the model, rate has not been given a name or a formal PyMC data structure. It is essentially an intermediate calculation in the model, implying that we are not interested in its value when it comes to summarizing the output from the model. Most PyMC objects have a name assigned; these names are used for storage and post-processing
Step13: Now, rate is included in the Model's deterministics list, and the model will retain its samples during MCMC sampling, for example.
Step14: Why are data and unknown variables represented by the same object?
Since it is represented by a PyMC random variable object, disasters is defined by its dependence on its parent rate even though its value is fixed. This isn't just a quirk of PyMC's syntax; Bayesian hierarchical notation itself makes no distinction between random variables and data. The reason is simple
Step15: PyMC's built-in distribution variables can also be used to generate random values from that variable. For example, the switchpoint, which is a discrete uniform random variable, can generate random draws
Step16: As we noted earlier, some variables have undergone transformations prior to sampling. Such variables have a transformed attribute that points to the variable they have been transformed into.
Step17: Variables will usually have an associated distribution, as determined by the constructor used to create it. For example, the switchpoint variable was created by calling DiscreteUniform(). Hence, its distribution is DiscreteUniform
Step18: As with all Python objects, the underlying type of a variable can be exposed with the type() function
Step19: We will learn more about these types in an upcoming section.
Variable log-probabilities
All PyMC3 stochastic variables can evaluate their probability mass or density functions at a particular value, given the values of their parents. The logarithm of a stochastic object's probability mass or density can be
accessed via the logp method.
Step20: For vector-valued variables like disasters, the logp method returns the sum of the log-probabilities (or log-densities) of all elements of the value, i.e. their joint log-probability.
Step21: Custom variables
Though we created the variables in disaster_model using well-known probability distributions that are available in PyMC3, it's possible to create custom distributions by wrapping functions that compute an arbitrary log-probability using the DensityDist function. For example, our initial example showed an exponential survival function, which accounts for censored data. If we pass this function as the logp argument for DensityDist, we can use it as the data likelihood in a survival model
Step22: This returns the Markov chain of draws from the model in a data structure called a trace.
Step23: The sample() function always takes at least one argument, draws, which specifies how many samples to draw. However, there are a number of additional optional arguments that are worth knowing about
Step24: The step argument is what allows users to manually override the sampling algorithms used to fit the model. For example, if we wanted to use a slice sampler to sample the early_mean and late_mean variables, we could specify it
Step25: Accessing the samples
The output of the sample function is a MultiTrace object, which stores the sequence of samples for each variable in the model. These traces can be accessed using dict-style indexing
Step26: The trace can also be sliced using the NumPy array slice [start
Step27: Sampling output
You can examine the marginal posterior of any variable by plotting a
histogram of its trace
Step28: PyMC has its own plotting functionality dedicated to plotting MCMC output. For example, we can obtain a time series plot of the trace and a histogram using traceplot
Step29: The left-hand pane of each figure shows the marginal posterior distribution of the parameter (as a histogram or kernel density estimate), while the right-hand pane shows the temporal series of the samples. The trace is useful for evaluating and diagnosing the algorithm's performance, while the distribution plot is useful for visualizing the posterior.
For a non-graphical summary of the posterior, simply use the summary function.
%load ../data/melanoma_data.py
%matplotlib inline
import seaborn as sns; sns.set_context('notebook')
from pymc3 import Normal, Model, DensityDist, sample, log, exp
with Model() as melanoma_survival:
# Convert censoring indicators to indicators for failure event
failure = (censored==0).astype(int)
# Parameters (intercept and treatment effect) for survival rate
beta = Normal('beta', mu=0.0, sd=1e5, shape=2)
# Survival rates, as a function of treatment
lam = exp(beta[0] + beta[1]*treat)
# Survival likelihood, accounting for censoring
def logp(failure, value):
return (failure * log(lam) - lam * value).sum()
x = DensityDist('x', logp, observed={'failure':failure, 'value':t})
Explanation: Introduction to PyMC3
Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). This class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need to have specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.
PyMC3 Features
Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.
PyMC3's feature set helps to make Bayesian analysis as painless as possible. Here is a short list of some of its features:
Fits Bayesian statistical models with Markov chain Monte Carlo, variational inference and
other algorithms.
Includes a large suite of well-documented statistical distributions.
Creates summaries including tables and plots.
Traces can be saved to the disk as plain text, SQLite or pandas dataframes.
Several convergence diagnostics and model checking methods are available.
Extensible: easily incorporates custom step methods and unusual probability distributions.
MCMC loops can be embedded in larger programs, and results can be analyzed with the full power of Python.
Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends.
End of explanation
with melanoma_survival:
trace = sample(1000)
from pymc3 import traceplot
traceplot(trace[500:], varnames=['beta']);
Explanation: This example will generate 1000 posterior samples.
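For reference, the custom logp defined above is the censored exponential survival log-likelihood: an observed failure contributes the log-density $\log\lambda_i - \lambda_i t_i$, while a censored observation contributes only the log-survival term $-\lambda_i t_i$, so that
$$\log L = \sum_i \left(\delta_i \log \lambda_i - \lambda_i t_i\right),$$
where $\delta_i$ is the failure indicator and $\lambda_i = \exp(\beta_0 + \beta_1\,\mathrm{treat}_i)$, which is exactly the expression coded in the logp function.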
End of explanation
import numpy as np
import matplotlib.pyplot as plt
disasters_data = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
n_years = len(disasters_data)
plt.figure(figsize=(12.5, 3.5))
plt.bar(np.arange(1851, 1962), disasters_data, color="#348ABD")
plt.xlabel("Year")
plt.ylabel("Disasters")
plt.title("UK coal mining disasters, 1851-1962")
plt.xlim(1851, 1962);
Explanation: Motivating Example: Coal mining disasters
Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period.
Let's build a model for this series and attempt to estimate when the change occurred.
End of explanation
from pymc3 import DiscreteUniform
with Model() as disaster_model:
switchpoint = DiscreteUniform('switchpoint', lower=0, upper=n_years)
Explanation: We represent our conceptual model formally as a statistical model:
$$\begin{array}{ccc}
(y_t \mid \tau, \lambda_1, \lambda_2) \sim \text{Poisson}\left(r_t\right), & r_t=\left\{
\begin{array}{lll}
\lambda_1 &\text{if}& t< \tau\\
\lambda_2 &\text{if}& t\ge \tau
\end{array}\right., & t\in[t_l,t_h]\\
\tau \sim \text{DiscreteUniform}(t_l, t_h)\\
\lambda_1\sim \text{Exponential}(a)\\
\lambda_2\sim \text{Exponential}(b)
\end{array}$$
Because we have defined $y$ by its dependence on $\tau$, $\lambda_1$ and $\lambda_2$, the latter three are known as the parents of $y$, and $y$ is called their child. Similarly, the parents of $\tau$ are $t_l$ and $t_h$, and $\tau$ is the child of $t_l$ and $t_h$.
Implementing a PyMC Model
At the model-specification stage (before the data are observed), $y$, $\tau$, $\lambda_1$, and $\lambda_2$ are all random variables. Bayesian "random" variables have not necessarily arisen from a physical random process. The Bayesian interpretation of probability is epistemic, meaning random variable $x$'s probability distribution $p(x)$ represents our knowledge and uncertainty about $x$'s value. Candidate values of $x$ for which $p(x)$ is high are relatively more probable, given what we know.
We can generally divide the variables in a Bayesian model into two types: stochastic and deterministic. The only deterministic variable in this model is $r$. If we knew the values of $r$'s parents, we could compute the value of $r$ exactly. A deterministic like $r$ is defined by a mathematical function that returns its value given values for its parents. Deterministic variables are sometimes called the systemic part of the model. The nomenclature is a bit confusing, because these objects usually represent random variables; since the parents of $r$ are random, $r$ is random also.
On the other hand, even if the values of the parents of variables switchpoint, disasters (before observing the data), early_mean or late_mean were known, we would still be uncertain of their values. These variables are stochastic, characterized by probability distributions that express how plausible their candidate values are, given values for their parents.
Let's begin by defining the unknown switchpoint as a discrete uniform random variable:
End of explanation
foo = DiscreteUniform('foo', lower=0, upper=10)
Explanation: We have done two things here. First, we have created a Model object; a Model is a Python object that encapsulates all of the variables that comprise our theoretical model, keeping them in a single container so that they may be used as a unit. After a Model is created, we can populate it with all of the model components that we specified when we wrote the model down.
Notice that the Model above was declared using a with statement. This expression is used to define a Python idiom known as a context manager. Context managers, in general, are used to manage resources of some kind within a program. In this case, our resource is a Model, and we would like to add variables to it so that we can fit our statistical model. The key characteristic of the context manager is that the resources it manages are only defined within the indented block corresponding to the with statement. PyMC uses this idiom to automatically add defined variables to a model. Thus, any variable we define is automatically added to the Model, without having to explicitly add it. This avoids the repetitive syntax of add methods/functions that you see in some machine learning packages:
python
model.add(a_variable)
model.add(another_variable)
model.add(yet_another_variable)
model.add(and_again)
model.add(please_kill_me_now)
...
In fact, PyMC variables cannot be defined without a corresponding Model:
End of explanation
x = DiscreteUniform.dist(lower=0, upper=100)
x
Explanation: However, variables can be explicitly added to models without the use of a context manager, via the variable's optional model argument.
python
disaster_model = Model()
switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110, model=disaster_model)
Or, if we just want a discrete uniform distribution, and do not need to use it in a PyMC3 model necessarily, we can create one using the dist classmethod.
End of explanation
from pymc3 import discrete
discrete.__all__
Explanation: DiscreteUniform is an object that represents uniformly-distributed discrete variables. Use of this distribution
suggests that we have no preference a priori regarding the location of the switchpoint; all values are equally likely.
PyMC3 includes most of the common random variable distributions used for statistical modeling. For example, the following discrete random variables are available.
End of explanation
from pymc3 import Exponential
with disaster_model:
early_mean = Exponential('early_mean', 1)
late_mean = Exponential('late_mean', 1)
Explanation: By having a library of variables that represent statistical distributions, users are relieved of having to code distributions themselves.
Similarly, we can create the exponentially-distributed variables early_mean and late_mean for the early and late Poisson rates, respectively (also in the context of the model disaster_model):
End of explanation
from pymc3 import switch
with disaster_model:
rate = switch(switchpoint >= np.arange(n_years), early_mean, late_mean)
Explanation: In this instance, we are told that the variables are being transformed. In PyMC3, variables with purely positive priors like Exponential are transformed with a log function. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named <variable name>_log) is added to the model for sampling. In this model this happens behind the scenes. Variables with priors that constrain them on two sides, like Beta or Uniform (continuous), are also transformed to be unconstrained but with a log odds transform.
Next, we define the variable rate, which selects the early rate early_mean for times before switchpoint and the late rate late_mean for times after switchpoint. We create rate using the switch function, which returns early_mean when the switchpoint is larger than (or equal to) a particular year, and late_mean otherwise.
End of explanation
from pymc3 import Poisson
with disaster_model:
disasters = Poisson('disasters', mu=rate, observed=disasters_data)
Explanation: The last step is to define the data likelihood, or sampling distribution. In this case, our measured outcome is the number of disasters in each year, disasters. This is a stochastic variable but unlike early_mean and late_mean we have observed its value. To express this, we set the argument observed to the observed sequence of disasters. This tells PyMC that this distribution's value is fixed, and should not be changed:
End of explanation
disaster_model.vars
Explanation: The model that we specified at the top of the page has now been fully implemented in PyMC3. Let's have a look at the model's attributes to see what we have.
The stochastic nodes in the model are identified in the vars (i.e. variables) attribute:
End of explanation
disaster_model.deterministics
Explanation: The last two variables are the log-transformed versions of the early and late rate parameters. The original variables have become deterministic nodes in the model, since they only represent values that have been back-transformed from the transformed variable, which has been subject to fitting or sampling.
End of explanation
from pymc3 import Deterministic
with disaster_model:
rate = Deterministic('rate', switch(switchpoint >= np.arange(n_years), early_mean, late_mean))
Explanation: You might wonder why rate, which is a deterministic component of the model, is not in this list. This is because, unlike the other components of the model, rate has not been given a name or a formal PyMC data structure. It is essentially an intermediate calculation in the model, implying that we are not interested in its value when it comes to summarizing the output from the model. Most PyMC objects have a name assigned; these names are used for storage and post-processing:
as keys in on-disk databases,
as axis labels in plots of traces,
as table labels in summary statistics.
If we wish to include rate in our output, we need to make it a Deterministic object, and give it a name:
End of explanation
disaster_model.deterministics
Explanation: Now, rate is included in the Model's deterministics list, and the model will retain its samples during MCMC sampling, for example.
End of explanation
disasters.dtype
early_mean.init_value
Explanation: Why are data and unknown variables represented by the same object?
Since it is represented by a PyMC random variable object, disasters is defined by its dependence on its parent rate even though its value is fixed. This isn't just a quirk of PyMC's syntax; Bayesian hierarchical notation itself makes no distinction between random variables and data. The reason is simple: to use Bayes' theorem to compute the posterior, we require the likelihood. Even though the value of disasters is known and fixed, we need to formally assign it a probability distribution as if it were a random variable. Remember, the likelihood and the probability function are essentially the same, except that the former is regarded as a function of the parameters and the latter as a function of the data. This point can be counterintuitive at first, as many people's instinct is to regard data as fixed a priori and unknown variables as dependent on the data.
One way to understand this is to think of statistical models as predictive models for data, or as models of the processes that gave rise to data. Before observing the value of disasters, we could have sampled from its prior predictive distribution $p(y)$ (i.e. the marginal distribution of the data) as follows:
Sample early_mean, switchpoint and late_mean from their
priors.
Sample disasters conditional on these values.
Even after we observe the value of disasters, we need to use this process model to make inferences about early_mean, switchpoint and late_mean because it's the only information we have about how the variables are related.
We will see later that we can sample from this fixed stochastic random variable, to obtain predictions after having observed our data.
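As a concrete illustration of the two-step prior predictive simulation described above, here is a minimal sketch using plain NumPy draws that mirror the priors defined earlier in this notebook (it reuses the np and n_years objects from the cells above; the variable names are just for illustration, and the PyMC3 variables themselves could equally be sampled):
```python
# 1. Draw one value for each unknown from its prior
tau_draw = np.random.randint(0, n_years + 1)   # switchpoint ~ DiscreteUniform(0, n_years)
early_draw = np.random.exponential(1.0)        # early_mean ~ Exponential(1), scale = 1/lam = 1
late_draw = np.random.exponential(1.0)         # late_mean ~ Exponential(1)

# 2. Simulate a disaster series conditional on those draws
rate_draw = np.where(tau_draw >= np.arange(n_years), early_draw, late_draw)
simulated_disasters = np.random.poisson(rate_draw)
```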
PyMC3 Variables
Each of the built-in statistical variables is a subclass of the generic Distribution class in PyMC3. The Distribution carries relevant attributes about the probability distribution, such as the data type (called dtype), any relevant transformations (transform, see below), and initial values (init_value).
End of explanation
plt.hist(switchpoint.random(size=1000))
Explanation: PyMC's built-in distribution variables can also be used to generate random values from that variable. For example, the switchpoint, which is a discrete uniform random variable, can generate random draws:
End of explanation
early_mean.transformed
Explanation: As we noted earlier, some variables have undergone transformations prior to sampling. Such variables have a transformed attribute that points to the variable they have been transformed into.
End of explanation
switchpoint.distribution
Explanation: Variables will usually have an associated distribution, as determined by the constructor used to create it. For example, the switchpoint variable was created by calling DiscreteUniform(). Hence, its distribution is DiscreteUniform:
End of explanation
type(switchpoint)
type(disasters)
Explanation: As with all Python objects, the underlying type of a variable can be exposed with the type() function:
End of explanation
switchpoint.logp({'switchpoint':55, 'early_mean_log':1, 'late_mean_log':1})
Explanation: We will learn more about these types in an upcoming section.
Variable log-probabilities
All PyMC3 stochastic variables can evaluate their probability mass or density functions at a particular value, given the values of their parents. The logarithm of a stochastic object's probability mass or density can be
accessed via the logp method.
End of explanation
disasters.logp({'switchpoint':55, 'early_mean_log':1, 'late_mean_log':1})
Explanation: For vector-valued variables like disasters, the logp method returns the sum of the log-probabilities (or log-densities) of all elements of the value, i.e. their joint log-probability.
End of explanation
with disaster_model:
trace = sample(2000)
Explanation: Custom variables
Though we created the variables in disaster_model using well-known probability distributions that are available in PyMC3, it's possible to create custom distributions by wrapping functions that compute an arbitrary log-probability using the DensityDist function. For example, our initial example showed an exponential survival function, which accounts for censored data. If we pass this function as the logp argument for DensityDist, we can use it as the data likelihood in a survival model:
```python
def logp(failure, value):
return (failure * log(lam) - lam * value).sum()
x = DensityDist('x', logp, observed={'failure':failure, 'value':t})
```
Users are thus not
limited to the set of statistical distributions provided by PyMC.
Fitting the model with MCMC
PyMC3's sample function will fit probability models (linked collections of variables) like ours using Markov chain Monte Carlo (MCMC) sampling. Unless we manually assign particular algorithms to variables in our model, PyMC will assign algorithms that it deems appropriate (it usually does a decent job of this):
End of explanation
trace
Explanation: This returns the Markov chain of draws from the model in a data structure called a trace.
End of explanation
help(sample)
Explanation: The sample() function always takes at least one argument, draws, which specifies how many samples to draw. However, there are a number of additional optional arguments that are worth knowing about:
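For example, a call that overrides a few commonly used keyword arguments might look like the sketch below; the particular values are illustrative assumptions only, and help(sample) above shows the full signature of the installed version:
```python
with disaster_model:
    trace = sample(2000,                       # number of samples to draw
                   start={'switchpoint': 50},  # optional starting point for the chain
                   random_seed=42)             # seed for reproducibility
```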
End of explanation
from pymc3 import Slice
with disaster_model:
trace = sample(1000, step=Slice(vars=[early_mean, late_mean]))
Explanation: The step argument is what allows users to manually override the sampling algorithms used to fit the model. For example, if we wanted to use a slice sampler to sample the early_mean and late_mean variables, we could specify it:
End of explanation
trace['late_mean']
Explanation: Accessing the samples
The output of the sample function is a MultiTrace object, which stores the sequence of samples for each variable in the model. These traces can be accessed using dict-style indexing:
End of explanation
trace['late_mean', -5:]
Explanation: The trace can also be sliced using the NumPy array slice [start:stop:step].
End of explanation
plt.hist(trace['late_mean']);
Explanation: Sampling output
You can examine the marginal posterior of any variable by plotting a
histogram of its trace:
End of explanation
from pymc3 import traceplot
traceplot(trace[500:], varnames=['early_mean', 'late_mean', 'switchpoint']);
Explanation: PyMC has its own plotting functionality dedicated to plotting MCMC output. For example, we can obtain a time series plot of the trace and a histogram using traceplot:
End of explanation
from pymc3 import summary
summary(trace[500:], varnames=['early_mean', 'late_mean'])
Explanation: The left-hand pane of each figure shows the marginal posterior distribution of the parameter (as a histogram or kernel density estimate), while the right-hand pane shows the temporal series of the samples. The trace is useful for evaluating and diagnosing the algorithm's performance, while the distribution plot is useful for visualizing the posterior.
For a non-graphical summary of the posterior, simply use the summary function.
End of explanation |
4,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dual Momentum Sector Rotation (DMSR)
'Relative momentum looks at price strength with respect to other assets.
Absolute momentum uses an asset’s own past performance to infer future
performance. Absolute momentum can reduce downside exposure as well
enhance returns. The best approach is to use both types of momentum
together. That is what dual momentum is all about.'
https
Step1: Some global data
Step2: Run Strategy
Step3: View logs
Step4: Generate strategy stats - display all available stats
Step5: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
Step6: Plot Equity Curves
Step7: Bar Graph | Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
import strategy
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots.
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
Explanation: Dual Momentum Sector Rotation (DMSR)
'Relative momentum looks at price strength with respect to other assets.
Absolute momentum uses an asset’s own past performance to infer future
performance. Absolute momentum can reduce downside exposure as well
enhance returns. The best approach is to use both types of momentum
together. That is what dual momentum is all about.'
https://www.optimalmomentum.com/momentum/
Buy Signal: When the S&P 500 is above its 10-month simple moving average, buy the sectors with the biggest gains over a three-month timeframe that (optionally) have positive absolute momentum.
Sell Signal: (Optionally) Exit all positions when the S&P 500 moves below its 10-month simple moving average on a monthly closing basis, or (optionally) exit a single position if it has negative absolute momentum.
Rebalance: Once per month, sell sectors that fall out of the top tier (three) and buy the sectors that move into the top tier (two or three).
https://school.stockcharts.com/doku.php?id=trading_strategies:sector_rotation_roc
https://robotwealth.com/dual-momentum-review/
You can reproduce the results on robowealth by setting the 'end' date to (2017, 1, 1). You can also note that these methods have NOT done so well since 2018, and especially didn't handle the COVID downturn very well.
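To make the ranking and regime-filter rules above concrete, here is a minimal illustrative sketch; it is not the pinkfish strategy module used in this notebook, and the function name and DataFrame layout are assumptions for illustration only:
```python
import pandas as pd

def dual_momentum_picks(monthly_closes, lookback=3, top_tier=3):
    # monthly_closes: DataFrame of month-end closes; columns are symbols, including 'SPY'
    # relative momentum: trailing `lookback`-month return of each sector
    momentum = monthly_closes.pct_change(lookback).iloc[-1].drop('SPY')
    # regime filter: require SPY to close above its 10-month simple moving average
    spy = monthly_closes['SPY']
    regime_ok = spy.iloc[-1] > spy.rolling(10).mean().iloc[-1]
    # absolute momentum: keep only top-tier sectors with a positive trailing return
    picks = momentum.nlargest(top_tier)
    picks = picks[picks > 0]
    return list(picks.index) if regime_ok else []
```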
End of explanation
SP500_Sectors = ['SPY', 'XLB', 'XLE', 'XLF', 'XLI', 'XLK', 'XLP', 'XLU', 'XLV', 'XLY']
Other_Sectors = ['RSP', 'DIA', 'IWM', 'QQQ', 'DAX', 'EEM', 'TLT', 'GLD', 'XHB']
Diversified_Assets = ['SPY', 'TLT', 'NLY', 'GLD']
Diversified_Assets_Reddit = ['IWB', 'IEV', 'EWJ', 'EPP', 'IEF', 'SHY', 'GLD']
Robot_Dual_Momentum_Equities = ['SPY', 'CWI']
Robot_Dual_Momentum_Bonds = ['CSJ', 'HYG']
Robot_Dual_Momentum_Equities_Bonds = ['SPY', 'AGG']
Robot_Wealth = ['IWM', 'SPY', 'VGK', 'IEV', 'EWJ', 'EPP', 'IEF', 'SHY', 'GLD']
# Pick one of the above
symbols = SP500_Sectors
capital = 10000
start = datetime.datetime(2007, 1, 1)
#start = datetime.datetime(*pf.SP500_BEGIN)
end = datetime.datetime.now()
#end = datetime.datetime(2019, 12, 1)
options = {
'use_adj' : True,
'use_cache' : True,
'lookback': 6,
'margin': 1,
'use_absolute_mom': False,
'use_regime_filter': False,
'top_tier': 2
#'top_tier': int(len(symbols)/2)
}
options
Explanation: Some global data
End of explanation
s = strategy.Strategy(symbols, capital, start, end, options)
s.run()
Explanation: Run Strategy
End of explanation
s.rlog.head()
s.tlog.tail()
s.dbal.tail()
Explanation: View logs
End of explanation
pf.print_full(s.stats)
Explanation: Generate strategy stats - display all available stats
End of explanation
benchmark = pf.Benchmark('SPY', s.capital, s.start, s.end, use_adj=True)
benchmark.run()
Explanation: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
End of explanation
pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)
Explanation: Plot Equity Curves: Strategy vs Benchmark
End of explanation
df = pf.plot_bar_graph(s.stats, benchmark.stats)
df
Explanation: Bar Graph: Strategy vs Benchmark
End of explanation |
4,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
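For a property with cardinality 0.N such as the code languages above, the notebook convention is assumed here to be one DOC.set_value call per entry; the languages listed are placeholders only.
# ILLUSTRATIVE EXAMPLE ONLY - list the languages actually used by your code base.
DOC.set_value("Fortran 90")
DOC.set_value("C")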
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
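A short STRING property like the resolution name above can be filled directly with one of the styles suggested in the description, e.g. an ORCA-type label (placeholder only):
# ILLUSTRATIVE EXAMPLE ONLY - use your group's own resolution label.
DOC.set_value("ORCA025")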
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
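INTEGER properties take a plain number; the figure below is hypothetical (roughly what a 1442 x 1021 ORCA025-type grid would give) and is not meant as a reference value.
# ILLUSTRATIVE EXAMPLE ONLY - total XY points of your computational grid.
DOC.set_value(1472282)  # hypothetical example, e.g. a 1442 x 1021 grid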
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
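Longer free-text answers such as the tuning overview above can be passed as a multi-line Python string; the wording below is invented purely to show the mechanics and should be replaced by your group's actual summary.
# ILLUSTRATIVE EXAMPLE ONLY - replace with your group's actual tuning summary.
DOC.set_value(
    "Tuning targeted the global mean SST and sea-ice extent. "
    "Process-oriented metrics were given lower weight than climate metrics, "
    "and no parameter had to be pushed to the edge of its plausible range."
)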
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
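Because the conservation-scheme property above has cardinality 1.N, more than one of the valid choices can be recorded - again assuming the repeated-call convention, and with placeholder selections:
# ILLUSTRATIVE EXAMPLE ONLY - record every quantity your schemes conserve.
DOC.set_value("Salt")
DOC.set_value("Volume of ocean")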
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
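The tracer time step above is given in seconds; a one-hour step is shown purely as an illustration.
# ILLUSTRATIVE EXAMPLE ONLY - use your model's actual tracer time step.
DOC.set_value(3600)  # seconds (placeholder)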
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
4,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: Input Dataset
The dataset that will be used for training is the TCIA CBIS-DDSM dataset. This dataset contains ~2500 mammography images in DICOM format. Each image is given a BI-RADS breast density score from 1 to 4. In this tutorial, we will build a binary classifier that distinguishes between breast density "2" (scattered density) and "3" (heterogeneously dense). These are the two most common and variably assigned scores. In the literature, this is said to be particularly difficult for radiologists to consistently distinguish.
Step2: Next, we are going to transfer the DICOM instances to the Cloud Healthcare API.
Note
Step3: Explore the Cloud Healthcare DICOM dataset (optional)
This is an optional section to explore the Cloud Healthcare DICOM dataset. In the following code, we simply just list the studies that we have loaded into the Cloud Healthcare API. You can modify the num_of_studies_to_print parameter to print as many studies as desired.
Step4: Convert DICOM to JPEG
The ML model that we will build requires that the dataset be in JPEG. We will leverage the Cloud Healthcare API to transcode DICOM to JPEG.
First we will create a Google Cloud Storage bucket to hold the output JPEG files. Next, we will use the ExportDicomData API to transform the DICOMs to JPEGs.
Step5: Next we will convert the DICOMs to JPEGs using the ExportDicomData.
Step6: Meanwhile, you should be able to observe the JPEG images being added to your Google Cloud Storage bucket.
Next, we will join the training data stored in Google Cloud Storage with the labels in the TCIA website. The output of this step is a CSV file that is input to AutoML. This CSV contains a list of pairs of (IMAGE_PATH, LABEL).
Step7: Training
This section will focus on using AutoML through its API. AutoML can also be used through the user interface found here. The steps below can all be done through the web UI as well.
We will use AutoML Vision to train the classification model. AutoML provides a fully managed solution for training the model. All we will do is input the list of input images and labels. The trained model in AutoML will be able to classify the mammography images as either "2" (scattered density) or "3" (heterogeneously dense).
As a first step, we will create an AutoML dataset.
Step8: Next, we will import the CSV that contains the list of (IMAGE_PATH, LABEL) pairs into AutoML. Please ignore errors regarding an existing ground truth.
Step9: The output of the previous step is an operation whose status we will need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes to complete, so we will wait until completion.
Step10: Next, we will train the model to perform classification. We will set the training budget to be a maximum of 1hr (but this can be modified below). The cost of using AutoML can be found here. Typically, the longer the model is trained for, the more accurate it will be.
Step11: The output of the previous step is also an operation whose status we will need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes to complete.
Step12: Next, we will check out the accuracy metrics for the trained model. The following command will return the AUC (ROC), precision and recall for the model, for various ML classification thresholds.
Step13: Inference
To allow medical imaging ML models to be easily integrated into clinical workflows, an inference module can be used. A standalone modality, a PACS system or a DICOM router can push DICOM instances into Cloud Healthcare DICOM stores, allowing ML models to be triggered for inference. These inference results can then be structured into various DICOM formats (e.g. DICOM structured reports) and stored in the Cloud Healthcare API, which can then be retrieved by the customer.
The inference module is built as a Docker container and deployed using Kubernetes, allowing you to easily scale your deployment. The dataflow for inference can look as follows (see corresponding diagram below)
Step14: Next, we will build the inference module using the Cloud Build API. This will create a Docker container that will be stored in Google Container Registry. The inference module code is found in inference.py. The build script used to build the Docker container for this module is cloudbuild.yaml. Progress of the build may be found on the Cloud Build dashboard.
Step15: Next, we will deploy the inference module to Kubernetes.
Then we create a Kubernetes Cluster and a Deployment for the inference module.
Step16: Next, we will store a mammography DICOM instance from the TCIA dataset to the DICOM store. This is the image that we will request inference for. Pushing this instance to the DICOM store will result in a Pubsub message, which will trigger the inference module.
Step17: You should be able to observe the inference module's logs by running the following command. In the logs, you should observe that the inference module successfully received the Pubsub message and ran inference on the DICOM instance. The logs should also include the inference results. It can take a few minutes for the Kubernetes deployment to start up, so you may need to run this a few times.
Step18: You can also query the Cloud Healthcare DICOMWeb API (using QIDO-RS) to see that the DICOM structured report has been inserted for the study. The structured report contents can be found under tag "0040A730".
You can optionally also use WADO-RS to retrieve the instance (e.g. for viewing). | Python Code:
%%bash
pip3 install git+https://github.com/GoogleCloudPlatform/healthcare.git#subdirectory=imaging/ml/toolkit
pip3 install dicomweb-client
pip3 install pydicom
Explanation: Copyright 2018 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
This tutorial is for educational purposes only and is not intended for use in clinical diagnosis or clinical decision-making or for any other clinical use.
Training/Inference on Breast Density Classification Model on AutoML Vision
The goal of this tutorial is to train, deploy and run inference on a breast density classification model. Breast density is thought to be a factor for an increase in the risk for breast cancer. The tutorial will emphasize using the Cloud Healthcare API to store, retrieve and transcode medical images (in DICOM format) in a managed and scalable way, and will focus on using Cloud AutoML Vision to scalably train and serve the model.
Note: This is the AutoML version of the Cloud ML Engine Codelab found here.
Requirements
A Google Cloud project.
Project has Cloud Healthcare API enabled.
Project has Cloud AutoML API enabled.
Project has Cloud Build API enabled.
Project has Kubernetes engine API enabled.
Project has Cloud Resource Manager API enabled.
Notebook dependencies
We will need to install the hcls_imaging_ml_toolkit package found here. This toolkit helps make working with DICOM objects and the Cloud Healthcare API easier.
In addition, we will install dicomweb-client to help us interact with the DIOCOMWeb API and pydicom which is used to help up construct DICOM objects.
End of explanation
project_id = "MY_PROJECT" # @param
location = "us-central1"
dataset_id = "MY_DATASET" # @param
dicom_store_id = "MY_DICOM_STORE" # @param
# Input data used by AutoML must be in a bucket with the following format.
automl_bucket_name = "gs://" + project_id + "-vcm"
%%bash -s {project_id} {location} {automl_bucket_name}
# Create bucket.
gsutil -q mb -c regional -l $2 $3
# Allow Cloud Healthcare API to write to bucket.
PROJECT_NUMBER=`gcloud projects describe $1 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
COMPUTE_ENGINE_SERVICE_ACCOUNT="${PROJECT_NUMBER}[email protected]"
gsutil -q iam ch serviceAccount:${SERVICE_ACCOUNT}:objectAdmin $3
gsutil -q iam ch serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT}:objectAdmin $3
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${SERVICE_ACCOUNT} --role=roles/pubsub.publisher
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/pubsub.admin
# Allow compute service account to create datasets and dicomStores.
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.dicomStoreAdmin
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.datasetAdmin
import json
import os
import google.auth
from google.auth.transport.requests import AuthorizedSession
from hcls_imaging_ml_toolkit import dicom_path
credentials, project = google.auth.default()
authed_session = AuthorizedSession(credentials)
# Path to Cloud Healthcare API.
HEALTHCARE_API_URL = 'https://healthcare.googleapis.com/v1'
# Create Cloud Healthcare API dataset.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets?dataset_id=' + dataset_id)
headers = {'Content-Type': 'application/json'}
resp = authed_session.post(path, headers=headers)
assert resp.status_code == 200, 'error creating Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Create Cloud Healthcare API DICOM store.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets', dataset_id, 'dicomStores?dicom_store_id=' + dicom_store_id)
resp = authed_session.post(path, headers=headers)
assert resp.status_code == 200, 'error creating DICOM store, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
dicom_store_path = dicom_path.Path(project_id, location, dataset_id, dicom_store_id)
Explanation: Input Dataset
The dataset that will be used for training is the TCIA CBIS-DDSM dataset. This dataset contains ~2500 mammography images in DICOM format. Each image is given a BI-RADS breast density score from 1 to 4. In this tutorial, we will build a binary classifier that distinguishes between breast density "2" (scattered density) and "3" (heterogeneously dense). These are the two most common and variably assigned scores. In the literature, this is said to be particularly difficult for radiologists to consistently distinguish.
End of explanation
# Store DICOM instances in Cloud Healthcare API.
path = 'https://healthcare.googleapis.com/v1/{}:import'.format(dicom_store_path)
headers = {'Content-Type': 'application/json'}
body = {
'gcsSource': {
'uri': 'gs://gcs-public-data--healthcare-tcia-cbis-ddsm/dicom/**'
}
}
resp = authed_session.post(path, headers=headers, json=body)
assert resp.status_code == 200, 'error importing DICOM instances, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
response = json.loads(resp.text)
operation_name = response['name']
import time
def wait_for_operation_completion(path, timeout, sleep_time=30):
success = False
while time.time() < timeout:
print('Waiting for operation completion...')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error polling for Operation results, code: {0}, response: {1}'.format(resp.status_code, resp.text)
response = json.loads(resp.text)
if 'done' in response:
if response['done'] == True and 'error' not in response:
success = True;
break
time.sleep(sleep_time)
print('Full response:\n{0}'.format(resp.text))
assert success, "operation did not complete successfully in time limit"
print('Success!')
return response
path = os.path.join(HEALTHCARE_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
_ = wait_for_operation_completion(path, timeout)
Explanation: Next, we are going to transfer the DICOM instances to the Cloud Healthcare API.
Note: We are transferring >100GB of data so this will take some time to complete
End of explanation
num_of_studies_to_print = 2 # @param
path = os.path.join(HEALTHCARE_API_URL, dicom_store_path.dicomweb_path_str, 'studies')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error querying Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
response = json.loads(resp.text)
print(json.dumps(response[:num_of_studies_to_print], indent=2))
Explanation: Explore the Cloud Healthcare DICOM dataset (optional)
This is an optional section to explore the Cloud Healthcare DICOM dataset. In the following code, we simply list the studies that we have loaded into the Cloud Healthcare API. You can modify the num_of_studies_to_print parameter to print as many studies as desired.
End of explanation
# Folder to store input images for AutoML Vision.
jpeg_folder = automl_bucket_name + "/images/"
Explanation: Convert DICOM to JPEG
The ML model that we will build requires that the dataset be in JPEG. We will leverage the Cloud Healthcare API to transcode DICOM to JPEG.
First we will create a Google Cloud Storage bucket to hold the output JPEG files. Next, we will use the ExportDicomData API to transform the DICOMs to JPEGs.
End of explanation
%%bash -s {jpeg_folder} {project_id} {location} {dataset_id} {dicom_store_id}
gcloud beta healthcare --project $2 dicom-stores export gcs $5 --location=$3 --dataset=$4 --mime-type="image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50" --gcs-uri-prefix=$1
Explanation: Next we will convert the DICOMs to JPEGs using the ExportDicomData.
End of explanation
# tensorflow==1.15.0 to have same versions in all environments - dataflow, automl, ai-platform
!pip install tensorflow==1.15.0 --ignore-installed
# CSV to hold (IMAGE_PATH, LABEL) list.
input_data_csv = automl_bucket_name + "/input.csv"
import csv
import os
import re
from tensorflow.python.lib.io import file_io
import scripts.tcia_utils as tcia_utils
# Get map of study_uid -> file paths.
path_list = file_io.get_matching_files(os.path.join(jpeg_folder, '*/*/*'))
study_uid_to_file_paths = {}
pattern = r'^{0}(?P<study_uid>[^/]+)/(?P<series_uid>[^/]+)/(?P<instance_uid>.*)'.format(jpeg_folder)
for path in path_list:
match = re.search(pattern, path)
study_uid_to_file_paths[match.group('study_uid')] = path
# Get map of study_uid -> labels.
study_uid_to_labels = tcia_utils.GetStudyUIDToLabelMap()
# Join the two maps, output results to CSV in Google Cloud Storage.
with file_io.FileIO(input_data_csv, 'w') as f:
writer = csv.writer(f, delimiter=',')
for study_uid, label in study_uid_to_labels.items():
if study_uid in study_uid_to_file_paths:
writer.writerow([study_uid_to_file_paths[study_uid], label])
Explanation: Meanwhile, you should be able to observe the JPEG images being added to your Google Cloud Storage bucket.
Next, we will join the training data stored in Google Cloud Storage with the labels in the TCIA website. The output of this step is a CSV file that is input to AutoML. This CSV contains a list of pairs of (IMAGE_PATH, LABEL).
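For reference, each row of the resulting CSV simply pairs a GCS image path with its label; a row might look like the following (the UIDs shown are placeholders, not real values):
gs://<project-id>-vcm/images/<study_uid>/<series_uid>/<instance_uid>,2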
End of explanation
automl_dataset_display_name = "MY_AUTOML_DATASET" # @param
import json
import os
# Path to AutoML API.
AUTOML_API_URL = 'https://automl.googleapis.com/v1beta1'
# Path to request creation of AutoML dataset.
path = os.path.join(AUTOML_API_URL, 'projects', project_id, 'locations', location, 'datasets')
# Headers (request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
config = {'display_name': automl_dataset_display_name, 'image_classification_dataset_metadata': {'classification_type': 'MULTICLASS'}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'error creating AutoML dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record the AutoML dataset name.
response = json.loads(resp.text)
automl_dataset_name = response['name']
Explanation: Training
This section will focus on using AutoML through its API. AutoML can also be used through the user interface found here. The steps below can all be done through the web UI.
We will use AutoML Vision to train the classification model. AutoML provides a fully managed solution for training the model. All we will do is input the list of input images and labels. The trained model in AutoML will be able to classify the mammography images as either "2" (scattered density) or "3" (heterogeneously dense).
As a first step, we will create an AutoML dataset.
End of explanation
# Path to request import into AutoML dataset.
path = os.path.join(AUTOML_API_URL, automl_dataset_name + ':importData')
# Body (encoded in JSON format).
config = {'input_config': {'gcs_source': {'input_uris': [input_data_csv]}}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'error importing AutoML dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record operation_name so we can poll for it later.
response = json.loads(resp.text)
operation_name = response['name']
Explanation: Next, we will import the CSV that contains the list of (IMAGE_PATH, LABEL) pairs into AutoML. Please ignore errors regarding an existing ground truth.
End of explanation
path = os.path.join(AUTOML_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
_ = wait_for_operation_completion(path, timeout)
Explanation: The output of the previous step is an operation whose status we will need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes to complete, so we will wait until completion.
End of explanation
# Name of the model.
model_display_name = "MY_MODEL_NAME" # @param
# Training budget (1 hr).
training_budget = 1 # @param
# Path to request import into AutoML dataset.
path = os.path.join(AUTOML_API_URL, 'projects', project_id, 'locations', location, 'models')
# Headers (request in JSON format).
headers = {'Content-Type': 'application/json'}
# Body (encoded in JSON format).
automl_dataset_id = automl_dataset_name.split('/')[-1]
config = {'display_name': model_display_name, 'dataset_id': automl_dataset_id, 'image_classification_model_metadata': {'train_budget': training_budget}}
resp = authed_session.post(path, headers=headers, json=config)
assert resp.status_code == 200, 'error creating AutoML model, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
# Record operation_name so we can poll for it later.
response = json.loads(resp.text)
operation_name = response['name']
Explanation: Next, we will train the model to perform classification. We will set the training budget to be a maximum of 1hr (but this can be modified below). The cost of using AutoML can be found here. Typically, the longer the model is trained for, the more accurate it will be.
End of explanation
path = os.path.join(AUTOML_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
sleep_time = 5*60 # Update each 5 minutes.
response = wait_for_operation_completion(path, timeout, sleep_time)
full_model_name = response['response']['name']
# google.cloud.automl to make api calls to Cloud AutoML
!pip install google-cloud-automl
from google.cloud import automl_v1
client = automl_v1.AutoMlClient()
response = client.deploy_model(full_model_name)
print(u'Model deployment finished. {}'.format(response.result()))
Explanation: The output of the previous step is also an operation whose status we will need to poll. We will poll until the operation's "done" field is set to true. This will take a few minutes to complete.
End of explanation
# Path to request to get model accuracy metrics.
path = os.path.join(AUTOML_API_URL, full_model_name, 'modelEvaluations')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error getting AutoML model evaluations, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
Explanation: Next, we will check out the accuracy metrics for the trained model. The following command will return the AUC (ROC), precision and recall for the model, for various ML classification thresholds.
End of explanation
# Pubsub config.
pubsub_topic_id = "MY_PUBSUB_TOPIC_ID" # @param
pubsub_subscription_id = "MY_PUBSUB_SUBSCRIPTION_ID" # @param
# DICOM Store for store DICOM used for inference.
inference_dicom_store_id = "MY_INFERENCE_DICOM_STORE" # @param
pubsub_subscription_name = "projects/" + project_id + "/subscriptions/" + pubsub_subscription_id
inference_dicom_store_path = dicom_path.FromPath(dicom_store_path, store_id=inference_dicom_store_id)
%%bash -s {pubsub_topic_id} {pubsub_subscription_id} {project_id} {location} {dataset_id} {inference_dicom_store_id}
# Create Pubsub channel.
gcloud beta pubsub topics create $1
gcloud beta pubsub subscriptions create $2 --topic $1
# Create a Cloud Healthcare DICOM store that published on given Pubsub topic.
TOKEN=`gcloud beta auth application-default print-access-token`
NOTIFICATION_CONFIG="{notification_config: {pubsub_topic: \"projects/$3/topics/$1\"}}"
curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${TOKEN}" -d "${NOTIFICATION_CONFIG}" https://healthcare.googleapis.com/v1/projects/$3/locations/$4/datasets/$5/dicomStores?dicom_store_id=$6
# Enable Cloud Healthcare API to publish on given Pubsub topic.
PROJECT_NUMBER=`gcloud projects describe $3 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
gcloud beta pubsub topics add-iam-policy-binding $1 --member="serviceAccount:${SERVICE_ACCOUNT}" --role="roles/pubsub.publisher"
Explanation: Inference
To allow medical imaging ML models to be easily integrated into clinical workflows, an inference module can be used. A standalone modality, a PACS system or a DICOM router can push DICOM instances into Cloud Healthcare DICOM stores, allowing ML models to be triggered for inference. These inference results can then be structured into various DICOM formats (e.g. DICOM structured reports) and stored in the Cloud Healthcare API, from which they can be retrieved by the customer.
The inference module is built as a Docker container and deployed using Kubernetes, allowing you to easily scale your deployment. The dataflow for inference can look as follows (see corresponding diagram below):
Client application uses STOW-RS to push a new DICOM instance to the Cloud Healthcare DICOMWeb API.
The insertion of the DICOM instance triggers a Cloud Pubsub message to be published. The inference module will pull incoming Pubsub messages and will receive a message for the previously inserted DICOM instance.
The inference module will retrieve the instance in JPEG format from the Cloud Healthcare API using WADO-RS (a minimal sketch of such a retrieval appears after this list).
The inference module will send the JPEG bytes to the model hosted on AutoML.
AutoML will return the prediction back to the inference module.
The inference module will package the prediction into a DICOM instance. This can potentially be a DICOM structured report, presentation state, or even burnt text on the image. In this codelab, we will focus on just DICOM structured reports, specifically Comprehensive Structured Reports. The structured report is then stored back in the Cloud Healthcare API using STOW-RS.
The client application can query for (or retrieve) the structured report by using QIDO-RS or WADO-RS. Pubsub can also be used by the client application to poll for the newly created DICOM structured report instance.
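As a concrete illustration of the WADO-RS retrieval in step 3, a minimal sketch against the Cloud Healthcare API might look like the following. The study/series/instance UIDs are placeholders for an instance already in the store, and the rendered endpoint with an image/jpeg Accept header is assumed here:
study_uid, series_uid, instance_uid = '<study-uid>', '<series-uid>', '<instance-uid>'  # placeholders
rendered_url = os.path.join(
    HEALTHCARE_API_URL, inference_dicom_store_path.dicomweb_path_str,
    'studies', study_uid, 'series', series_uid, 'instances', instance_uid, 'rendered')
resp = authed_session.get(rendered_url, headers={'Accept': 'image/jpeg'})
jpeg_bytes = resp.content  # bytes that could then be sent to the AutoML model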
To begin, we will create a new DICOM store that will store our inference source (DICOM mammography instance) and results (DICOM structured report). In order to enable Pubsub notifications to be triggered on inserted instances, we will give the DICOM store a Pubsub channel to publish on.
End of explanation
%%bash -s {project_id}
PROJECT_ID=$1
gcloud builds submit --config scripts/inference/cloudbuild.yaml --timeout 1h scripts/inference
Explanation: Next, we will build the inference module using the Cloud Build API. This will create a Docker container that will be stored in Google Container Registry. The inference module code is found in inference.py. The build script used to build the Docker container for this module is cloudbuild.yaml. Progress of the build can be monitored on the Cloud Build dashboard.
End of explanation
%%bash -s {project_id} {location} {pubsub_subscription_name} {full_model_name} {inference_dicom_store_path}
gcloud container clusters create inference-module --region=$2 --scopes https://www.googleapis.com/auth/cloud-platform --num-nodes=1
PROJECT_ID=$1
SUBSCRIPTION_PATH=$3
MODEL_PATH=$4
INFERENCE_DICOM_STORE_PATH=$5
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: inference-module
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: inference-module
spec:
containers:
- name: inference-module
image: gcr.io/${PROJECT_ID}/inference-module:latest
command:
- "/opt/inference_module/bin/inference_module"
- "--subscription_path=${SUBSCRIPTION_PATH}"
- "--model_path=${MODEL_PATH}"
- "--dicom_store_path=${INFERENCE_DICOM_STORE_PATH}"
- "--prediction_service=AutoML"
EOF
Explanation: Next, we will deploy the inference module to Kubernetes.
Then we create a Kubernetes Cluster and a Deployment for the inference module.
End of explanation
# DICOM Study/Series UID of input mammography image that we'll push for inference.
input_mammo_study_uid = "1.3.6.1.4.1.9590.100.1.2.85935434310203356712688695661986996009"
input_mammo_series_uid = "1.3.6.1.4.1.9590.100.1.2.374115997511889073021386151921807063992"
input_mammo_instance_uid = "1.3.6.1.4.1.9590.100.1.2.289923739312470966435676008311959891294"
from google.cloud import storage
from dicomweb_client.api import DICOMwebClient
from dicomweb_client import session_utils
import pydicom
storage_client = storage.Client()
bucket = storage_client.bucket('gcs-public-data--healthcare-tcia-cbis-ddsm', user_project=project_id)
blob = bucket.blob("dicom/{}/{}/{}.dcm".format(input_mammo_study_uid,input_mammo_series_uid,input_mammo_instance_uid))
blob.download_to_filename('example.dcm')
dataset = pydicom.dcmread('example.dcm')
session = session_utils.create_session_from_gcp_credentials()
study_path = dicom_path.FromPath(inference_dicom_store_path, study_uid=input_mammo_study_uid)
dicomweb_url = os.path.join(HEALTHCARE_API_URL, study_path.dicomweb_path_str)
dcm_client = DICOMwebClient(dicomweb_url, session)
dcm_client.store_instances(datasets=[dataset])
Explanation: Next, we will store a mammography DICOM instance from the TCIA dataset to the DICOM store. This is the image that we will request inference for. Pushing this instance to the DICOM store will result in a Pubsub message, which will trigger the inference module.
End of explanation
!kubectl logs -l app=inference-module
Explanation: You should be able to observe the inference module's logs by running the following command. In the logs, you should observe that the inference module successfully received the Pubsub message and ran inference on the DICOM instance. The logs should also include the inference results. It can take a few minutes for the Kubernetes deployment to start up, so you may need to run this a few times.
End of explanation
dcm_client.search_for_instances(study_path.study_uid, fields=['all'])
Explanation: You can also query the Cloud Healthcare DICOMWeb API (using QIDO-RS) to see that the DICOM structured report has been inserted for the study. The structured report contents can be found under tag "0040A730".
You can optionally also use WADO-RS to receive the instance (e.g. for viewing).
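A minimal sketch of such a WADO-RS retrieval of the original mammography instance is shown below; it assumes the store will return a single instance as application/dicom when asked for transfer-syntax=*:
wado_url = os.path.join(
    HEALTHCARE_API_URL, inference_dicom_store_path.dicomweb_path_str,
    'studies', input_mammo_study_uid, 'series', input_mammo_series_uid,
    'instances', input_mammo_instance_uid)
resp = authed_session.get(wado_url, headers={'Accept': 'application/dicom; transfer-syntax=*'})
print(resp.status_code, len(resp.content))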
End of explanation |
4,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
Step6: Let's run it | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
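As a small illustration of the 1-hot trick used in reformat above (a quick sketch):
example_labels = np.array([0, 2, 9])
print((np.arange(num_labels) == example_labels[:, None]).astype(np.float32))
# Each label becomes a row of a 10-column one-hot matrix.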
End of explanation
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
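A minimal, self-contained sketch of the placeholder/feed mechanism (TF 1.x style, matching the rest of this notebook):
demo_graph = tf.Graph()
with demo_graph.as_default():
    ph = tf.placeholder(tf.float32, shape=(None, 2))  # shape only partially known up front
    doubled = ph * 2.0
with tf.Session(graph=demo_graph) as demo_session:
    # The actual data is supplied at run time through feed_dict.
    print(demo_session.run(doubled, feed_dict={ph: np.ones((3, 2))}))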
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run it:
End of explanation |
4,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Simple audio recognition
Step2: Import the mini Speech Commands dataset
To save time with data loading, you will be working with a smaller version of the Speech Commands dataset. The original dataset consists of over 105,000 audio files in the <a href="https
Step3: The dataset's audio clips are stored in eight folders corresponding to each speech command
Step4: Extract the audio clips into a list called filenames, and shuffle it
Step5: Split filenames into training, validation and test sets using a 80
Step6: Read the audio files and their labels
In this section you will preprocess the dataset, creating decoded tensors for the waveforms and the corresponding labels. Note that
Step7: Now, let's define a function that preprocesses the dataset's raw WAV audio files into audio tensors
Step8: Define a function that creates labels using the parent directories for each file
Step9: Define another helper function—get_waveform_and_label—that puts it all together
Step10: Build the training set to extract the audio-label pairs
Step11: Let's plot a few audio waveforms
Step12: Convert waveforms to spectrograms
The waveforms in the dataset are represented in the time domain. Next, you'll transform the waveforms from the time-domain signals into the time-frequency-domain signals by computing the <a href="https
Step13: Next, start exploring the data. Print the shapes of one example's tensorized waveform and the corresponding spectrogram, and play the original audio
Step14: Now, define a function for displaying a spectrogram
Step15: Plot the example's waveform over time and the corresponding spectrogram (frequencies over time)
Step16: Now, define a function that transforms the waveform dataset into spectrograms and their corresponding labels as integer IDs
Step17: Map get_spectrogram_and_label_id across the dataset's elements with Dataset.map
Step18: Examine the spectrograms for different examples of the dataset
Step19: Build and train the model
Repeat the training set preprocessing on the validation and test sets
Step20: Batch the training and validation sets for model training
Step21: Add Dataset.cache and Dataset.prefetch operations to reduce read latency while training the model
Step22: For the model, you'll use a simple convolutional neural network (CNN), since you have transformed the audio files into spectrogram images.
Your tf.keras.Sequential model will use the following Keras preprocessing layers
Step23: Configure the Keras model with the Adam optimizer and the cross-entropy loss
Step24: Train the model over 10 epochs for demonstration purposes
Step25: Let's plot the training and validation loss curves to check how your model has improved during training
Step26: Evaluate the model performance
Run the model on the test set and check the model's performance
Step27: Display a confusion matrix
Use a <a href="https
Step28: Run inference on an audio file
Finally, verify the model's prediction output using an input audio file of someone saying "no". How well does your model perform? | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import os
import pathlib
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import models
from IPython import display
# Set the seed value for experiment reproducibility.
seed = 42
tf.random.set_seed(seed)
np.random.seed(seed)
Explanation: Simple audio recognition: Recognizing keywords
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/audio/simple_audio">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/audio/simple_audio.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/audio/simple_audio.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/audio/simple_audio.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to preprocess audio files in the WAV format and build and train a basic <a href="https://en.wikipedia.org/wiki/Speech_recognition" class="external">automatic speech recognition</a> (ASR) model for recognizing ten different words. You will use a portion of the Speech Commands dataset (<a href="https://arxiv.org/abs/1804.03209" class="external">Warden, 2018</a>), which contains short (one-second or less) audio clips of commands, such as "down", "go", "left", "no", "right", "stop", "up" and "yes".
Real-world speech and audio recognition <a href="https://ai.googleblog.com/search/label/Speech%20Recognition" class="external">systems</a> are complex. But, like image classification with the MNIST dataset, this tutorial should give you a basic understanding of the techniques involved.
Setup
Import necessary modules and dependencies. Note that you'll be using <a href="https://seaborn.pydata.org/" class="external">seaborn</a> for visualization in this tutorial.
End of explanation
DATASET_PATH = 'data/mini_speech_commands'
data_dir = pathlib.Path(DATASET_PATH)
if not data_dir.exists():
tf.keras.utils.get_file(
'mini_speech_commands.zip',
origin="http://storage.googleapis.com/download.tensorflow.org/data/mini_speech_commands.zip",
extract=True,
cache_dir='.', cache_subdir='data')
Explanation: Import the mini Speech Commands dataset
To save time with data loading, you will be working with a smaller version of the Speech Commands dataset. The original dataset consists of over 105,000 audio files in the <a href="https://www.aelius.com/njh/wavemetatools/doc/riffmci.pdf" class="external">WAV (Waveform) audio file format</a> of people saying 35 different words. This data was collected by Google and released under a CC BY license.
Download and extract the mini_speech_commands.zip file containing the smaller Speech Commands datasets with tf.keras.utils.get_file:
End of explanation
commands = np.array(tf.io.gfile.listdir(str(data_dir)))
commands = commands[commands != 'README.md']
print('Commands:', commands)
Explanation: The dataset's audio clips are stored in eight folders corresponding to each speech command: no, yes, down, go, left, up, right, and stop:
End of explanation
filenames = tf.io.gfile.glob(str(data_dir) + '/*/*')
filenames = tf.random.shuffle(filenames)
num_samples = len(filenames)
print('Number of total examples:', num_samples)
print('Number of examples per label:',
len(tf.io.gfile.listdir(str(data_dir/commands[0]))))
print('Example file tensor:', filenames[0])
Explanation: Extract the audio clips into a list called filenames, and shuffle it:
End of explanation
train_files = filenames[:6400]
val_files = filenames[6400: 6400 + 800]
test_files = filenames[-800:]
print('Training set size', len(train_files))
print('Validation set size', len(val_files))
print('Test set size', len(test_files))
Explanation: Split filenames into training, validation and test sets using a 80:10:10 ratio, respectively:
End of explanation
test_file = tf.io.read_file(DATASET_PATH+'/down/0a9f9af7_nohash_0.wav')
test_audio, _ = tf.audio.decode_wav(contents=test_file)
test_audio.shape
Explanation: Read the audio files and their labels
In this section you will preprocess the dataset, creating decoded tensors for the waveforms and the corresponding labels. Note that:
Each WAV file contains time-series data with a set number of samples per second.
Each sample represents the <a href="https://en.wikipedia.org/wiki/Amplitude" class="external">amplitude</a> of the audio signal at that specific time.
In a <a href="https://en.wikipedia.org/wiki/Audio_bit_depth" class="external">16-bit</a> system, like the WAV files in the mini Speech Commands dataset, the amplitude values range from -32,768 to 32,767.
The <a href="https://en.wikipedia.org/wiki/Sampling_(signal_processing)#Audio_sampling" class="external">sample rate</a> for this dataset is 16kHz.
The shape of the tensor returned by tf.audio.decode_wav is [samples, channels], where channels is 1 for mono or 2 for stereo. The mini Speech Commands dataset only contains mono recordings.
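If you need the sample rate as well, the decoder also returns it (a quick sketch using the test file above):
_, test_sample_rate = tf.audio.decode_wav(contents=test_file)
print('Sample rate:', test_sample_rate.numpy())  # expected to be 16000 for this dataset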
End of explanation
def decode_audio(audio_binary):
  # Decode WAV-encoded audio files to `float32` tensors, normalized
  # to the [-1.0, 1.0] range. Only the audio is returned; the sample rate is discarded.
audio, _ = tf.audio.decode_wav(contents=audio_binary)
# Since all the data is single channel (mono), drop the `channels`
# axis from the array.
return tf.squeeze(audio, axis=-1)
Explanation: Now, let's define a function that preprocesses the dataset's raw WAV audio files into audio tensors:
End of explanation
def get_label(file_path):
parts = tf.strings.split(
input=file_path,
sep=os.path.sep)
# Note: You'll use indexing here instead of tuple unpacking to enable this
# to work in a TensorFlow graph.
return parts[-2]
Explanation: Define a function that creates labels using the parent directories for each file:
Split the file paths into tf.RaggedTensors (tensors with ragged dimensions—with slices that may have different lengths).
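For example (a sketch with a hypothetical file name), the label is simply the parent directory component of the path:
example_path = tf.constant('data/mini_speech_commands/no/example_file.wav')  # hypothetical path
print(get_label(example_path))  # a scalar string tensor containing b'no'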
End of explanation
def get_waveform_and_label(file_path):
label = get_label(file_path)
audio_binary = tf.io.read_file(file_path)
waveform = decode_audio(audio_binary)
return waveform, label
Explanation: Define another helper function—get_waveform_and_label—that puts it all together:
The input is the WAV audio filename.
The output is a tuple containing the audio and label tensors ready for supervised learning.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
files_ds = tf.data.Dataset.from_tensor_slices(train_files)
waveform_ds = files_ds.map(
map_func=get_waveform_and_label,
num_parallel_calls=AUTOTUNE)
Explanation: Build the training set to extract the audio-label pairs:
Create a tf.data.Dataset with Dataset.from_tensor_slices and Dataset.map, using get_waveform_and_label defined earlier.
You'll build the validation and test sets using a similar procedure later on.
End of explanation
rows = 3
cols = 3
n = rows * cols
fig, axes = plt.subplots(rows, cols, figsize=(10, 12))
for i, (audio, label) in enumerate(waveform_ds.take(n)):
r = i // cols
c = i % cols
ax = axes[r][c]
ax.plot(audio.numpy())
ax.set_yticks(np.arange(-1.2, 1.2, 0.2))
label = label.numpy().decode('utf-8')
ax.set_title(label)
plt.show()
Explanation: Let's plot a few audio waveforms:
End of explanation
def get_spectrogram(waveform):
# Zero-padding for an audio waveform with less than 16,000 samples.
input_len = 16000
waveform = waveform[:input_len]
zero_padding = tf.zeros(
[16000] - tf.shape(waveform),
dtype=tf.float32)
# Cast the waveform tensors' dtype to float32.
waveform = tf.cast(waveform, dtype=tf.float32)
# Concatenate the waveform with `zero_padding`, which ensures all audio
# clips are of the same length.
equal_length = tf.concat([waveform, zero_padding], 0)
# Convert the waveform to a spectrogram via a STFT.
spectrogram = tf.signal.stft(
equal_length, frame_length=255, frame_step=128)
# Obtain the magnitude of the STFT.
spectrogram = tf.abs(spectrogram)
# Add a `channels` dimension, so that the spectrogram can be used
# as image-like input data with convolution layers (which expect
# shape (`batch_size`, `height`, `width`, `channels`)).
spectrogram = spectrogram[..., tf.newaxis]
return spectrogram
Explanation: Convert waveforms to spectrograms
The waveforms in the dataset are represented in the time domain. Next, you'll transform the waveforms from the time-domain signals into the time-frequency-domain signals by computing the <a href="https://en.wikipedia.org/wiki/Short-time_Fourier_transform" class="external">short-time Fourier transform (STFT)</a> to convert the waveforms to <a href="https://en.wikipedia.org/wiki/Spectrogram" class="external">spectrograms</a>, which show frequency changes over time and can be represented as 2D images. You will feed the spectrogram images into your neural network to train the model.
A Fourier transform (tf.signal.fft) converts a signal to its component frequencies, but loses all time information. In comparison, STFT (tf.signal.stft) splits the signal into windows of time and runs a Fourier transform on each window, preserving some time information, and returning a 2D tensor that you can run standard convolutions on.
Create a utility function for converting waveforms to spectrograms:
The waveforms need to be of the same length, so that when you convert them to spectrograms, the results have similar dimensions. This can be done by simply zero-padding the audio clips that are shorter than one second (using tf.zeros).
When calling tf.signal.stft, choose the frame_length and frame_step parameters such that the generated spectrogram "image" is almost square. For more information on the choice of STFT parameters, refer to <a href="https://www.coursera.org/lecture/audio-signal-processing/stft-2-tjEQe" class="external">this Coursera video</a> on audio signal processing and STFT.
The STFT produces an array of complex numbers representing magnitude and phase. However, in this tutorial you'll only use the magnitude, which you can derive by applying tf.abs on the output of tf.signal.stft.
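As a rough sanity check on the output shape implied by these parameters (a sketch; it assumes the default fft_length, i.e. the smallest power of two enclosing frame_length):
samples, frame_length, frame_step = 16000, 255, 128
num_frames = 1 + (samples - frame_length) // frame_step  # 124 time frames
num_bins = 256 // 2 + 1  # 129 frequency bins for fft_length=256
print(num_frames, num_bins)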
End of explanation
for waveform, label in waveform_ds.take(1):
label = label.numpy().decode('utf-8')
spectrogram = get_spectrogram(waveform)
print('Label:', label)
print('Waveform shape:', waveform.shape)
print('Spectrogram shape:', spectrogram.shape)
print('Audio playback')
display.display(display.Audio(waveform, rate=16000))
Explanation: Next, start exploring the data. Print the shapes of one example's tensorized waveform and the corresponding spectrogram, and play the original audio:
End of explanation
def plot_spectrogram(spectrogram, ax):
if len(spectrogram.shape) > 2:
assert len(spectrogram.shape) == 3
spectrogram = np.squeeze(spectrogram, axis=-1)
# Convert the frequencies to log scale and transpose, so that the time is
# represented on the x-axis (columns).
# Add an epsilon to avoid taking a log of zero.
log_spec = np.log(spectrogram.T + np.finfo(float).eps)
height = log_spec.shape[0]
width = log_spec.shape[1]
X = np.linspace(0, np.size(spectrogram), num=width, dtype=int)
Y = range(height)
ax.pcolormesh(X, Y, log_spec)
Explanation: Now, define a function for displaying a spectrogram:
End of explanation
fig, axes = plt.subplots(2, figsize=(12, 8))
timescale = np.arange(waveform.shape[0])
axes[0].plot(timescale, waveform.numpy())
axes[0].set_title('Waveform')
axes[0].set_xlim([0, 16000])
plot_spectrogram(spectrogram.numpy(), axes[1])
axes[1].set_title('Spectrogram')
plt.show()
Explanation: Plot the example's waveform over time and the corresponding spectrogram (frequencies over time):
End of explanation
def get_spectrogram_and_label_id(audio, label):
spectrogram = get_spectrogram(audio)
label_id = tf.math.argmax(label == commands)
return spectrogram, label_id
Explanation: Now, define a function that transforms the waveform dataset into spectrograms and their corresponding labels as integer IDs:
End of explanation
spectrogram_ds = waveform_ds.map(
map_func=get_spectrogram_and_label_id,
num_parallel_calls=AUTOTUNE)
Explanation: Map get_spectrogram_and_label_id across the dataset's elements with Dataset.map:
End of explanation
rows = 3
cols = 3
n = rows*cols
fig, axes = plt.subplots(rows, cols, figsize=(10, 10))
for i, (spectrogram, label_id) in enumerate(spectrogram_ds.take(n)):
r = i // cols
c = i % cols
ax = axes[r][c]
plot_spectrogram(spectrogram.numpy(), ax)
ax.set_title(commands[label_id.numpy()])
ax.axis('off')
plt.show()
Explanation: Examine the spectrograms for different examples of the dataset:
End of explanation
def preprocess_dataset(files):
files_ds = tf.data.Dataset.from_tensor_slices(files)
output_ds = files_ds.map(
map_func=get_waveform_and_label,
num_parallel_calls=AUTOTUNE)
output_ds = output_ds.map(
map_func=get_spectrogram_and_label_id,
num_parallel_calls=AUTOTUNE)
return output_ds
train_ds = spectrogram_ds
val_ds = preprocess_dataset(val_files)
test_ds = preprocess_dataset(test_files)
Explanation: Build and train the model
Repeat the training set preprocessing on the validation and test sets:
End of explanation
batch_size = 64
train_ds = train_ds.batch(batch_size)
val_ds = val_ds.batch(batch_size)
Explanation: Batch the training and validation sets for model training:
End of explanation
train_ds = train_ds.cache().prefetch(AUTOTUNE)
val_ds = val_ds.cache().prefetch(AUTOTUNE)
Explanation: Add Dataset.cache and Dataset.prefetch operations to reduce read latency while training the model:
End of explanation
for spectrogram, _ in spectrogram_ds.take(1):
input_shape = spectrogram.shape
print('Input shape:', input_shape)
num_labels = len(commands)
# Instantiate the `tf.keras.layers.Normalization` layer.
norm_layer = layers.Normalization()
# Fit the state of the layer to the spectrograms
# with `Normalization.adapt`.
norm_layer.adapt(data=spectrogram_ds.map(map_func=lambda spec, label: spec))
model = models.Sequential([
layers.Input(shape=input_shape),
# Downsample the input.
layers.Resizing(32, 32),
# Normalize.
norm_layer,
layers.Conv2D(32, 3, activation='relu'),
layers.Conv2D(64, 3, activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.25),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dropout(0.5),
layers.Dense(num_labels),
])
model.summary()
Explanation: For the model, you'll use a simple convolutional neural network (CNN), since you have transformed the audio files into spectrogram images.
Your tf.keras.Sequential model will use the following Keras preprocessing layers:
tf.keras.layers.Resizing: to downsample the input to enable the model to train faster.
tf.keras.layers.Normalization: to normalize each pixel in the image based on its mean and standard deviation.
For the Normalization layer, its adapt method would first need to be called on the training data in order to compute aggregate statistics (that is, the mean and the standard deviation).
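A tiny standalone illustration of adapt on toy data (a sketch):
toy = tf.constant([[1.0], [2.0], [3.0]])
norm_demo = layers.Normalization()
norm_demo.adapt(toy)
print(norm_demo(toy))  # roughly [[-1.22], [0.], [1.22]] after standardization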
End of explanation
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'],
)
Explanation: Configure the Keras model with the Adam optimizer and the cross-entropy loss:
End of explanation
EPOCHS = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=EPOCHS,
callbacks=tf.keras.callbacks.EarlyStopping(verbose=1, patience=2),
)
Explanation: Train the model over 10 epochs for demonstration purposes:
End of explanation
metrics = history.history
plt.plot(history.epoch, metrics['loss'], metrics['val_loss'])
plt.legend(['loss', 'val_loss'])
plt.show()
Explanation: Let's plot the training and validation loss curves to check how your model has improved during training:
End of explanation
test_audio = []
test_labels = []
for audio, label in test_ds:
test_audio.append(audio.numpy())
test_labels.append(label.numpy())
test_audio = np.array(test_audio)
test_labels = np.array(test_labels)
y_pred = np.argmax(model.predict(test_audio), axis=1)
y_true = test_labels
test_acc = sum(y_pred == y_true) / len(y_true)
print(f'Test set accuracy: {test_acc:.0%}')
Explanation: Evaluate the model performance
Run the model on the test set and check the model's performance:
End of explanation
confusion_mtx = tf.math.confusion_matrix(y_true, y_pred)
plt.figure(figsize=(10, 8))
sns.heatmap(confusion_mtx,
xticklabels=commands,
yticklabels=commands,
annot=True, fmt='g')
plt.xlabel('Prediction')
plt.ylabel('Label')
plt.show()
Explanation: Display a confusion matrix
Use a <a href="https://developers.google.com/machine-learning/glossary#confusion-matrix" class="external">confusion matrix</a> to check how well the model did classifying each of the commands in the test set:
End of explanation
sample_file = data_dir/'no/01bb6a2a_nohash_0.wav'
sample_ds = preprocess_dataset([str(sample_file)])
for spectrogram, label in sample_ds.batch(1):
prediction = model(spectrogram)
plt.bar(commands, tf.nn.softmax(prediction[0]))
plt.title(f'Predictions for "{commands[label[0]]}"')
plt.show()
Explanation: Run inference on an audio file
Finally, verify the model's prediction output using an input audio file of someone saying "no". How well does your model perform?
End of explanation |
4,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
Step1: Following the steps prescribed by Jake Vanderplas in his awesome text Python Data Science Handbook. He has kindly provided all his codes on github as well.
Step 1. Choose a class of model.
In this case we are using linear regression
Step2: Step 2. Choose model hyperparameters.
Step3: Step 3. Arrange data into a features matrix and target vector
Step4: Step 4. Fit the model to your data.
Step5: If you are statistically trained, you would normally dig into other information such as normality of the residuals and check for autocorrelation etc. You may also want to evaluation the parameters as well. Those are valid statistical modelling questions.
Machine Learning's focus is on prediction. You will not find this information in the scikit-learn package. Do take note of this key difference between statistics and machine learning.
Step 5. Predict labels for unknown data | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
X = np.random.rand(100)
y = X + 0.1 * np.random.randn(100)
plt.scatter(X, y);
plt.show()
Explanation: Linear Regression
End of explanation
from sklearn.linear_model import LinearRegression
Explanation: Following the steps prescribed by Jake Vanderplas in his awesome text Python Data Science Handbook. He has kindly provided all his codes on github as well.
Step 1. Choose a class of model.
In this case we are using linear regression
End of explanation
model = LinearRegression(fit_intercept=True)
Explanation: Step 2. Choose model hyperparameters.
End of explanation
X = X.reshape(-1, 1)
X.shape
Explanation: Step 3. Arrange data into a features matrix and target vector
End of explanation
model.fit(X, y)
model.coef_
model.intercept_
Explanation: Step 4. Fit the model to your data.
End of explanation
x_test = np.linspace(0, 1)
x_test
y_pred = model.predict(x_test.reshape(-1,1))
plt.scatter(X, y)
plt.plot(x_test, y_pred);
plt.show()
Explanation: If you are statistically trained, you would normally dig into other information such as normality of the residuals and check for autocorrelation etc. You may also want to evaluate the parameters themselves. Those are valid statistical modelling questions.
Machine Learning's focus is on prediction. You will not find this information in the scikit-learn package. Do take note of this key difference between statistics and machine learning.
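That said, if you do want a quick look at the residuals yourself, a minimal sketch might be:
residuals = y - model.predict(X)
plt.hist(residuals, bins=20)
plt.title('Residuals of the fitted line')
plt.show()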
Step 5. Predict labels for unknown data
End of explanation |
4,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(quickstart)=
Quickstart
Step1: The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern.
How to sample a multi-dimensional Gaussian
We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by
Step2: Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob
Step3: It is important that the first argument of the probability function is
the position of a single "walker" (an N-dimensional
numpy array). The following arguments are going to be constant every
time the function is called and the values come from the args parameter
of our {class}EnsembleSampler that we'll see soon.
Now, we'll set up the specific values of those "hyperparameters" in 5
dimensions
Step4: and where cov is $\Sigma$.
How about we use 32 walkers? Before we go on, we need to guess a starting point for each
of the 32 walkers. This position will be a 5-dimensional vector so the
initial guess should be a 32-by-5 array.
It's not a very good guess but we'll just guess a
random number between 0 and 1 for each component
Step5: Now that we've gotten past all the bookkeeping stuff, we can move on to
the fun stuff. The main interface provided by emcee is the
{class}EnsembleSampler object so let's get ourselves one of those
Step6: Remember how our function log_prob required two extra arguments when it
was called? By setting up our sampler with the args argument, we're
saying that the probability function should be called as
Step7: If we didn't provide any
args parameter, the calling sequence would be log_prob(p0[0]) instead.
It's generally a good idea to run a few "burn-in" steps in your MCMC
chain to let the walkers explore the parameter space a bit and get
settled into the maximum of the density. We'll run a burn-in of 100
steps (yep, I just made that number up... it's hard to really know
how many steps of burn-in you'll need before you start) starting from
our initial guess p0
Step8: You'll notice that I saved the final position of the walkers (after the
100 steps) to a variable called state. You can check out what will be
contained in the other output variables by looking at the documentation for
the {func}EnsembleSampler.run_mcmc function. The call to the
{func}EnsembleSampler.reset method clears all of the important bookkeeping
parameters in the sampler so that we get a fresh start. It also clears the
current positions of the walkers so it's a good thing that we saved them
first.
Now, we can do our production run of 10000 steps
Step9: The samples can be accessed using the {func}EnsembleSampler.get_chain method.
This will return an array
with the shape (10000, 32, 5) giving the parameter values for each walker
at each step in the chain.
Take note of that shape and make sure that you know where each of those numbers comes from.
You can make histograms of these samples to get an estimate of the density that you were sampling
Step10: Another good test of whether or not the sampling went well is to check
the mean acceptance fraction of the ensemble using the
{func}EnsembleSampler.acceptance_fraction property
Step11: and the integrated autocorrelation time (see the {ref}autocorr tutorial for more details) | Python Code:
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
Explanation: (quickstart)=
Quickstart
End of explanation
import numpy as np
Explanation: The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern.
How to sample a multi-dimensional Gaussian
We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by:
$$
p(\vec{x}) \propto \exp \left [ - \frac{1}{2} (\vec{x} -
\vec{\mu})^\mathrm{T} \, \Sigma ^{-1} \, (\vec{x} - \vec{\mu})
\right ]
$$
where $\vec{\mu}$ is an $N$-dimensional vector position of the mean of the density and $\Sigma$ is the square N-by-N covariance matrix.
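Equivalently, taking the logarithm and dropping the additive normalization constant (which the sampler never needs), the quantity we will actually code up below is:
$$
\ln p(\vec{x}) = -\frac{1}{2} (\vec{x} - \vec{\mu})^\mathrm{T} \, \Sigma^{-1} \, (\vec{x} - \vec{\mu}) + \mathrm{constant}
$$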
The first thing that we need to do is import the necessary modules:
End of explanation
def log_prob(x, mu, cov):
diff = x - mu
return -0.5 * np.dot(diff, np.linalg.solve(cov, diff))
Explanation: Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob:
End of explanation
ndim = 5
np.random.seed(42)
means = np.random.rand(ndim)
cov = 0.5 - np.random.rand(ndim**2).reshape((ndim, ndim))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov, cov)
Explanation: It is important that the first argument of the probability function is
the position of a single "walker" (a N dimensional
numpy array). The following arguments are going to be constant every
time the function is called and the values come from the args parameter
of our {class}EnsembleSampler that we'll see soon.
Now, we'll set up the specific values of those "hyperparameters" in 5
dimensions:
End of explanation
nwalkers = 32
p0 = np.random.rand(nwalkers, ndim)
Explanation: and where cov is $\Sigma$.
How about we use 32 walkers? Before we go on, we need to guess a starting point for each
of the 32 walkers. This position will be a 5-dimensional vector so the
initial guess should be a 32-by-5 array.
It's not a very good guess but we'll just guess a
random number between 0 and 1 for each component:
End of explanation
import emcee
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=[means, cov])
Explanation: Now that we've gotten past all the bookkeeping stuff, we can move on to
the fun stuff. The main interface provided by emcee is the
{class}EnsembleSampler object so let's get ourselves one of those:
End of explanation
log_prob(p0[0], means, cov)
Explanation: Remember how our function log_prob required two extra arguments when it
was called? By setting up our sampler with the args argument, we're
saying that the probability function should be called as:
End of explanation
state = sampler.run_mcmc(p0, 100)
sampler.reset()
Explanation: If we didn't provide any
args parameter, the calling sequence would be log_prob(p0[0]) instead.
It's generally a good idea to run a few "burn-in" steps in your MCMC
chain to let the walkers explore the parameter space a bit and get
settled into the maximum of the density. We'll run a burn-in of 100
steps (yep, I just made that number up... it's hard to really know
how many steps of burn-in you'll need before you start) starting from
our initial guess p0:
End of explanation
sampler.run_mcmc(state, 10000);
Explanation: You'll notice that I saved the final position of the walkers (after the
100 steps) to a variable called state. You can check out what will be
contained in the other output variables by looking at the documentation for
the {func}EnsembleSampler.run_mcmc function. The call to the
{func}EnsembleSampler.reset method clears all of the important bookkeeping
parameters in the sampler so that we get a fresh start. It also clears the
current positions of the walkers so it's a good thing that we saved them
first.
Now, we can do our production run of 10000 steps:
End of explanation
import matplotlib.pyplot as plt
samples = sampler.get_chain(flat=True)
plt.hist(samples[:, 0], 100, color="k", histtype="step")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$p(\theta_1)$")
plt.gca().set_yticks([]);
Explanation: The samples can be accessed using the {func}EnsembleSampler.get_chain method.
This will return an array
with the shape (10000, 32, 5) giving the parameter values for each walker
at each step in the chain.
Take note of that shape and make sure that you know where each of those numbers comes from.
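For instance, a quick sanity check on those dimensions using the sampler above:
chain = sampler.get_chain()                  # (n_steps, n_walkers, n_dim) == (10000, 32, 5)
flat_samples = sampler.get_chain(flat=True)  # walkers merged together: (10000 * 32, 5)
print(chain.shape, flat_samples.shape)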
You can make histograms of these samples to get an estimate of the density that you were sampling:
End of explanation
print(
"Mean acceptance fraction: {0:.3f}".format(
np.mean(sampler.acceptance_fraction)
)
)
Explanation: Another good test of whether or not the sampling went well is to check
the mean acceptance fraction of the ensemble using the
{func}EnsembleSampler.acceptance_fraction property:
End of explanation
print(
"Mean autocorrelation time: {0:.3f} steps".format(
np.mean(sampler.get_autocorr_time())
)
)
Explanation: and the integrated autocorrelation time (see the {ref}autocorr tutorial for more details)
End of explanation |
4,357 | Given the following text description, write Python code to implement the functionality described.
Description:
A simple program which should return the value of x if n is
a prime number and should return the value of y otherwise.
Examples:
for x_or_y(7, 34, 12) == 34
for x_or_y(15, 8, 5) == 5
| Python Code:
def x_or_y(n, x, y):
    # Return x if n is prime, otherwise return y.
    if n < 2:
        return y
    for i in range(2, n):
        if n % i == 0:
            # n has a divisor other than 1 and itself, so it is not prime
            return y
    return x |
4,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
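For example — a purely hypothetical entry, taken from the valid choices listed in the cell above — the call would look like:
DOC.set_value("function of ice age")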
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
4,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
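For example — hypothetical only, using one of the two valid boolean choices listed in the cell above:
DOC.set_value(True)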
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
4,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Analysis in High Energy Physics
Step1: Two-tailed $p$-value
As for the two-tailed Gaussian,
$\displaystyle p(x) = P(\left|X\right| \geq x) = 1-\text{erf}\left(\frac{x}{\sqrt{2}\sigma}\right) \equiv \text{erfc}\left(\frac{x}{\sqrt{2}\sigma}\right)$,
it is seen that for $x=n \sigma$, then
$\displaystyle p(n \sigma) = P(\left|X\right| \geq n \sigma) = 1-\text{erf}\left(\frac{n}{\sqrt{2}}\right)$,
thus,
$\displaystyle \text{erf}\left(\frac{n}{\sqrt{2}}\right) = 1 - p(n \sigma)$.
Step2: However, at this point we are at an impass analytically, as the integral of a Gaussian function over a finite range has no analytical solution, and must be evaluated numerically.
So using erfc,
Step3: and using erf,
Step4: the same output is found (as required by the defintion of the functions).
One-tailed $p$-value
A one-sided p-value considers the probability for the data to have produced a value as extreme or grearer than the observed value on only one side of the distribution. For example, the p-value for the right tail of a Gaussian is $p(x) = \displaystyle P\left(X \geq x\right) = 1-\Phi(x)$, and the p-value for the left tail of a Gaussian is $p(-x) = \displaystyle P\left(X \leq -x\right) = \Phi(-x)$.
It is seen by symmetry $p(x) = p(-x)$ and that for a normalized Gaussian a one-tailed p-vaule is 1/2 that of a two-tailed p-value.
\begin{split}
p(x) = P\left(X \geq \left|x\right|\right)&= 1 - \frac{1}{\sqrt{2\pi}}\int\limits_{-\infty}^{x} e^{-t^2/2}\,dt = 1 - \frac{1}{2}\left(1+\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right)\
&= 1-\Phi(x)\
&= \frac{1}{2}\left(1-\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right) = \frac{1}{2}\text{erfc}\left(\frac{x}{\sqrt{2}}\right)
\end{split}
Step5: thus for $x = n \sigma$,
$\displaystyle \text{erf}\left(\frac{n\sigma}{\sqrt{2}}\right) = 1 - 2\,p(n \sigma)$.
Step6: Summary
Step7: Sanity Check | Python Code:
import math
import numpy as np
from scipy import special as special
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import norm  # replaces matplotlib.mlab.normpdf, which was removed in newer matplotlib
from prettytable import PrettyTable
Explanation: Data Analysis in High Energy Physics: Exercise 1.5 $p$-values
Find the number of standard deviations corresponding to $p$-values of 10%, 5%, and 1% for a Gaussian distribution. Consider both one-sided and two-sided $p$-values.
Reminder: The error function is defined as the symmetric integral over the range of the standard Gaussian,
$\displaystyle \text{erf}(x) = \frac{1}{\sqrt{\pi}} \int\limits_{-x}^{x}e^{-t^2}\,dt = \frac{2}{\sqrt{\pi}} \int\limits_{0}^{x}e^{-t^2}\,dt\,,$
and so the probability for Gaussian distributed data to lie within $y$ of the mean is
$\displaystyle P(\mu - y \leq x \leq \mu + y) = \int\limits_{\mu - y}^{\mu + y} \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}\,dx = \frac{2}{\sqrt{\pi}} \int\limits_{0}^{y/\sqrt{2}\sigma} e^{-t^2}\,dt = \text{erf}\left(\frac{y}{\sqrt{2}\sigma}\right)\,.$
End of explanation
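As a small numerical check of the relation quoted above (an illustrative addition, not part of the original exercise), erf(n/sqrt(2)) reproduces the familiar 68-95-99.7 rule:
# Illustrative check (added): erf(n/sqrt(2)) is the probability of lying within n sigma of the mean
import math
for n in (1, 2, 3):
    print("P(|X - mu| <= {} sigma) = {:.5f}".format(n, math.erf(n / math.sqrt(2.0))))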
mean = 0
sigma = 1
nsigma = 1
x = np.linspace(-5,5,100)
plt.plot(x, norm.pdf(x, mean, sigma), color='black')
xlTail = np.linspace(-5,-nsigma)
xrTail = np.linspace(nsigma,5)
plt.fill_between(xlTail, 0, norm.pdf(xlTail, mean, sigma), facecolor='red')
plt.fill_between(xrTail, 0, norm.pdf(xrTail, mean, sigma), facecolor='red')
plt.show()
Explanation: Two-tailed $p$-value
As for the two-tailed Gaussian,
$\displaystyle p(x) = P(\left|X\right| \geq x) = 1-\text{erf}\left(\frac{x}{\sqrt{2}\sigma}\right) \equiv \text{erfc}\left(\frac{x}{\sqrt{2}\sigma}\right)$,
it is seen that for $x=n \sigma$, then
$\displaystyle p(n \sigma) = P(\left|X\right| \geq n \sigma) = 1-\text{erf}\left(\frac{n}{\sqrt{2}}\right)$,
thus,
$\displaystyle \text{erf}\left(\frac{n}{\sqrt{2}}\right) = 1 - p(n \sigma)$.
End of explanation
pvalues = [0.10, 0.05, 0.01]
for p in pvalues:
print("{} standard deviations corresponds to a p-value of {}".format(math.sqrt(2.)*special.erfcinv(p),p))
Explanation: However, at this point we are at an impasse analytically, as the integral of a Gaussian function over a finite range has no analytical solution, and must be evaluated numerically.
So using erfc,
End of explanation
for p in pvalues:
print("{} standard deviations corresponds to a p-value of {}".format(math.sqrt(2.)*special.erfinv(1-p),p))
Explanation: and using erf,
End of explanation
plt.plot(x, norm.pdf(x, mean, sigma), color='black')
plt.fill_between(xrTail, 0, norm.pdf(xrTail, mean, sigma), facecolor='red')
plt.show()
Explanation: the same output is found (as required by the definition of the functions).
One-tailed $p$-value
A one-sided p-value considers the probability for the data to have produced a value as extreme as or greater than the observed value on only one side of the distribution. For example, the p-value for the right tail of a Gaussian is $p(x) = \displaystyle P\left(X \geq x\right) = 1-\Phi(x)$, and the p-value for the left tail of a Gaussian is $p(-x) = \displaystyle P\left(X \leq -x\right) = \Phi(-x)$.
It is seen by symmetry that $p(x) = p(-x)$ and that for a normalized Gaussian a one-tailed p-value is 1/2 that of a two-tailed p-value.
\begin{split}
p(x) = P\left(X \geq \left|x\right|\right)&= 1 - \frac{1}{\sqrt{2\pi}}\int\limits_{-\infty}^{x} e^{-t^2/2}\,dt = 1 - \frac{1}{2}\left(1+\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right)\
&= 1-\Phi(x)\
&= \frac{1}{2}\left(1-\text{erf}\left(\frac{x}{\sqrt{2}}\right)\right) = \frac{1}{2}\text{erfc}\left(\frac{x}{\sqrt{2}}\right)
\end{split}
End of explanation
for p in pvalues:
print("{} standard deviations corresponds to a p-value of {}".format((math.sqrt(2.)/sigma)*special.erfcinv(2*p),p))
print("")
for p in pvalues:
print("{} standard deviations corresponds to a p-value of {}".format((math.sqrt(2.)/sigma)*special.erfinv(1-2*p),p))
Explanation: thus for $x = n \sigma$,
$\displaystyle \text{erf}\left(\frac{n\sigma}{\sqrt{2}}\right) = 1 - 2\,p(n \sigma)$.
End of explanation
def nSigmaTwoTailed(p):
return math.sqrt(2.)*special.erfcinv(p)
def nSigmaOneTailed(p, sigma):
return (math.sqrt(2.)/sigma)*special.erfcinv(2*p)
t = PrettyTable()
t.field_names = ["p-values", "n sigma 2-tailed", "n sigma 1-tailed"]
for p in pvalues:
    t.add_row([p, nSigmaTwoTailed(p), nSigmaOneTailed(p, sigma)])
print(t)
Explanation: Summary
End of explanation
checkvalues = [0.317310507863, 0.045500263896, 0.002699796063, 0.000063342484, 0.000000573303]
for p in checkvalues:
print("{:0.3f} standard deviations corresponds to a p-value of {}".format(nSigmaTwoTailed(p),p))
Explanation: Sanity Check
End of explanation |
4,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Q 1
When talking about floating point, we discussed machine epsilon, $\epsilon$—this is the smallest number that when added to 1 is still different from 1.
We'll compute $\epsilon$ here
Step1: Q 4
The Fibbonacci sequence is a numerical sequence where each number is the sum of the 2 preceding numbers, e.g., 1, 1, 2, 3, 5, 8, 13, ...
Create a list where the elements are the terms in the Fibbonacci sequence
Step3: <span class="fa fa-star"></span> Q 7
Here's some text (the Gettysburg Address). Our goal is to count how many times each word repeats. We'll do a brute force method first, and then we'll look a ways to do it more efficiently (and compactly).
Step4: We've already seen the .split() method will, by default, split by spaces, so it will split this into words, producing a list
Step5: Now, the next problem is that some of these still have punctuation. In particular, we see ".", ",", and "--".
When considering a word, we can get rid of these by using the replace() method
Step6: Another problem is case—we want to count "but" and "But" as the same. Strings have a lower() method that can be used to covert a string
Step7: Recall that strings are immutable, so replace() produces a new string on output.
your task
Create a dictionary that uses the unique words as keys and has as a value the number of times that word appears.
Write a loop over the words in the string (using our split version) and do the following
Step8: More compact way
We can actually do this a lot more compactly by using another list comprehensions and another python datatype called a set. A set is a group of items, where each item is unique (e.g., no repetitions).
Here's a list comprehension that removes all the punctuation and converts to lower case
Step9: and by using the set() function, we turn the list into a set, removing any duplicates
Step10: now we can loop over the unique words and use the count method of a list to find how many there are
Step11: Even shorter -- we can use a dictionary comprehension, like a list comprehension | Python Code:
import random
random_number = random.randint(0,9)
Explanation: Exercises
Q 1
When talking about floating point, we discussed machine epsilon, $\epsilon$—this is the smallest number that when added to 1 is still different from 1.
We'll compute $\epsilon$ here:
Pick an initial guess for $\epsilon$ of eps = 1.
Create a loop that checks whether 1 + eps is different from 1
Each loop iteration, cut the value of eps in half
What value of $\epsilon$ do you find?
Q 2
To iterate over the tuples, where the i-th tuple contains i-th elements of certain sequences, we can use zip(*sequences) function.
We will iterate over two lists, names and age, and print out the resulting tuples.
Start by initializing lists names = ["Mary", "John", "Sarah"] and age = [21, 56, 98].
Iterate over the tuples containing a name and an age, the zip(list1, list2) function might be useful here.
Print out formatted strings of the type "NAME is AGE years old".
Q 3
The function enumerate(sequence) returns tuples containing indices of objects in the sequence, and the objects.
The random module provides tools for working with the random objects. In particular, random.randint(start, end) generates a random number not smaller than start, and not bigger than end.
Generate a list of 10 random numbers from 0 to 9.
Using the enumerate(random_list) function, iterate over the tuples of random numbers and their indices, and print out "Match: NUMBER and INDEX" if the random number and its index in the list match.
End of explanation
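One minimal sketch of an approach to Q 1 (an illustrative addition, not part of the original exercise set):
# Possible approach for Q 1 (illustrative): halve eps until 1 + eps is no longer different from 1
eps = 1.0
while 1.0 + eps != 1.0:
    eps = eps / 2.0
# the loop overshoots by one halving, so the last value that still differed is 2*eps
print("machine epsilon is approximately", 2.0 * eps)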
titles = ["don quixote",
"in search of lost time",
"ulysses",
"the odyssey",
"war and piece",
"moby dick",
"the divine comedy",
"hamlet",
"the adventures of huckleberry finn",
"the great gatsby"]
Explanation: Q 4
The Fibonacci sequence is a numerical sequence where each number is the sum of the 2 preceding numbers, e.g., 1, 1, 2, 3, 5, 8, 13, ...
Create a list where the elements are the terms in the Fibonacci sequence:
Start with the list fib = [1, 1]
Loop 25 times, compute the next term as the sum of the previous 2 terms and append to the list
After the loop is complete, print out the terms
You may find it useful to use fib[-1] and fib[-2] to access the last two items in the list
Q 5
We can use the input() function to ask for input from the prompt (note: in python 2 the function was called raw_input()).
Create an empty list and use a while loop to ask the user for input and append their input to the list. Keep looping until 10 items are added to the list
Q 6
Here is a list of book titles (from http://thegreatestbooks.org). Loop through the list and capitalize each word in each title. You might find the .capitalize() method that works on strings useful.
End of explanation
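One possible sketch for Q 4 (added for illustration; variable names are arbitrary):
# Illustrative sketch for Q 4: build the Fibonacci sequence iteratively
fib = [1, 1]
for i in range(25):
    fib.append(fib[-1] + fib[-2])
print(fib)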
gettysburg_address = """
Four score and seven years ago our fathers brought forth on this continent,
a new nation, conceived in Liberty, and dedicated to the proposition that
all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or
any nation so conceived and so dedicated, can long endure. We are met on
a great battle-field of that war. We have come to dedicate a portion of
that field, as a final resting place for those who here gave their lives
that that nation might live. It is altogether fitting and proper that we
should do this.
But, in a larger sense, we can not dedicate -- we can not consecrate -- we
can not hallow -- this ground. The brave men, living and dead, who struggled
here, have consecrated it, far above our poor power to add or detract. The
world will little note, nor long remember what we say here, but it can never
forget what they did here. It is for us the living, rather, to be dedicated
here to the unfinished work which they who fought here have thus far so nobly
advanced. It is rather for us to be here dedicated to the great task remaining
before us -- that from these honored dead we take increased devotion to that
cause for which they gave the last full measure of devotion -- that we here
highly resolve that these dead shall not have died in vain -- that this
nation, under God, shall have a new birth of freedom -- and that government
of the people, by the people, for the people, shall not perish from the earth.
"""
Explanation: <span class="fa fa-star"></span> Q 7
Here's some text (the Gettysburg Address). Our goal is to count how many times each word repeats. We'll do a brute force method first, and then we'll look at ways to do it more efficiently (and compactly).
End of explanation
ga = gettysburg_address.split()
ga
Explanation: We've already seen the .split() method will, by default, split by spaces, so it will split this into words, producing a list:
End of explanation
a = "end.,"
b = a.replace(".", "").replace(",", "")
b
Explanation: Now, the next problem is that some of these still have punctuation. In particular, we see ".", ",", and "--".
When considering a word, we can get rid of these by using the replace() method:
End of explanation
a = "But"
b = "but"
a == b
a.lower() == b.lower()
Explanation: Another problem is case—we want to count "but" and "But" as the same. Strings have a lower() method that can be used to convert a string:
End of explanation
# your code here
Explanation: Recall that strings are immutable, so replace() produces a new string on output.
your task
Create a dictionary that uses the unique words as keys and has as a value the number of times that word appears.
Write a loop over the words in the string (using our split version) and do the following:
* remove any punctuation
* convert to lowercase
* test if the word is already a key in the dictionary (using the in operator)
- if the key exists, increment the word count for that key
- otherwise, add it to the dictionary with the appropriate count of 1.
At the end, print out the words and a count of how many times they appear
End of explanation
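A brute-force version of the task might look like the following sketch (one possible implementation, shown before the more compact approaches below):
# Illustrative brute-force word count (one possible solution to the task above)
word_count = {}
for w in ga:
    w = w.replace(".", "").replace(",", "").replace("--", "").lower()
    if w == "":
        continue
    if w in word_count:
        word_count[w] += 1
    else:
        word_count[w] = 1
for w, n in word_count.items():
    print(w, n)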
words = [q.lower().replace(".", "").replace(",", "") for q in ga]
Explanation: More compact way
We can actually do this a lot more compactly by using another list comprehension and another python datatype called a set. A set is a group of items, where each item is unique (e.g., no repetitions).
Here's a list comprehension that removes all the punctuation and converts to lower case:
End of explanation
unique_words = set(words)
Explanation: and by using the set() function, we turn the list into a set, removing any duplicates:
End of explanation
count = {}
for uw in unique_words:
count[uw] = words.count(uw)
count
Explanation: now we can loop over the unique words and use the count method of a list to find how many there are
End of explanation
c = {uw: count[uw] for uw in unique_words}
c
Explanation: Even shorter -- we can use a dictionary comprehension, like a list comprehension
End of explanation |
4,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Blocking is typically done to reduce the number of tuple pairs considered for matching. There are several blocking methods proposed. The py_entitymatching package supports a subset of such blocking methods (#ref to what is supported). One such supported blocker is attribute equivalence blocker. This IPython notebook illustrates how to perform blocking using attribute equivalence blocker.
First, we need to import py_entitymatching package and other libraries as follows
Step1: Then, read the input tablse from the datasets directory
Step2: Different Ways to Block Using Attribute Equivalence Blocker
Once the tables are read, we can do blocking using attribute equivalence blocker.
There are three different ways to do attribute equivalence blocking
Step3: For the given two tables, we will assume that two persons with different zipcode values do not refer to the same real world person. So, we apply attribute equivalence blocking on zipcode. That is, we block all the tuple pairs that have different zipcodes.
Step4: Note that the tuple pairs in the candidate set have the same zipcode.
The attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in block_tables command (the key columns are included by default). Specifically, the list of attributes mentioned in l_output_attrs are picked from table A and the list of attributes mentioned in r_output_attrs are picked from table B. The attributes in the candidate set are prefixed based on l_output_prefix and r_ouptut_prefix parameter values mentioned in block_tables command.
Step5: Note that the metadata of C1 includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables.
Handling Missing Values
If the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because, including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set allow_missing paramater to be True.
Step6: The candidate set C2 includes all possible tuple pairs with missing values.
Block a Candidate Set of Tuple Pairs
In the above, we see that the candidate set produced after blocking over input tables include tuple pairs that have different birth years. We will assume that two persons with different birth years cannot refer to the same person. So, we block the candidate set of tuple pairs on birth_year. That is, we block all the tuple pairs that have different birth years.
Step7: Note that, the tuple pairs in the resulting candidate set have the same birth year.
The attributes included in the resulting candidate set are based on the input candidate set (i.e the same attributes are retained).
Step8: As we saw earlier the metadata of C3 includes the same metadata as C1. That is, it includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables.
Handling Missing Values
If the tuple pairs included in the candidate set have missing values in the blocking attribute, then they are ignored by default. This is because, including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set allow_missing paramater to be True.
Step9: We see that A1 is the left table to C2.
Step10: Block Two tuples To Check If a Tuple Pair Would Get Blocked
We can apply attribute equivalence blocking to a tuple pair to check if it is going to get blocked. For example, we can check if the first tuple from A and B will get blocked if we block on zipcode. | Python Code:
%load_ext autotime
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
import numpy as np
Explanation: Introduction
Blocking is typically done to reduce the number of tuple pairs considered for matching. There are several blocking methods proposed. The py_entitymatching package supports a subset of such blocking methods (#ref to what is supported). One such supported blocker is attribute equivalence blocker. This IPython notebook illustrates how to perform blocking using attribute equivalence blocker.
First, we need to import the py_entitymatching package and other libraries as follows:
End of explanation
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'
# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')
A.head()
B.head()
Explanation: Then, read the input tables from the datasets directory
End of explanation
# Instantiate attribute equivalence blocker object
ab = em.AttrEquivalenceBlocker()
Explanation: Different Ways to Block Using Attribute Equivalence Blocker
Once the tables are read, we can do blocking using attribute equivalence blocker.
There are three different ways to do attribute equivalence blocking:
Block two tables to produce a candidate set of tuple pairs.
Block a candidate set of tuple pairs to typically produce a reduced candidate set of tuple pairs.
Block two tuples to check if a tuple pair would get blocked.
Block Tables to Produce a Candidate Set of Tuple Pairs
End of explanation
# Use block_tables to apply blocking over two input tables.
C1 = ab.block_tables(A, B,
l_block_attr='zipcode', r_block_attr='zipcode',
l_output_attrs=['name', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_')
# Display the candidate set of tuple pairs
C1.head()
Explanation: For the given two tables, we will assume that two persons with different zipcode values do not refer to the same real world person. So, we apply attribute equivalence blocking on zipcode. That is, we block all the tuple pairs that have different zipcodes.
End of explanation
# Show the metadata of C1
em.show_properties(C1)
id(A), id(B)
Explanation: Note that the tuple pairs in the candidate set have the same zipcode.
The attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in the block_tables command (the key columns are included by default). Specifically, the list of attributes mentioned in l_output_attrs are picked from table A and the list of attributes mentioned in r_output_attrs are picked from table B. The attributes in the candidate set are prefixed based on the l_output_prefix and r_output_prefix parameter values mentioned in the block_tables command.
End of explanation
# Introduce some missing values
A1 = em.read_csv_metadata(path_A, key='ID')
A1.loc[0, 'zipcode'] = np.nan
A1.loc[0, 'birth_year'] = np.nan
A1
# Use block_tables to apply blocking over two input tables.
C2 = ab.block_tables(A1, B,
l_block_attr='zipcode', r_block_attr='zipcode',
l_output_attrs=['name', 'birth_year', 'zipcode'],
r_output_attrs=['name', 'birth_year', 'zipcode'],
l_output_prefix='l_', r_output_prefix='r_',
allow_missing=True) # setting allow_missing parameter to True
len(C1), len(C2)
C2
Explanation: Note that the metadata of C1 includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables.
Handling Missing Values
If the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set the allow_missing parameter to True.
End of explanation
# Instantiate Attr. Equivalence Blocker
ab = em.AttrEquivalenceBlocker()
# Use block_tables to apply blocking over two input tables.
C3 = ab.block_candset(C1, l_block_attr='birth_year', r_block_attr='birth_year')
C3.head()
Explanation: The candidate set C2 includes all possible tuple pairs with missing values.
Block a Candidate Set of Tuple Pairs
In the above, we see that the candidate set produced after blocking over input tables include tuple pairs that have different birth years. We will assume that two persons with different birth years cannot refer to the same person. So, we block the candidate set of tuple pairs on birth_year. That is, we block all the tuple pairs that have different birth years.
End of explanation
# Show the metadata of C3
em.show_properties(C3)
id(A), id(B)
Explanation: Note that, the tuple pairs in the resulting candidate set have the same birth year.
The attributes included in the resulting candidate set are based on the input candidate set (i.e the same attributes are retained).
End of explanation
# Display C2 (got by blocking over A1 and B)
C2
em.show_properties(C2)
em.show_properties(A1)
Explanation: As we saw earlier the metadata of C3 includes the same metadata as C1. That is, it includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables.
Handling Missing Values
If the tuple pairs included in the candidate set have missing values in the blocking attribute, then they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set the allow_missing parameter to True.
End of explanation
A1.head()
C4 = ab.block_candset(C2, l_block_attr='birth_year', r_block_attr='birth_year', allow_missing=False)
C4
# Set allow_missing to True
C5 = ab.block_candset(C2, l_block_attr='birth_year', r_block_attr='birth_year', allow_missing=True)
len(C4), len(C5)
C5
Explanation: We see that A1 is the left table to C2.
End of explanation
# Display the first tuple from table A
A.loc[[0]]
# Display the first tuple from table B
B.loc[[0]]
# Instantiate Attr. Equivalence Blocker
ab = em.AttrEquivalenceBlocker()
# Apply blocking to a tuple pair from the input tables on zipcode and get blocking status
status = ab.block_tuples(A.loc[0], B.loc[0], l_block_attr='zipcode', r_block_attr='zipcode')
# Print the blocking status
print(status)
Explanation: Block Two tuples To Check If a Tuple Pair Would Get Blocked
We can apply attribute equivalence blocking to a tuple pair to check if it is going to get blocked. For example, we can check if the first tuple from A and B will get blocked if we block on zipcode.
End of explanation |
4,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
22/10
Entrada y salida de datos
Funciones (parámetros por default, nombrados), módulos (distintas formas de importarlos)
CLASE DE LABORATORIO
Vencimiento TP1
Enunciado del TP 2
Funciones
Una de las premisas de Python era que "La legibilidad cuenta", y el uso de funciones ayudan mucho en que un código sea legible. <br>
En Python no existen los procedimientos
Step1: Aunque en ningún momento indicamos que lo que tiene que sumar son números, por lo que también puede sumar strings
Step3: Además, a esta función le podría agregar comentarios (docstrings) para que al hacer help de la función se entienda qué es lo que hace
Step5: El resultado de la función no es necesario que lo guarde en una variable, tranquilamente la puedo invocar y perder ese valor.
Step6: ¿Y qué sucede si no pongo el return en una función?
Step7: ¿Y si le asigno el resultado de este procedimiento a una variable?
Step8: Por lo que no existen los procedimientos, los "procedimientos" en realidad son funciones que devuelven None.
Y una prueba más de esto es el resultado de llamar a la función type y pasarle como parámetro la función sumar y el "procedimiento" imprimir
Step9: Ahora, si la función es un tipo de dato, significa que se lo puedo asignar a una variable...
Step10: ¿Y qué pasa si ahora llamo a mi_suma con los parámetros 1 y 2 como hice antes con sumar?
Step11: Lista de parámetros
¿Qué pasa cuando no sabemos cuántos parámetros nos pueden pasar, pero si sabemos qué hacer con ellos?
Step12: Parámetros por defecto
Algo más común que no saber la cantidad de parámetros que nos van a pasar es asumir que ciertos parámetros pueden no pasarlos y para ellos asumiremos un valor por defecto. <br>
Por ejemplo
Step13: Para esta función nos pueden pasar 2, 3, 4 o 5 parámetros. Si nos pasan los 5 parámetros, se imprimirán los valores que nos pasen
Step14: Ahora, si nos pasan 4 parámetros, el intérprete asumirá que el faltante es param5, por lo que dicho parámetro tomará el valor False. Y lo mismo pasa con el resto de los parámetros.
Step15: ¿Y si le pasamos un sólo parámetro?.
Step16: ¿Y qué pasa si quiero pasarle los parámetros 1, 2 y el 5?. <br>
No es problema, para eso tenemos que usar parámetros nombrados
Step17: Lo mismo pasa si lo que quiero cambiar es el cuatro parámetro
Step18: Hasta se pueden nombrar todos los parámetros
Step19: Si bien puede parecer innecesario el uso de parámetros nombrados, en algunas oportunidades se suele usar para agregar claridad y legibilidad al código, y en otros para pasarle un diccionario
Step20: Uso de módulos externos
Así como en Pascal usando la cláusula Uses podíamos usar código que no pertenecía al archivo que estábamos codificando, en Python podemos hacer lo mismo usando la cláusula import y poniendo a continuación el nombre del módulo. <br>
Por ejemplo, si queremos importar el módulo datetime para trabajar con fechas y horas, tendríamos que hacer
Step21: Pero a diferencia de Pascal y C, acá podemos elegir importar una función o algo en particular de ese módulo, en lugar de traerlo todo. Para eso tendríamos que poner en primer lugar la cláusula from, luego el nombre del módulo y a continuación la cláusula import todo lo que queremos importar separada por comas. <br>
Por ejemplo, del módulo datetime podríamos traer los submódulos date y time. Después, para usarlos simplemente lo hacemos llamando lo que importamos sin el nombre del módulo. <br> | Python Code:
def sumar(x, y): # Define the sumar function
return x + y
x = 4
z = 5
print sumar(x, z) # Call the sumar function with the arguments x and z
print sumar(1, 2) # Call the sumar function with the arguments 1 and 2
Explanation: 22/10
Input and output of data
Functions (default parameters, named parameters), modules (different ways to import them)
LAB CLASS
TP1 deadline
TP 2 assignment statement
Functions
One of Python's premises was that "Readability counts", and the use of functions helps a lot in making code readable. <br>
In Python there are no procedures: they are all functions. Even if we do not return any value, Python will do it for us, returning None. <br>
The way to return values is, just like in C, with the reserved word return followed by the value to return. Likewise, once that statement executes, no further statement of that function is executed, regardless of whether it is inside a loop or we have not done anything else yet. <br>
The definition of a function starts with the reserved word def, followed by a space, the function name, the parameters in parentheses (the parentheses are mandatory even if no parameters are passed) and a colon to end the line. The lines that follow contain the function body, and, just as with control structures, the block of code to execute is indicated by indentation.<br>
The function name has to follow the same rules as variable names: it can start with any letter or _ and can then be followed by any alphanumeric character plus _. <br>
For example:
End of explanation
print sumar('hola ', 'mundo')
Explanation: Even though at no point did we state that what it has to add are numbers, it can also add strings:
End of explanation
def sumar(x, y):
    """Sum two elements and return the result."""
return x + y
help(sumar)
Explanation: Moreover, we could add comments (docstrings) to this function so that calling help on it makes clear what it does:
End of explanation
def factorial(n):
    """Compute the factorial of a number iteratively."""
for i in range(1,n):
n *= i
return n
fact_5 = factorial(5) # compute the factorial of 5 and store it in fact_5
factorial(10) # compute the factorial of 10 without storing it in any variable
Explanation: The result of the function does not have to be stored in a variable; we can simply call it and discard the value.
End of explanation
def imprimir(msg):
print msg
imprimir('Hola mundo')
Explanation: And what happens if I don't put a return in a function?
End of explanation
resultado = imprimir('Hola mundo')
print resultado
Explanation: And what if I assign the result of this "procedure" to a variable?
End of explanation
print type(imprimir)
print type(sumar)
print sumar
Explanation: So there are no procedures: the "procedures" are actually functions that return None.
One more proof of this is the result of calling the type function and passing it the function sumar and the "procedure" imprimir:
End of explanation
mi_suma = sumar
Explanation: Now, if a function is a data type, that means I can assign it to a variable...
End of explanation
print mi_suma(1, 2)
Explanation: And what happens if I now call mi_suma with the arguments 1 and 2, as I did before with sumar?
End of explanation
def sumar(*args):
suma = 0
for e in args:
suma += e
return suma
print sumar(1, 2)
print sumar(1, 2, 3, 4, 5)
print sumar(*[1, 2, 3, 4, 5, 6])
print sumar
Explanation: Parameter lists
What happens when we don't know how many parameters we may receive, but we do know what to do with them?
End of explanation
def imprimir_parametros(param1, param2, param3=5, param4="es el cuarto parametro", param5=False):
print param1, param2, param3, param4, param5
Explanation: Default parameters
Something more common than not knowing how many parameters will be passed is assuming that some parameters may be omitted, in which case we assume a default value for them. <br>
For example:
End of explanation
imprimir_parametros(1, 2, 3, 4, 5)
Explanation: This function can be called with 2, 3, 4 or 5 parameters. If all 5 parameters are passed, the values passed are printed:
End of explanation
imprimir_parametros(1, 2, 3, 4)
imprimir_parametros(1, 2, 3)
imprimir_parametros(1, 2)
Explanation: Now, if 4 parameters are passed, the interpreter assumes the missing one is param5, so that parameter takes the value False. The same goes for the rest of the parameters.
End of explanation
imprimir_parametros(1)
Explanation: And if we pass it a single parameter?
End of explanation
imprimir_parametros(1, 2, param5="Este el parametro5")
Explanation: And what if I want to pass parameters 1, 2 and 5? <br>
That's no problem: for that we have to use named parameters:
End of explanation
imprimir_parametros(1, 2, param4=4)
Explanation: The same applies if what I want to change is the fourth parameter:
End of explanation
imprimir_parametros(param5=1, param3=2, param1=3, param2=4, param4=5)
Explanation: You can even name all the parameters:
End of explanation
parametros = {
'param1': 1,
'param2': 2,
'param3': 3,
'param4': 4,
'param5': 5,
}
imprimir_parametros(**parametros)
Explanation: Although the use of named parameters may seem unnecessary, sometimes it is used to add clarity and readability to the code, and other times to pass a dictionary:
End of explanation
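A related pattern not shown above is a function that receives arbitrary named parameters with **kwargs. The sketch below is an illustrative addition (the name mostrar_parametros is hypothetical), written in the same Python 2 style as the rest of these notes:
# Illustrative sketch (added): receiving arbitrary named parameters with **kwargs
def mostrar_parametros(**kwargs):
    for nombre, valor in kwargs.items():
        print nombre, '=', valor

mostrar_parametros(param1=1, param2='dos', param3=3.0)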
import datetime
print datetime.date.today()
Explanation: Using external modules
Just as in Pascal we could use code that did not belong to the file we were writing via the Uses clause, in Python we can do the same with the import clause followed by the module name. <br>
For example, if we want to import the datetime module to work with dates and times, we would do:
Python
import datetime
To use it we simply write the module name, a dot and the function we want to use. <br>
In this case, inside the datetime module we will use the function found in date called today().
End of explanation
from datetime import date, time
print date.today()
print time(1, 23, 32)
Explanation: But unlike Pascal and C, here we can choose to import a single function or some particular piece of that module instead of bringing in everything. To do so we put the from clause first, then the module name, and then the import clause followed by everything we want to import, separated by commas. <br>
For example, from the datetime module we could bring in the date and time submodules. Afterwards, to use them we simply call what we imported without the module name. <br>
End of explanation |
4,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Procesos ETVL usando IPython -- 9 -- Taller
Notas de clase sobre la extracción, transformación, visualización y carga de datos usando IPython
Juan David Velásquez Henao
[email protected]
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Licencia
Readme
Software utilizado.
Este es un documento interactivo escrito como un notebook de Jupyter, en el cual se presenta un tutorial sobre la extracción, transformación, visualización y carga de datos usando Python en el contexto de la ciencia de los datos. Los notebooks de Jupyter permiten incoporar simultáneamente código, texto, gráficos y ecuaciones. El código presentado en este notebook puede ejecutarse en los sistemas operativos Windows, Linux y OS X.
Haga click aquí para obtener instrucciones detalladas sobre como instalar Jupyter en Windows y Mac OS X.
Haga clic aquí para ver la última versión de este documento en nbviewer.
Descargue la última versión de este documento a su disco duro; luego, carguelo y ejecutelo en línea en Try Jupyter!
Contenido
Para el archivo AportesDiario_2015.csv, responda las siguientes preguntas usando IPython.
Step1: 1.-- Cuántos registros tiene el archivo?
Step2: 2.-- Cuántas regiones hidrológicas diferentes hay?
Step3: 3.-- Cuántos rios hay?
Step4: 4.-- Cuántos registros hay por región hidrológica?
Step5: 5.-- Cuál es el promedio de aportes en energía kWh por región?
Step6: 6.-- Cuáles registros no tienen datos?
Step7: 7.-- Grafique (gráfico de barras) la producción promedio por región hidrológica? | Python Code:
import pandas as pd
import statistics as st
import numpy as np
Explanation: ETVL processes using IPython -- 9 -- Workshop
Class notes on data extraction, transformation, visualization and loading using IPython
Juan David Velásquez Henao
[email protected]
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
License
Readme
Software used.
This is an interactive document written as a Jupyter notebook, which presents a tutorial on data extraction, transformation, visualization and loading using Python in the context of data science. Jupyter notebooks allow code, text, graphics and equations to be combined. The code presented in this notebook can be run on Windows, Linux and OS X.
Click here for detailed instructions on how to install Jupyter on Windows and Mac OS X.
Click here to see the latest version of this document on nbviewer.
Download the latest version of this document to your hard drive; then load it and run it online at Try Jupyter!
Contents
For the file AportesDiario_2015.csv, answer the following questions using IPython.
End of explanation
x=pd.read_csv('AportesDiario_2015.csv', sep=';',decimal=',',thousands='.',skiprows=2)
len(x)
x.head()
Explanation: 1.-- How many records does the file have?
End of explanation
len(set(x['Region Hidrologica']))
Explanation: 2.-- How many different hydrological regions are there?
End of explanation
len(set(x['Nombre Rio']))
Explanation: 3.-- How many rivers are there?
End of explanation
y = x.groupby('Region Hidrologica')
y.size()
Explanation: 4.-- How many records are there per hydrological region?
End of explanation
x.groupby('Region Hidrologica').mean()['Aportes %']
Explanation: 5.-- What is the average energy contribution in kWh per region?
End of explanation
Caudal=len(x[x['Aportes Caudal m3/s'].isnull()])
Aportes=len(x[x['Aportes Energia kWh'].isnull()])
Aport =len(x[x['Aportes %'].isnull()])
print (Caudal)
print (Aportes)
print (Aport)
# x.dropna() drops the records that contain NA values
len(x) - len(x.dropna())
Explanation: 6.-- Which records have no data?
End of explanation
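The cell above only counts the missing values; to actually display which records are missing data, one possible sketch (an illustrative addition, not in the original workshop notes) is:
# Illustrative sketch (added): show the records that contain at least one missing value
x[x.isnull().any(axis=1)]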
import matplotlib
%matplotlib inline
x.groupby('Region Hidrologica').mean()['Aportes Energia kWh'].plot(kind='bar')
Explanation: 7.-- Plot (bar chart) the average production per hydrological region.
End of explanation |
4,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Des dates qui font des nombres premiers (suite) ?
Ce petit notebook Jupyter, écrit en Python, a pour but de résoudre la question suivante
Step1: Elle marche très bien, et est très rapide !
Step2: Pour des nombres de 8 chiffres (c'est tout petit), elle est vraiment rapide
Step3: $\implies$ $65 ~\text{ms}$ pour 10000 nombres à tester, ça me semble assez rapide pour ce qu'on veut en faire !
Transformer une date en nombre
On va utiliser le module datetime de la bibliothèque standard
Step4: C'est ensuite facile de transformer une date en nombre, selon les deux formats.
On utilise le formatage avec .format() (en Python 3)
Step5: Tester toutes les opérations possibles
On utilise la fonction itertools.permutations pour obtenir les permutations des trois nombres $x,y,z$,
On applique les opérations dans l'ordre $f_1(f_2(x, y), z)$ et $f_1(x, f_2(y, z))$, ce qui suffit à couvrir tous les cas.
On utilise le module operator pour avoir des fonctions pour les opérations autorisées.
Step6: On peut la vérifier sur de petites entrées
Step7: On voit que stocker juste l'entier résultat ne suffit pas, on aimerait garder trace de chaque façon de l'obtenir !
Step8: Si on stocke avec comme clés les expressions, on va en avoir BEAUCOUP.
Faisons l'inverse, avec le résultat de l'expression comme clés.
Step9: Beaucoup plus raisonnable ! Ici, pour le 2ème exemple, le plus grand nombre premier obtenu est $7 = (3 \times 2) + 1$.
Step10: Il faut ignorer les erreurs de calculs et ne pas ajouter le nombre dans ce cas
Step11: Tester sur un jour
Step12: Tester tous les jours de l'année
On peut partir du 1er janvier de cette année, et ajouter des jours un par un.
On utilise un itérateur (avec le mot clé yield), pour pouvoir facilement boucler sur tous les jours de l'année en cours | Python Code:
from sympy import isprime
Explanation: Dates that make prime numbers (continued)?
This small Jupyter notebook, written in Python, aims to answer the following question:
"For a fixed day, using the day, the month and the two digits of the year as building blocks, together with the arithmetic operations $+,\times,-,\mod,\%$, what is the largest prime number that can be obtained?"
For example, in 2017, May 31st gives 31, 5, and 17.
$31 \times 5 \times 17$ is obviously not prime,
$17 \times (31 \mod 5) = 17$ is prime.
For other questions about dates and prime numbers, this first notebook is also interesting!
A first, naive solution
We will first write (or import) a function that tests whether an integer is prime,
Then we will write a function that turns a date into its three numbers,
And a function that tries every possible operation on the three numbers,
And finally a loop over the 365 (or 366) days of the year will be enough to display, for each day, the largest prime number obtained.
Testing primality, the cheat's version
sympy provides a sympy.isprime function.
End of explanation
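For comparison only, a hand-written trial-division test would look like the sketch below; sympy.isprime remains the faster and better-tested choice.
def is_prime_naive(n):
    # Trial division up to sqrt(n); good enough for the small numbers used here.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

[is_prime_naive(i) for i in [2, 3, 5, 7, 10, 11, 13, 17, 2017]]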
[isprime(i) for i in [2, 3, 5, 7, 10, 11, 13, 17, 2017]]
Explanation: It works very well, and it is very fast!
End of explanation
from numpy.random import randint
%timeit sum([isprime(i) for i in randint(1e8, 1e9-1, 10**4)])
Explanation: For 8-digit numbers (which is quite small), it is really fast:
End of explanation
from datetime import datetime
today = datetime.today()
YEAR = today.year
print("On va travailler avec l'année", YEAR, "!")
Explanation: $\implies$ $65 ~\text{ms}$ for 10000 numbers to test, which seems fast enough for what we want to do!
Turning a date into numbers
We will use the datetime module from the standard library:
End of explanation
def date_vers_nombre(date):
day = int("{:%d}".format(date))
month = int("{:%m}".format(date))
year = int("{:%Y}".format(date)[-2:])
return day, month, year
date = datetime(YEAR, 1, 12)
print(date_vers_nombre(date))
Explanation: It is then easy to turn a date into numbers, in either format.
We use formatting with .format() (in Python 3):
End of explanation
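The same triple can also be obtained without string formatting, by reading the date attributes directly (an equivalent sketch):
def date_to_numbers(date):
    # day, month, and the last two digits of the year, as plain integers
    return date.day, date.month, date.year % 100

print(date_to_numbers(datetime(YEAR, 1, 12)))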
from itertools import permutations
from operator import mod, mul, add, pow, sub, floordiv
operations = [mod, mul, add, sub, floordiv]
def tous_les_resultats(nombres, ops=operations):
assert len(nombres) == 3
tous = []
for (x, y, z) in permutations(nombres):
        # we now have one ordering of x, y, z
for f1 in ops:
for f2 in ops:
tous.append(f1(f2(x, y), z))
tous.append(f1(x, f2(y, z)))
    # remove duplicates here
return list(set(tous))
Explanation: Trying every possible operation
We use the itertools.permutations function to get the permutations of the three numbers $x,y,z$,
We apply the operations in the order $f_1(f_2(x, y), z)$ and $f_1(x, f_2(y, z))$, which is enough to cover all cases.
We use the operator module to get functions for the allowed operations.
End of explanation
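A tiny illustration of the two standard-library building blocks used above:
print(list(permutations([1, 2, 3])))  # the 6 possible orderings of three numbers
print(mul(add(1, 2), 3))              # (1 + 2) * 3 == 9, i.e. f1(f2(x, y), z) with f1=mul, f2=add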
tous_les_resultats([1, 2, 3], [add])
tous_les_resultats([1, 2, 3], [add, mul])
Explanation: We can check it on small inputs:
End of explanation
noms_operations = {
mod: '%',
mul: '*',
add: '+',
sub: '-',
floordiv: '/',
}
def tous_les_resultats_2(nombres, ops=operations):
assert len(nombres) == 3
tous = {}
for (x, y, z) in permutations(nombres):
        # we now have one ordering of x, y, z
for f1 in ops:
for f2 in ops:
n1 = f1(f2(x, y), z)
s1 = "{}({}({}, {}), {})".format(noms_operations[f1], noms_operations[f2], x, y, z)
tous[s1] = n1
n2 = f1(x, f2(y, z))
s2 = "{}({}, {}({}, {}))".format(noms_operations[f1], x, noms_operations[f2], y, z)
tous[s2] = n2
return tous
tous_les_resultats_2([1, 2, 3], [add])
tous_les_resultats_2([1, 2, 3], [add, mul])
Explanation: We see that storing only the resulting integer is not enough; we would like to keep track of each way of obtaining it!
End of explanation
def tous_les_resultats_3(nombres, ops=operations):
assert len(nombres) == 3
tous = {}
for (x, y, z) in permutations(nombres):
        # we now have one ordering of x, y, z
for f1 in ops:
for f2 in ops:
n1 = f1(f2(x, y), z)
s1 = "{}({}({}, {}), {})".format(noms_operations[f1], noms_operations[f2], x, y, z)
tous[n1] = s1
n2 = f1(x, f2(y, z))
s2 = "{}({}, {}({}, {}))".format(noms_operations[f1], x, noms_operations[f2], y, z)
tous[n2] = s2
return tous
tous_les_resultats_3([1, 2, 3], [add])
tous_les_resultats_3([1, 2, 3], [add, mul])
Explanation: If we store the expressions as keys, we will have A LOT of them.
Let's do the opposite, with the result of the expression as the key.
End of explanation
def plus_grand_premier(nombres, ops=operations):
tous = tous_les_resultats_3(nombres, ops=ops)
premiers = [ p for p in tous.keys() if isprime(p) ]
plus_grand_premier = max(premiers)
expression = tous[plus_grand_premier]
return plus_grand_premier, expression
plus_grand_premier([1, 2, 3], [add, mul])
plus_grand_premier([1, 2, 3])
Explanation: Much more reasonable! Here, for the 2nd example, the largest prime number obtained is $7 = (3 \times 2) + 1$.
End of explanation
def tous_les_resultats_4(nombres, ops=operations):
assert len(nombres) == 3
tous = {}
for (x, y, z) in permutations(nombres):
        # we now have one ordering of x, y, z
for f1 in ops:
for f2 in ops:
try:
n1 = f1(f2(x, y), z)
s1 = "{}({}({}, {}), {})".format(noms_operations[f1], noms_operations[f2], x, y, z)
tous[n1] = s1
except:
pass
try:
n2 = f1(x, f2(y, z))
s2 = "{}({}, {}({}, {}))".format(noms_operations[f1], x, noms_operations[f2], y, z)
tous[n2] = s2
except:
pass
return tous
def plus_grand_premier_2(nombres, ops=operations):
tous = tous_les_resultats_4(nombres, ops=ops)
premiers = [ p for p in tous.keys() if isprime(p) ]
plus_grand_premier = max(premiers)
expression = tous[plus_grand_premier]
return plus_grand_premier, expression
plus_grand_premier_2([1, 2, 3], [add, mul])
plus_grand_premier_2([1, 2, 3])
plus_grand_premier_2([12, 1, 93])
Explanation: We have to ignore computation errors and simply not add the number in that case:
End of explanation
date
x, y, z = date_vers_nombre(date)
plus_grand_premier_2([x, y, z])
Explanation: Testing on a single day
End of explanation
from datetime import timedelta
def tous_les_jours(year=YEAR):
date = datetime(year, 1, 1)
un_jour = timedelta(days=1)
for i in range(0, 366):
yield date
date += un_jour
        if date.year > year:  # we have gone past the end of the year
            return  # end the generator (raising StopIteration inside a generator is an error since Python 3.7, PEP 479)
tous = []
for date in tous_les_jours():
x, y, z = date_vers_nombre(date)
p, expr = plus_grand_premier_2([x, y, z])
tous.append(([x, y, z], p, expr))
print("Pour la date {:%d-%m-%Y}, le plus grand nombre premier obtenu est {}, avec l'expression {}.".format(date, p, expr))
max(tous, key=lambda t: t[1])
Explanation: Testing every day of the year
We can start from January 1st of the current year and add days one at a time.
We use an iterator (with the yield keyword) to loop easily over every day of the current year:
End of explanation |
4,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Selecting Columns Programmatically Using Column Expressions Tutorial
MLDB provides a complete implementation of the SQL SELECT statement. Most of the functions you are accustomed to using are available in your queries.
MLDB is different from traditional SQL databases in that there is no enforced schema on rows, allowing you to work with millions of columns of sparse data. This makes it easy to load and manipulate sparse datasets, even when there are millions of columns. To reduce the size of your dataset or use only specific variables, we may need to select columns based on specific criteria. Column Expressions is an MLDB extension that provides additional control over your column selection. With a column expression, you can programmatically return specific columns with a SQL SELECT statement.
In this tutorial, we will provide examples of <code>COLUMN EXPR</code> within <code>SELECT</code> statements. This tutorial assumes familiarity with Procedures and Datasets. We suggest going through the Procedures and Functions Tutorial and the Loading Data Tutorial beforehand.
Setting up
The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
Step2: Basic usage example
Let's begin by loading and visualizing our data. We will be using the dataset from the Virtual Manipulation of Datasets Tutorial. We had chosen the tokenize function to count the number of words in the Wikipedia descriptions of several Machine Learning concepts (please check out the tutorial for more details).
Step3: Each word is represented by a column and each Machine Learning concept by a row. We can run a simple SELECT query to take a quick look at the first 5 rows of our dataset.
Step5: There are 286 columns, some of which may or may not be useful to the data analysis at hand. For example, we may want to rebuild a dataset with
Step7: This is very powerful because the LIKE statement in Standard SQL is typically found in row operations and more rarely in column operations. MLDB makes it simple to use such SQL expressions on columns.
Using column expressions to keep columns that appear in multiple descriptions
With Column Expressions, we can select columns based on specific row selection criteria. <code>COLUMN EXPR</code> will allow us, for example, to choose words that appear in multiple descriptions. In this case, we filter on words that show up at least 4 times.
To achieve the desired outcome, we use a Built-in Function available in column expressions called rowCount. rowCount iterates through each column and returns the number of rows that have a value for the specific column.
Step8: The results make sense. The words that we found above in the columns are common in Machine Learning concept descriptions. With a plain SQL statement and the rowCount function, we reduced our dataset to include words that appear at least 4 times.
Nested JSON example
Nested JSON objects can have complex schemas, often involving multi-level and multidimensional data structures. In this section we will create a more complex dataset to illustrate ways to simplify data structures and column selection with Built-in Function and Column Expression.
Let's first create an empty dataset called 'toy_example'.
Step9: We will now create one row in the 'toy_example' dataset with the 'row1' JSON object below.
Step10: We will check out our data with a SELECT query.
Step12: There are many elements within the cell above. We will need to better structure elements within the nested JSON object.
Working with nested JSON objects with built-in functions and column expressions
To understand and query nested JSON objects, we will be using a Built-in Function called <code>parse_json</code> and a Column Expression <code>columnPathElement</code>.
This is where the parse_json function comes in handy. It will help us turn a multidimensional JSON object into a 2D dataset.
Step14: parse_json is a powerful feature since we can create 2D representations out of multidimensional data. We can read all of the elements of the JSON object on one line. It is also easier to query with SQL, as we will see below.
columnPathElement makes it convenient to navigate specific parts of the data structure. In the next block of code, we will do the following | Python Code:
from pymldb import Connection
mldb = Connection()
Explanation: Selecting Columns Programmatically Using Column Expressions Tutorial
MLDB provides a complete implementation of the SQL SELECT statement. Most of the functions you are accustomed to using are available in your queries.
MLDB is different from traditional SQL databases in that there is no enforced schema on rows, allowing you to work with millions of columns of sparse data. This makes it easy to load and manipulate sparse datasets, even when there are millions of columns. To reduce the size of your dataset or use only specific variables, we may need to select columns based on specific criteria. Column Expressions is an MLDB extension that provides additional control over your column selection. With a column expression, you can programmatically return specific columns with a SQL SELECT statement.
In this tutorial, we will provide examples of <code>COLUMN EXPR</code> within <code>SELECT</code> statements. This tutorial assumes familiarity with Procedures and Datasets. We suggest going through the Procedures and Functions Tutorial and the Loading Data Tutorial beforehand.
Setting up
The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
End of explanation
print mldb.put("/v1/procedures/import_ML_concepts", {
"type":"import.text",
"params": {
"dataFileUrl":"file://mldb/mldb_test_data/MachineLearningConcepts.csv",
"outputDataset":{
"id":"ml_concepts",
"type": "sparse.mutable"
},
"named": "Concepts",
"select":
tokenize(
lower(Text),
{splitChars: ' -''"?!;:/[]*,().',
minTokenLength: 4}) AS *
,
"runOnCreation": True
}
})
Explanation: Basic usage example
Let's begin by loading and visualizing our data. We will be using the dataset from the Virtual Manipulation of Datasets Tutorial. We had chosen the tokenize function to count the number of words in the Wikipedia descriptions of several Machine Learning concepts (please check out the tutorial for more details).
End of explanation
mldb.query("SELECT * FROM ml_concepts LIMIT 5")
Explanation: Each word is represented by a column and each Machine Learning concept by a row. We can run a simple SELECT query to take a quick look at the first 5 rows of our dataset.
End of explanation
mldb.query("""
    SELECT COLUMN EXPR (WHERE columnName() LIKE '%ing')
    FROM ml_concepts
    LIMIT 5
""")
Explanation: There are 286 columns, some of which may or may not be useful to the data analysis at hand. For example, we may want to rebuild a dataset with:
* verbs and adverbs that end with "ing"
* words that appear at least twice in each of the descriptions of the Machine Learning concepts.
This can be done in a few queries as you will see below.
Using column expressions to keep columns that end with "ing"
Column Expressions provide efficient ways of picking and choosing our columns. For example, we can only choose verbs and adverbs that end with "ing" to understand the overall meaning of a description.
We use the columnName column expression function along with the LIKE SQL expression, as you will see below.
End of explanation
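A hypothetical variation on the same pattern (not from the original tutorial) would filter on a different name pattern, for example columns whose name starts with a given prefix:
mldb.query("""
    SELECT COLUMN EXPR (WHERE columnName() LIKE 'learn%')
    FROM ml_concepts
    LIMIT 5
""")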
mldb.query("""
    SELECT COLUMN EXPR (WHERE rowCount() > 4)
    FROM ml_concepts
""")
Explanation: This is very powerful because the LIKE statement in Standard SQL is typically found in row operations and more rarely in column operations. MLDB makes it simple to use such SQL expressions on columns.
Using column expressions to keep columns that appear in multiple descriptions
With Column Expressions, we can select columns based on specific row selection criteria. <code>COLUMN EXPR</code> will allow us, for example, to choose words that appear in multiple descriptions. In this case, we filter on words that show up at least 4 times.
To achieve the desired outcome, we use a Built-in Function available in column expressions called rowCount. rowCount iterates through each column and returns the number of rows that have a value for the specific column.
End of explanation
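The two criteria can also be combined in a single column expression — a sketch, assuming boolean AND composes here as in regular SQL:
mldb.query("""
    SELECT COLUMN EXPR (WHERE rowCount() > 4 AND columnName() LIKE '%ing')
    FROM ml_concepts
""")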
# create dataset
print mldb.put('/v1/datasets/toy_example', { "type":"sparse.mutable" })
Explanation: The results make sense. The words that we found above in the columns are common in Machine Learning concept descriptions. With a plain SQL statement and the rowCount function, we reduced our dataset to include words that appear at least 4 times.
Nested JSON example
Nested JSON objects can have complex schemas, often involving multi-level and multidimensional data structures. In this section we will create a more complex dataset to illustrate ways to simplify data structures and column selection with Built-in Function and Column Expression.
Let's first create an empty dataset called 'toy_example'.
End of explanation
import json
row1 = {
"name": "Bob",
"address": {"city": "Montreal", "street": "Stanley"},
"sports": ["soccer","hockey"],
"friends": [{"name": "Mich", "age": 25}, {"name": "Jean", "age": 28}]
}
# update dataset by adding a row
mldb.post('/v1/datasets/toy_example/rows', {
"rowName": "row1",
"columns": [["data", json.dumps(row1), 0]]
})
# save changes
mldb.post("/v1/datasets/toy_example/commit")
Explanation: We will now create one row in the 'toy_example' dataset with the 'row1' JSON object below.
End of explanation
mldb.query("SELECT * FROM toy_example")
Explanation: We will check out our data with a SELECT query.
End of explanation
mldb.query("""
    SELECT parse_json(data, {arrays: 'parse'}) AS *
    FROM toy_example
""")
Explanation: There are many elements within the cell above. We will need to better structure elements within the nested JSON object.
Working with nested JSON objects with built-in functions and column expressions
To understand and query nested JSON objects, we will be using a Built-in Function called <code>parse_json</code> and a Column Expression <code>columnPathElement</code>.
This is where the parse_json function comes in handy. It will help us turn a multidimensional JSON object into a 2D dataset.
End of explanation
mldb.query("""
    SELECT COLUMN EXPR (WHERE columnPathElement(2) = 'name')
    FROM (
        SELECT parse_json(data, {arrays: 'parse'}) AS * NAMED rowPath() FROM toy_example
    )
""")
Explanation: parse_json is a powerful feature since we can create 2D representations out of multidimensional data. We can read all of the elements of the JSON object on one line. It is also easier to query with SQL, as we will see below.
columnPathElement makes it convenient to navigate specific parts of the data structure. In the next block of code, we will do the following:
* use parse_json to parse each data element of the object on one row (same as above)
* select specific cells using columnPathElement where the column path element at index = 2 is 'name' (note that 'friends' is at index = 0)
End of explanation |
4,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classes and Object Oriented Programming
We have looked at functions which take input and return output (or do things to the input). However, sometimes it is useful to think about objects first rather than the actions applied to them.
Think about a polynomial, such as the cubic
\begin{equation}
p(x) = 12 - 14 x + 2 x^3.
\end{equation}
This is one of the standard forms that we would expect to see for a polynomial. We could imagine representing this in Python using a container containing the coefficients, such as
Step1: The order of the polynomial is given by the number of coefficients (minus one), which is given by len(p_normal)-1.
However, there are many other ways it could be written, which are useful in different contexts. For example, we are often interested in the roots of the polynomial, so would want to express it in the form
\begin{equation}
p(x) = 2 (x - 1)(x - 2)(x + 3).
\end{equation}
This allows us to read off the roots directly. We could imagine representing this in Python using a container containing the roots, such as
Step2: combined with a single variable containing the leading term,
Step3: We see that the order of the polynomial is given by the number of roots (and hence by len(p_roots)). This form represents the same polynomial but requires two pieces of information (the roots and the leading coefficient).
The different forms are useful for different things. For example, if we want to add two polynomials the standard form makes it straightforward, but the factored form does not. Conversely, multiplying polynomials in the factored form is easy, whilst in the standard form it is not.
But the key point is that the object - the polynomial - is the same
Step4: We have defined a class, which is a single object that will represent a polynomial. We use the keyword class in the same way that we use the keyword def when defining a function. The definition line ends with a colon, and all the code defining the object is indented by four spaces.
The name of the object - the general class, or type, of the thing that we're defining - is Polynomial. The convention is that class names start with capital letters, but this convention is frequently ignored.
The type of object that we are building on appears in brackets after the name of the object. The most basic thing, which is used most often, is the object type as here.
Class variables are defined in the usual way, but are only visible inside the class. Variables that are set outside of functions, such as explanation above, will be common to all class variables.
Functions are defined inside classes in the usual way (using the def keyword, indented by four additional spaces). They work in a special way
Step5: The first line, p = Polynomial(), creates an instance of the class. That is, it creates a specific Polynomial. It is assigned to the variable named p. We can access class variables using the "dot" notation, so the string can be printed via p.explanation. The method that prints the class variable also uses the "dot" notation, hence p.explain(). The self variable in the definition of the function is the instance itself, p. This is passed through automatically thanks to the dot notation.
Note that we can change class variables in specific instances in the usual way (p.explanation = ... above). This only changes the variable for that instance. To check that, let us define two polynomials
Step6: We can of course make the methods take additional variables. We modify the class (note that we have to completely re-define it each time)
Step7: We then use this, remembering that the self variable is passed through automatically
Step9: At the moment the class is not doing anything interesting. To do something interesting we need to store (and manipulate) relevant variables. The first thing to do is to add those variables when the instance is actually created. We do this by adding a special function (method) which changes how the variables of type Polynomial are created
Step10: This __init__ function is called when a variable is created. There are a number of special class functions, each of which has two underscores before and after the name. This is another Python convention that is effectively a rule
Step12: Another special function that is very useful is __repr__. This gives a representation of the class. In essence, if you ask Python to print a variable, it will print the string returned by the __repr__ function. We can use this to create a simple string representation of the polynomial
Step14: The final special function we'll look at (although there are many more, many of which may be useful) is __mul__. This allows Python to multiply two variables together. With this we can take the product of two polynomials
Step16: We now have a simple class that can represent polynomials and multiply them together, whilst printing out a simple string form representing itself. This can obviously be extended to be much more useful.
Inheritance
As we can see above, building a complete class from scratch can be lengthy and tedious. If there is another class that does much of what we want, we can build on top of that. This is the idea behind inheritance.
In the case of the Polynomial we declared that it started from the object class in the first line defining the class
Step17: Variables of the Monomial class are also variables of the Polynomial class, so can use all the methods and functions from the Polynomial class automatically
Step19: We note that these functions, methods and variables may not be exactly right, as they are given for the general Polynomial class, not by the specific Monomial class. If we redefine these functions and variables inside the Monomial class, they will override those defined in the Polynomial class. We do not have to override all the functions and variables, just the parts we want to change
Step20: This has had no effect on the original Polynomial class and variables, which can be used as before
Step21: And, as Monomial variables are Polynomials, we can multiply them together to get a Polynomial
Step23: In fact, we can be a bit smarter than this. Note that the __init__ function of the Monomial class is identical to that of the Polynomial class, just with the leading_term set explicitly to 1. Rather than duplicating the code and modifying a single value, we can call the __init__ function of the Polynomial class directly. This is because the Monomial class is built on the Polynomial class, so knows about it. We regenerate the class, but only change the __init__ function | Python Code:
p_normal = (12, -14, 0, 2)
Explanation: Classes and Object Oriented Programming
We have looked at functions which take input and return output (or do things to the input). However, sometimes it is useful to think about objects first rather than the actions applied to them.
Think about a polynomial, such as the cubic
\begin{equation}
p(x) = 12 - 14 x + 2 x^3.
\end{equation}
This is one of the standard forms that we would expect to see for a polynomial. We could imagine representing this in Python using a container containing the coefficients, such as:
End of explanation
p_roots = (1, 2, -3)
Explanation: The order of the polynomial is given by the number of coefficients (minus one), which is given by len(p_normal)-1.
However, there are many other ways it could be written, which are useful in different contexts. For example, we are often interested in the roots of the polynomial, so would want to express it in the form
\begin{equation}
p(x) = 2 (x - 1)(x - 2)(x + 3).
\end{equation}
This allows us to read off the roots directly. We could imagine representing this in Python using a container containing the roots, such as:
End of explanation
p_leading_term = 2
Explanation: combined with a single variable containing the leading term,
End of explanation
class Polynomial(object):
explanation = "I am a polynomial"
def explain(self):
print(self.explanation)
Explanation: We see that the order of the polynomial is given by the number of roots (and hence by len(p_roots)). This form represents the same polynomial but requires two pieces of information (the roots and the leading coefficient).
The different forms are useful for different things. For example, if we want to add two polynomials the standard form makes it straightforward, but the factored form does not. Conversely, multiplying polynomials in the factored form is easy, whilst in the standard form it is not.
But the key point is that the object - the polynomial - is the same: the representation may appear different, but it's the object itself that we really care about. So we want to represent the object in code, and work with that object.
Classes
Python, and other languages that include object oriented concepts (which is most modern languages) allow you to define and manipulate your own objects. Here we will define a polynomial object step by step.
End of explanation
p = Polynomial()
print(p.explanation)
p.explain()
p.explanation = "I change the string"
p.explain()
Explanation: We have defined a class, which is a single object that will represent a polynomial. We use the keyword class in the same way that we use the keyword def when defining a function. The definition line ends with a colon, and all the code defining the object is indented by four spaces.
The name of the object - the general class, or type, of the thing that we're defining - is Polynomial. The convention is that class names start with capital letters, but this convention is frequently ignored.
The type of object that we are building on appears in brackets after the name of the object. The most basic thing, which is used most often, is the object type as here.
Class variables are defined in the usual way, but are only visible inside the class. Variables that are set outside of functions, such as explanation above, will be common to all class variables.
Functions are defined inside classes in the usual way (using the def keyword, indented by four additional spaces). They work in a special way: they are not called directly, but only when you have a member of the class. This is what the self keyword does: it takes the specific instance of the class and uses its data. Class functions are often called methods.
Let's see how this works on a specific example:
End of explanation
p = Polynomial()
p.explanation = "Changed the string again"
q = Polynomial()
p.explanation = "Changed the string a third time"
p.explain()
q.explain()
Explanation: The first line, p = Polynomial(), creates an instance of the class. That is, it creates a specific Polynomial. It is assigned to the variable named p. We can access class variables using the "dot" notation, so the string can be printed via p.explanation. The method that prints the class variable also uses the "dot" notation, hence p.explain(). The self variable in the definition of the function is the instance itself, p. This is passed through automatically thanks to the dot notation.
Note that we can change class variables in specific instances in the usual way (p.explanation = ... above). This only changes the variable for that instance. To check that, let us define two polynomials:
End of explanation
class Polynomial(object):
explanation = "I am a polynomial"
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
Explanation: We can of course make the methods take additional variables. We modify the class (note that we have to completely re-define it each time):
End of explanation
r = Polynomial()
r.explain_to("Alice")
Explanation: We then use this, remembering that the self variable is passed through automatically:
End of explanation
class Polynomial(object):
    """Representing a polynomial."""
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
Explanation: At the moment the class is not doing anything interesting. To do something interesting we need to store (and manipulate) relevant variables. The first thing to do is to add those variables when the instance is actually created. We do this by adding a special function (method) which changes how the variables of type Polynomial are created:
End of explanation
p = Polynomial(p_roots, p_leading_term)
p.explain_to("Alice")
q = Polynomial((1,1,0,-2), -1)
q.explain_to("Bob")
Explanation: This __init__ function is called when a variable is created. There are a number of special class functions, each of which has two underscores before and after the name. This is another Python convention that is effectively a rule: functions surrounded by two underscores have special effects, and will be called by other Python functions internally. So now we can create a variable that represents a specific polynomial by storing its roots and the leading term:
End of explanation
class Polynomial(object):
    """Representing a polynomial."""
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def __repr__(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
p = Polynomial(p_roots, p_leading_term)
print(p)
q = Polynomial((1,1,0,-2), -1)
print(q)
Explanation: Another special function that is very useful is __repr__. This gives a representation of the class. In essence, if you ask Python to print a variable, it will print the string returned by the __repr__ function. We can use this to create a simple string representation of the polynomial:
End of explanation
class Polynomial(object):
    """Representing a polynomial."""
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def __repr__(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def __mul__(self, other):
roots = self.roots + other.roots
leading_term = self.leading_term * other.leading_term
return Polynomial(roots, leading_term)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
p = Polynomial(p_roots, p_leading_term)
q = Polynomial((1,1,0,-2), -1)
r = p*q
print(r)
Explanation: The final special function we'll look at (although there are many more, many of which may be useful) is __mul__. This allows Python to multiply two variables together. With this we can take the product of two polynomials:
End of explanation
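One more special method that fits naturally here (an optional extension, not part of the class above) is __call__, which lets a polynomial be evaluated at a point using its factored form:
class CallablePolynomial(Polynomial):
    # Hypothetical subclass: evaluate leading_term * (x - root1)(x - root2)... at the point x.
    def __call__(self, x):
        result = self.leading_term
        for root in self.roots:
            result = result * (x - root)
        return result

cp = CallablePolynomial(p_roots, p_leading_term)
print(cp(0))   # 2*(0-1)*(0-2)*(0+3) = 12, matching the constant term of the cubic above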
class Monomial(Polynomial):
    """Representing a monomial, which is a polynomial with leading term 1."""
def __init__(self, roots):
self.roots = roots
self.leading_term = 1
self.order = len(roots)
Explanation: We now have a simple class that can represent polynomials and multiply them together, whilst printing out a simple string form representing itself. This can obviously be extended to be much more useful.
Inheritance
As we can see above, building a complete class from scratch can be lengthy and tedious. If there is another class that does much of what we want, we can build on top of that. This is the idea behind inheritance.
In the case of the Polynomial we declared that it started from the object class in the first line defining the class: class Polynomial(object). But we can build on any class, by replacing object with something else. Here we will build on the Polynomial class that we've started with.
A monomial is a polynomial whose leading term is simply 1. A monomial is a polynomial, and could be represented as such. However, we could build a class that knows that the leading term is always 1: there may be cases where we can take advantage of this additional simplicity.
We build a new monomial class as follows:
End of explanation
m = Monomial((-1, 4, 9))
m.explain_to("Caroline")
print(m)
Explanation: Variables of the Monomial class are also variables of the Polynomial class, so can use all the methods and functions from the Polynomial class automatically:
End of explanation
class Monomial(Polynomial):
    """Representing a monomial, which is a polynomial with leading term 1."""
explanation = "I am a monomial"
def __init__(self, roots):
self.roots = roots
self.leading_term = 1
self.order = len(roots)
def __repr__(self):
string = ""
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
m = Monomial((-1, 4, 9))
m.explain_to("Caroline")
print(m)
Explanation: We note that these functions, methods and variables may not be exactly right, as they are given for the general Polynomial class, not by the specific Monomial class. If we redefine these functions and variables inside the Monomial class, they will override those defined in the Polynomial class. We do not have to override all the functions and variables, just the parts we want to change:
End of explanation
s = Polynomial((2, 3), 4)
s.explain_to("David")
print(s)
Explanation: This has had no effect on the original Polynomial class and variables, which can be used as before:
End of explanation
t = m*s
t.explain_to("Erik")
print(t)
Explanation: And, as Monomial variables are Polynomials, we can multiply them together to get a Polynomial:
End of explanation
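A quick, illustrative check that the "is-a" relationship works the way the text describes:
print(isinstance(m, Polynomial), isinstance(m, Monomial))   # True True: a Monomial is a Polynomial
print(isinstance(s, Monomial))                              # False: a general Polynomial is not a Monomial
print(isinstance(t, Polynomial), isinstance(t, Monomial))   # True False: the product is a plain Polynomial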
class Monomial(Polynomial):
    """Representing a monomial, which is a polynomial with leading term 1."""
explanation = "I am a monomial"
def __init__(self, roots):
Polynomial.__init__(self, roots, 1)
def __repr__(self):
string = ""
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
v = Monomial((2, -3))
v.explain_to("Fred")
print(v)
Explanation: In fact, we can be a bit smarter than this. Note that the __init__ function of the Monomial class is identical to that of the Polynomial class, just with the leading_term set explicitly to 1. Rather than duplicating the code and modifying a single value, we can call the __init__ function of the Polynomial class directly. This is because the Monomial class is built on the Polynomial class, so knows about it. We regenerate the class, but only change the __init__ function:
End of explanation |
4,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This script is for retrieving images based on a sketch query
Step1: caffe
First, we need to import caffe. You'll need to have caffe installed, as well as the Python interface for caffe.
Step2: Now we can load up the network. You can change the path to your own network here. Make sure to use the matching deploy prototxt files and change the target layer to your layer name.
Step3: Retrieving images
The following script shows how to use our network to do the retrieval. The easiest way to use the script is to simply put all the images you want to retrieve in one folder and modify 'photo_paths' to point to your folder. Then change 'sketch_path' to point to the sketch you want to use as a query.
Extracting image feats
Step4: Show top 5 retrieval results | Python Code:
import numpy as np
from pylab import *
%matplotlib inline
import os
import sys
Explanation: This script is for retrieving images based on a sketch query
End of explanation
#TODO: specify your caffe root folder here
caffe_root = "X:\caffe_siggraph/caffe-windows-master"
sys.path.insert(0, caffe_root+'/python')
import caffe
Explanation: caffe
First, we need to import caffe. You'll need to have caffe installed, as well as the Python interface for caffe.
End of explanation
#TODO: change to your own network and deploying file
PRETRAINED_FILE = '../models/triplet_googlenet/triplet_googlenet_finegrain_final.caffemodel'
sketch_model = '../models/triplet_googlenet/googlenet_sketchdeploy.prototxt'
image_model = '../models/triplet_googlenet/googlenet_imagedeploy.prototxt'
caffe.set_mode_gpu()
#caffe.set_mode_cpu()
sketch_net = caffe.Net(sketch_model, PRETRAINED_FILE, caffe.TEST)
img_net = caffe.Net(image_model, PRETRAINED_FILE, caffe.TEST)
sketch_net.blobs.keys()
#TODO: set output layer name. You can use sketch_net.blobs.keys() to list all layer
output_layer_sketch = 'pool5/7x7_s1_s'
output_layer_image = 'pool5/7x7_s1_p'
#set the transformer
transformer = caffe.io.Transformer({'data': np.shape(sketch_net.blobs['data'].data)})
transformer.set_mean('data', np.array([104, 117, 123]))
transformer.set_transpose('data',(2,0,1))
transformer.set_channel_swap('data', (2,1,0))
transformer.set_raw_scale('data', 255.0)
Explanation: Now we can load up the network. You can change the path to your own network here. Make sure to use the matching deploy prototxt files and change the target layer to your layer name.
End of explanation
#TODO: specify photo folder for the retrieval
photo_paths = 'C:\Users\Patsorn\Documents/notebook_backup/SBIR/retrieval/'
#load up images
img_list = os.listdir(photo_paths)
N = np.shape(img_list)[0]
print 'Retrieving from', N,'photos'
#extract feature for all images
feats = []
for i,path in enumerate(img_list):
imgname = path.split('/')[-1]
imgname = imgname.split('.jpg')[0]
imgcat = path.split('/')[0]
print '\r',str(i+1)+'/'+str(N)+ ' '+'Extracting ' +path+'...',
full_path = photo_paths + path
img = (transformer.preprocess('data', caffe.io.load_image(full_path.rstrip())))
img_in = np.reshape([img],np.shape(sketch_net.blobs['data'].data))
out_img = img_net.forward(data=img_in)
out_img = np.copy(out_img[output_layer_image])
feats.append(out_img)
print 'done',
np.shape(feats)
feats = np.resize(feats,[np.shape(feats)[0],np.shape(feats)[2]]) # quick fix for the array shape
#build nn pool
from sklearn.neighbors import NearestNeighbors,LSHForest
nbrs = NearestNeighbors(n_neighbors=np.size(feats,0), algorithm='brute',metric='cosine').fit(feats)
Explanation: Retrieving images
The following script shows how to use our network to do the retrieval. The easiest way to use the script is to simply put all the images you want to retrieve in one folder and modify 'photo_paths' to point to your folder. Then change 'sketch_path' to point to the sketch you want to use as a query.
Extracting image feats
End of explanation
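For reference, the same ranking could be computed by hand with cosine similarity instead of scikit-learn's NearestNeighbors — a sketch, assuming feats is the N x D feature matrix built above:
def rank_by_cosine(query_feat, feats):
    # Cosine similarity between one query vector and every image feature, most similar first.
    q = query_feat / np.linalg.norm(query_feat)
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return np.argsort(-f.dot(q))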
#Load up sketch query
sketch_path = "X:\data_for_research\sketch_dataset\png/giraffe/7366.png"
sketch_in = (transformer.preprocess('data', caffe.io.load_image(sketch_path)))
sketch_in = np.reshape([sketch_in],np.shape(sketch_net.blobs['data'].data))
query = sketch_net.forward(data=sketch_in)
query=np.copy(query[output_layer_sketch])
#get nn
distances, indices = nbrs.kneighbors(np.reshape(query,[np.shape(query)[1]]))
#show query
f = plt.figure(0)
plt.imshow(plt.imread(sketch_path))
plt.axis('off')
#show results
for i in range(1,5,1):
f = plt.figure(i)
img = plt.imread(photo_paths+img_list[indices[0][i-1]])
plt.imshow(img)
plt.axis('off')
plt.show(block=False)
Explanation: Show top 5 retrieval results
End of explanation |
4,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recon Cartographer
A script to transform a photo of a region sticker discovered through a recon action into a grayscale template that is convenient for printing. Different arrangements of supply lines can be experimented with by drawing on the printout ahead of making the decision within the game.
Step1: Color threshold
Convert the color or RGB image into HSV color format in order to threshold based upon color and brightness.
* Hue contains information about the color, such as a particular shade of blue or orange.
* Saturation contains information about how much background color there is. The highest level of saturation gives the purest color, with no background; the lowest level looks washed out and gray, because the color doesn't stand out from any other color.
* Value contains information about the brightness of a color. Low values will be black and high values will be intense and bright.
The image is converted into a pandas dataframe for the convenience of plotting histograms with the seaborn python library.
Step2: Based on the histogram of the hue, threshold the hue such that only the yellowish colors remain.
Step3: Add the cities back using a hough transform. | Python Code:
%matplotlib inline
import io
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas
import seaborn as sns
import skimage
import skimage.color
import skimage.data
import skimage.feature
import skimage.filters
import skimage.future
import skimage.io
import skimage.morphology
import skimage.segmentation
import skimage.transform
from google.cloud import vision
from google.cloud.vision import types
# first_recon.png was captured using an iPhone 7 Plus in a room illuminated with daylight. No flash.
path_im = "first_recon.png"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]=r"C:\Users\karhohs\Downloads\bgb-jupyter-0eee48249fae.json"
im = skimage.io.imread(path_im)
skimage.io.imshow(im)
path_im = "first_recon_scale1.png"
with io.open(path_im, 'rb') as image_file:
content = image_file.read()
image = types.Image(content=content)
client = vision.ImageAnnotatorClient()
response = client.label_detection(image=image)
labels = response.label_annotations
print('Labels:')
for label in labels:
    print(label.description, label.score)
Explanation: Recon Cartographer
A script to transform a photo of a region sticker discovered through a recon action into a grayscale template that is convenient for printing. Different arrangements of supply lines can be experimented with by drawing on the printout ahead of making the decision within the game.
End of explanation
im_hsv = skimage.color.rgb2hsv(im)
im_hsv_dict = {}
im_hsv_dict["hue"] = im_hsv[:,:,0].flatten()
im_hsv_dict["sat"] = im_hsv[:,:,1].flatten()
im_hsv_dict["val"] = im_hsv[:,:,2].flatten()
df_hsv = pandas.DataFrame.from_dict(im_hsv_dict)
sns.set(style="ticks", color_codes=True)
# Set up the matplotlib figure
f, axes = plt.subplots(1, 3, figsize=(20, 8), sharex=True)
sns.despine(left=True)
# hue
dplot_hue = sns.distplot(df_hsv["hue"], color="b", kde=False, ax=axes[0])
p_num = len(dplot_hue.patches)
cmap_hsv = plt.get_cmap("hsv", 50)
hsv_array = cmap_hsv(range(p_num))
for ind, p in enumerate(dplot_hue.patches):
p.set_facecolor(hsv_array[ind])
p.set_alpha(1.0)
# sat
dplot_hue = sns.distplot(df_hsv["sat"], color="k", kde=False, ax=axes[1])
# val
dplot_val = sns.distplot(df_hsv["val"], color="k", kde=False, ax=axes[2])
sns.palplot(hsv_array)
Explanation: Color threshold
Convert the color or RGB image into HSV color format in order to threshold based upon color and brightness.
* Hue contains information about the color, such as a particular shade of blue or orange.
* Saturation contains information about how much background color there is. The highest level of saturation gives the purest color, with no background; the lowest level looks washed out and gray, because the color doesn't stand out from any other color.
* Value contains information about the brightness of a color. Low values will be black and high values will be intense and bright.
The image is converted into a pandas dataframe for the convenience of plotting histograms with the seaborn python library.
End of explanation
im2 = im_hsv[:,:,0]
im2 = im2 < 0.3
skimage.io.imshow(im2)
Explanation: Based on the histogram of the hue, threshold the hue such that only the yellowish colors remain.
End of explanation
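A slightly stricter mask (illustrative only; the exact bounds are guesses that would need tuning against the histograms above) could require both a yellowish hue and some minimum saturation:
# Hypothetical combined threshold on hue and saturation.
mask = (im_hsv[:, :, 0] < 0.3) & (im_hsv[:, :, 1] > 0.25)
skimage.io.imshow(mask)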
im_s = im_hsv[:,:,1]
im_s = skimage.morphology.erosion(im_s, skimage.morphology.selem.disk(11))
im_edge = skimage.filters.sobel(im_s)
thresh = skimage.filters.threshold_otsu(im_edge)
im_edge = im_edge > thresh
contours = skimage.measure.find_contours(skimage.img_as_float(im_edge), 0.99)
im_contour = skimage.img_as_uint(np.zeros_like(im_s))
for ind, obj in enumerate(contours):
for xy in obj:
im_contour[xy[0].astype(int), xy[1].astype(int)] = ind + 1
props = skimage.measure.regionprops(im_contour)
contour_props = {}
contour_props["area"] = [p["area"] for p in props]
contour_props["eccentricity"] = [p["eccentricity"] for p in props]
df_contour = pandas.DataFrame.from_dict(contour_props)
sns.distplot(df_contour["eccentricity"])
df_circular = df_contour.loc[(df_contour["area"] > 1000)]
candidate_circles = df_circular.index.tolist()
candidate_contours = [contours[i] for i in candidate_circles]
sns.distplot(df_circular["area"])
fig, ax = plt.subplots()
ax.imshow(np.zeros_like(im_s))
for n, contour in enumerate(candidate_contours):
ax.plot(contour[:, 1], contour[:, 0], linewidth=2)
ax.axis('image')
ax.set_xticks([])
ax.set_yticks([])
plt.axis()
plt.show()
im_gray = skimage.color.rgb2gray(im)
im_gray_small = skimage.transform.rescale(im2,0.125)
im_edge = skimage.filters.prewitt(im_gray_small)
im_edge = skimage.morphology.dilation(im_edge)
hough_radii = np.arange(15, 40, 10)
hough_res = skimage.transform.hough_circle(im_gray_small, 20)
accums, cx, cy, radii = skimage.transform.hough_circle_peaks(hough_res, hough_radii, total_num_peaks=3)
radii
skimage.io.imshow(im_edge)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = skimage.color.gray2rgb(im_gray_small)
for center_y, center_x, radius in zip(cy, cx, radii):
circy, circx = skimage.draw.circle_perimeter(center_y, center_x, radius)
image[circy, circx] = (220, 20, 20)
ax.imshow(image)
plt.show()
img = im
labels1 = skimage.segmentation.slic(img, compactness=30, n_segments=400)
out1 = skimage.color.label2rgb(labels1, img, kind='avg')
g = skimage.future.graph.rag_mean_color(img, labels1, mode='similarity')
labels2 = skimage.future.graph.cut_normalized(labels1, g)
out2 = skimage.color.label2rgb(labels2, img, kind='avg')
fig, ax = plt.subplots(nrows=2, sharex=True, sharey=True, figsize=(6, 8))
ax[0].imshow(out1)
ax[1].imshow(out2)
for a in ax:
a.axis('off')
plt.tight_layout()
segments = skimage.segmentation.felzenszwalb(im, scale=500.0, sigma=3.0, min_size=5)
skimage.io.imshow(segments)
segments = skimage.segmentation.active_contour(im)
Explanation: Add the cities back using a hough transform.
End of explanation |
4,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing initial sampling methods on integer space
Holger Nahrstaedt 2020 Sigurd Carlsen October 2019
.. currentmodule
Step1: Random sampling
Step2: Sobol'
Step3: Classic latin hypercube sampling
Step4: Centered latin hypercube sampling
Step5: Maximin optimized hypercube sampling
Step6: Correlation optimized hypercube sampling
Step7: Ratio optimized hypercube sampling
Step8: Halton sampling
Step9: Hammersly sampling
Step10: Grid sampling
Step11: Pdist boxplot of all methods
This boxplot shows the distance between all generated points using
Euclidean distance. The higher the value, the better the sampling method.
It can be seen that random has the worst performance | Python Code:
print(__doc__)
import numpy as np
np.random.seed(1234)
import matplotlib.pyplot as plt
from skopt.space import Space
from skopt.sampler import Sobol
from skopt.sampler import Lhs
from skopt.sampler import Halton
from skopt.sampler import Hammersly
from skopt.sampler import Grid
from scipy.spatial.distance import pdist
def plot_searchspace(x, title):
fig, ax = plt.subplots()
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bo', label='samples')
plt.plot(np.array(x)[:, 0], np.array(x)[:, 1], 'bs', markersize=40, alpha=0.5)
# ax.legend(loc="best", numpoints=1)
ax.set_xlabel("X1")
ax.set_xlim([0, 5])
ax.set_ylabel("X2")
ax.set_ylim([0, 5])
plt.title(title)
ax.grid(True)
n_samples = 10
space = Space([(0, 5), (0, 5)])
Explanation: Comparing initial sampling methods on integer space
Holger Nahrstaedt 2020 Sigurd Carlsen October 2019
.. currentmodule:: skopt
When doing Bayesian optimization we often want to reserve some of the
early part of the optimization for pure exploration. By default the
optimizer suggests purely random samples for the first n_initial_points
(10 by default). The downside to this is that there is no guarantee that
these samples are spread out evenly across all the dimensions.
Sampling methods such as Latin hypercube, Sobol', Halton and Hammersly
take advantage of the fact that we know beforehand how many random
points we want to sample. Then these points can be "spread out" in
such a way that each dimension is explored.
See also the example on a real space
sphx_glr_auto_examples_initial_sampling_method.py
End of explanation
x = space.rvs(n_samples)
plot_searchspace(x, "Random samples")
pdist_data = []
x_label = []
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("random")
Explanation: Random sampling
End of explanation
sobol = Sobol()
x = sobol.generate(space.dimensions, n_samples)
plot_searchspace(x, "Sobol'")
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("sobol'")
Explanation: Sobol'
End of explanation
lhs = Lhs(lhs_type="classic", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'classic LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("lhs")
Explanation: Classic latin hypercube sampling
End of explanation
lhs = Lhs(lhs_type="centered", criterion=None)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'centered LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("center")
Explanation: Centered latin hypercube sampling
End of explanation
lhs = Lhs(criterion="maximin", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'maximin LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("maximin")
Explanation: Maximin optimized hypercube sampling
End of explanation
lhs = Lhs(criterion="correlation", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'correlation LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("corr")
Explanation: Correlation optimized hypercube sampling
End of explanation
lhs = Lhs(criterion="ratio", iterations=10000)
x = lhs.generate(space.dimensions, n_samples)
plot_searchspace(x, 'ratio LHS')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("ratio")
Explanation: Ratio optimized hypercube sampling
End of explanation
halton = Halton()
x = halton.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Halton')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("halton")
Explanation: Halton sampling
End of explanation
hammersly = Hammersly()
x = hammersly.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Hammersly')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("hammersly")
Explanation: Hammersly sampling
End of explanation
grid = Grid(border="include", use_full_layout=False)
x = grid.generate(space.dimensions, n_samples)
plot_searchspace(x, 'Grid')
print("empty fields: %d" % (36 - np.size(np.unique(x, axis=0), 0)))
pdist_data.append(pdist(x).flatten())
x_label.append("grid")
Explanation: Grid sampling
End of explanation
fig, ax = plt.subplots()
ax.boxplot(pdist_data)
plt.grid(True)
plt.ylabel("pdist")
_ = ax.set_ylim(0, 6)
_ = ax.set_xticklabels(x_label, rotation=45, fontsize=8)
Explanation: Pdist boxplot of all methods
This boxplot shows the distance between all generated points using
Euclidean distance. The higher the value, the better the sampling method.
It can be seen that random has the worst performance
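A compact numeric summary of the same comparison (a sketch using the pdist_data and x_label lists built above):
for label, d in zip(x_label, pdist_data):
    print("{:>10s}: min pdist = {:.3f}, mean pdist = {:.3f}".format(label, d.min(), d.mean()))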
End of explanation |
4,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Period - Magnitude Relation in Cepheid Stars
Cepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).
A lot of monitoring data - repeated imaging and subsequent "photometry" of the star - can provide a measurement of the absolute magnitude (if we know the distance to its host galaxy) and the period of the oscillation.
Let's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).
Our goal is to infer the parameters of a simple relationship between Cepheid period and, in the first instance, apparent magnitude.
Step1: A Look at Each Host Galaxy's Cepheids
Let's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.
Step2: OK, now we are all set up! Let's plot one of the datasets.
Step3: Q
Step4: Q
Step5: Now, let's set up a suitable parameter grid and compute the posterior PDF!
Step6: Now, plot, with confidence contours
Step7: Are these inferred parameters sensible?
Let's read off a plausible (a,b) pair and overlay the model period-magnitude relation on the data.
Step8: OK, this looks good! Later in the course we will do some more extensive model checking.
Summarizing our Inferences
Let's compute the 1D marginalized posterior PDFs for $a$ and for $b$, and report the median and "68% credible interval" (defined as the region of 1D parameter space enclosing 68% of the posterior probability). | Python Code:
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 8.0)
Explanation: A Period - Magnitude Relation in Cepheid Stars
Cepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).
A lot of monitoring data - repeated imaging and subsequent "photometry" of the star - can provide a measurement of the absolute magnitude (if we know the distance to its host galaxy) and the period of the oscillation.
Let's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).
Our goal is to infer the parameters of a simple relationship between Cepheid period and, in the first instance, apparent magnitude.
End of explanation
# First, we need to know what's in the data file.
!head -15 R11ceph.dat
class Cepheids(object):
def __init__(self,filename):
# Read in the data and store it in this master array:
self.data = np.loadtxt(filename)
self.hosts = self.data[:,1].astype('int').astype('str')
# We'll need the plotting setup to be the same each time we make a plot:
colornames = ['red','orange','yellow','green','cyan','blue','violet','magenta','gray']
self.colors = dict(zip(self.list_hosts(), colornames))
self.xlimits = np.array([0.3,2.3])
self.ylimits = np.array([30.0,17.0])
return
def list_hosts(self):
# The list of (9) unique galaxy host names:
return np.unique(self.hosts)
def select(self,ID):
# Pull out one galaxy's data from the master array:
index = (self.hosts == str(ID))
self.mobs = self.data[index,2]
self.merr = self.data[index,3]
self.logP = np.log10(self.data[index,4])
return
def plot(self,X):
# Plot all the points in the dataset for host galaxy X.
ID = str(X)
self.select(ID)
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(self.logP, self.mobs, yerr=self.merr, fmt='.', ms=7, lw=1, color=self.colors[ID], label='NGC'+ID)
plt.xlabel('$\\log_{10} P / {\\rm days}$',fontsize=20)
plt.ylabel('${\\rm magnitude (AB)}$',fontsize=20)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
plt.title('Cepheid Period-Luminosity (Riess et al 2011)',fontsize=20)
return
def overlay_straight_line_with(self,a=0.0,b=24.0):
# Overlay a straight line with gradient a and intercept b.
x = self.xlimits
y = a*x + b
plt.plot(x, y, 'k-', alpha=0.5, lw=2)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
return
def add_legend(self):
plt.legend(loc='upper left')
return
data = Cepheids('R11ceph.dat')
print(data.colors)
Explanation: A Look at Each Host Galaxy's Cepheids
Let's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.
End of explanation
data.plot(4258)
# for ID in data.list_hosts():
# data.plot(ID)
data.overlay_straight_line_with(a=-2.0,b=24.0)
data.add_legend()
Explanation: OK, now we are all set up! Let's plot one of the datasets.
End of explanation
# import cepheids_pgm
# cepheids_pgm.simple()
from IPython.display import Image
Image(filename="cepheids_pgm.png")
Explanation: Q: Is the Cepheid Period-Luminosity relation likely to be well-modeled by a power law?
Is it easy to find straight lines that "fit" all the data from each host? And do we get the same "fit" for each host?
Inferring the Period-Magnitude Relation
Let's try inferring the parameters $a$ and $b$ of the following linear relation:
$m = a\;\log_{10} P + b$
We have data consisting of observed magnitudes with quoted uncertainties, of the form
$m^{\rm obs} = 24.51 \pm 0.31$ at $\log_{10} P = \log_{10} (13.0/{\rm days})$
Let's draw a PGM for this, imagining our way through what we would do to generate a mock dataset like the one we have.
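As an illustrative aside (not in the original notebook), one way to generate a single mock dataset under this model uses the data object defined earlier; the values of a_true and b_true below are arbitrary choices.
a_true, b_true = -3.0, 26.3
data.select(4258)
m_mock = a_true * data.logP + b_true + data.merr * np.random.randn(len(data.logP))
print(m_mock[:5])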
End of explanation
def log_likelihood(logP,mobs,merr,a,b):
return -0.5*np.sum((mobs - a*logP -b)**2/(merr**2))
def log_prior(a,b):
amin,amax = -10.0,10.0
bmin,bmax = 10.0,30.0
if (a > amin)*(a < amax)*(b > bmin)*(b < bmax):
logp = np.log(1.0/(amax-amin)) + np.log(1.0/(bmax-bmin))
else:
logp = -np.inf
return logp
def log_posterior(logP,mobs,merr,a,b):
return log_likelihood(logP,mobs,merr,a,b) + log_prior(a,b)
Explanation: Q: What are reasonable assumptions about the sampling distribution for the $k^{\rm th}$ datapoint, ${\rm Pr}(m^{\rm obs}_k|m_k,H)$?
We were given points ($m^{\rm obs}_k$) with error bars ($\sigma_k$), which suggests a Gaussian sampling distribution (as was suggested in Session 1):
${\rm Pr}(m^{\rm obs}_k|m_k,\sigma_k,H) = \frac{1}{Z} \exp{-\frac{(m^{\rm obs}_k - m_k)^2}{2\sigma_k^2}}$
Then, we might suppose that the measurements of each Cepheid start are independent of each other, so that we can define predicted and observed data vectors $m$ and $m^{\rm obs}$ (plus a corresponding observational uncertainty vector $\sigma$) via:
${\rm Pr}(m^{\rm obs}|m,\sigma,H) = \prod_k {\rm Pr}(m^{\rm obs}_k|m_k,\sigma_k,H)$
Q: What is the conditional PDF ${\rm Pr}(m_k|a,b,\log{P_k},H)$?
Our relationship between the intrinsic magnitude and the log period is linear and deterministic, indicating the following delta-function PDF:
${\rm Pr}(m_k|a,b,\log{P_k},H) = \delta(m_k - a\log{P_k} - b)$
Q: What is the resulting joint likelihood, ${\rm Pr}(m^{\rm obs}|a,b,H)$?
The factorisation of the joint PDF for everything inside the plate that is illustrated by the PGM is:
${\rm Pr}(m^{\rm obs}|m,\sigma,H)\;{\rm Pr}(m|a,b,H) = \prod_k {\rm Pr}(m^{\rm obs}_k|m_k,\sigma_k,H)\;\delta(m_k - a\log{P_k} - b)$
The intrinsic magnitudes of each Cepheid ($m$) are not interesting, and so we marginalize them out:
${\rm Pr}(m^{\rm obs}|a,b,H) = \int {\rm Pr}(m^{\rm obs}|m,\sigma,H)\;{\rm Pr}(m|a,b,H)\; dm$
so that ${\rm Pr}(m^{\rm obs}|a,b,H) = \prod_k {\rm Pr}(m^{\rm obs}_k|[a\log{P_k} + b],\sigma,H)$
Q: What is the log likelihood?
$\log {\rm Pr}(m^{\rm obs}|a,b,H) = \sum_k \log {\rm Pr}(m^{\rm obs}_k|[a\log{P_k} + b],\sigma,H)$
which, substituting in our Gaussian form, gives us:
$\log {\rm Pr}(m^{\rm obs}|a,b,H) = {\rm constant} - 0.5 \sum_k \frac{(m^{\rm obs}_k - a\log{P_k} - b)^2}{\sigma_k^2}$
This sum is often called $\chi^2$ ("chi-squared"), and you may have seen it before. It's an effective "misfit" statistic, quantifying the difference between observed and predicted data - and under the assumptions outlined here, it's twice the log likelihood (up to a constant).
Q: What could be reasonable assumptions for the prior ${\rm Pr}(a,b|H)$?
For now, we can (continue to) assume a uniform distribution for each of $a$ and $b$ - in the homework, you can investigate some alternatives.
${\rm Pr}(a|H) = \frac{1.0}{a_{\rm max} - a_{\rm min}}\;\;{\rm for}\;\; a_{\rm min} < a < a_{\rm max}$
${\rm Pr}(b|H) = \frac{1.0}{b_{\rm max} - b_{\rm min}}\;\;{\rm for}\;\; b_{\rm min} < b < b_{\rm max}$
We should now be able to code up functions for the log likelihood, log prior and log posterior, such that we can evaluate them on a 2D parameter grid. Let's fill them in:
End of explanation
# Select a Cepheid dataset:
data.select(4258)
# Set up parameter grids:
npix = 100
amin,amax = -4.0,-2.0
bmin,bmax = 25.0,27.0
agrid = np.linspace(amin,amax,npix)
bgrid = np.linspace(bmin,bmax,npix)
logprob = np.zeros([npix,npix])
# Loop over parameters, computing unnormlized log posterior PDF:
for i,a in enumerate(agrid):
for j,b in enumerate(bgrid):
logprob[j,i] = log_posterior(data.logP,data.mobs,data.merr,a,b)
# Normalize and exponentiate to get posterior density:
Z = np.max(logprob)
prob = np.exp(logprob - Z)
norm = np.sum(prob)
prob /= norm
Explanation: Now, let's set up a suitable parameter grid and compute the posterior PDF!
End of explanation
sorted = np.sort(prob.flatten())
C = sorted.cumsum()
# Find the pixel values that lie at the levels that contain
# 68% and 95% of the probability:
lvl68 = np.min(sorted[C > (1.0 - 0.68)])
lvl95 = np.min(sorted[C > (1.0 - 0.95)])
plt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[amin,amax,bmin,bmax])
plt.contour(prob,[lvl68,lvl95],colors='black',extent=[amin,amax,bmin,bmax])
plt.grid()
plt.xlabel('slope a')
plt.ylabel('intercept b / AB magnitudes')
Explanation: Now, plot, with confidence contours:
End of explanation
data.plot(4258)
data.overlay_straight_line_with(a=-3.0,b=26.3)
data.add_legend()
Explanation: Are these inferred parameters sensible?
Let's read off a plausible (a,b) pair and overlay the model period-magnitude relation on the data.
End of explanation
prob_a_given_data = np.sum(prob,axis=0) # Approximate the integral as a sum
prob_b_given_data = np.sum(prob,axis=1) # Approximate the integral as a sum
print(prob_a_given_data.shape, np.sum(prob_a_given_data))
# Plot 1D distributions:
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].plot(agrid, prob_a_given_data)
ax[0].set_title('${\\rm Pr}(a|d)$')
ax[0].set_xlabel('slope $a$')
ax[0].set_ylabel('Posterior probability density')
right = ax[1].plot(bgrid, prob_b_given_data)
ax[1].set_title('${\\rm Pr}(b|d)$')
ax[1].set_xlabel('intercept $b$ / AB magnitudes')
ax[1].set_ylabel('Posterior probability density')
# Compress each PDF into a median and 68% credible interval, and report:
def compress_1D_pdf(x,pr,ci=68,dp=1):
# Interpret credible interval request:
low = (1.0 - ci/100.0)/2.0 # 0.16 for ci=68
high = 1.0 - low # 0.84 for ci=68
# Find cumulative distribution and compute percentiles:
cumulant = pr.cumsum()
pctlow = x[cumulant>low].min()
median = x[cumulant>0.50].min()
pcthigh = x[cumulant>high].min()
# Convert to error bars, and format a string:
errplus = np.abs(pcthigh - median)
errminus = np.abs(median - pctlow)
report = "$ "+str(round(median,dp))+"^{+"+str(round(errplus,dp))+"}_{-"+str(round(errminus,dp))+"} $"
return report
print("a = ",compress_1D_pdf(agrid,prob_a_given_data,ci=68,dp=2))
print("b = ",compress_1D_pdf(bgrid,prob_b_given_data,ci=68,dp=2))
Explanation: OK, this looks good! Later in the course we will do some more extensive model checking.
Summarizing our Inferences
Let's compute the 1D marginalized posterior PDFs for $a$ and for $b$, and report the median and "68% credible interval" (defined as the region of 1D parameter space enclosing 68% of the posterior probability).
End of explanation |
4,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Collection
Step1: Ordinal Genres
Below, we make the genres ordinal to fit in the random forest classifiers. We add a new column to our dataframe to do so, write a function to populate it, and run it across the dataframe.
Step2: We add in some boolean genre classifiers to make our analysis more fine-grained. Rather than saying "we predict this video is country with 50% confidence", we could say "we predict this video is not edm with 90% confidence" and so on.
Step3: Test and Train Sets
We create our training and test sets by splitting all_genres by genre, and making 10 of each genre train and 10 test. We aggregate by genre to make our full train and full test sets, each containing 50 records of various genres.
Step4: Generating Random Forest - Viewer Statistics
We start generating our random forests, and output a relative accuracy and a confusion matrix. In this first one, we simply factor in non-color variables (rating, likes, dislikes, length and viewcount), and run it across all records to predict an ordinal genre value.
Step5: As shown above, this method yields relatively poor results. This is because there's no distinct clusters being created by our random forest, and simple viewer statistics tell us nothing about what kind of video we're watching. However, we see that country, rap and pop are initially somewhat distinct (diagonal is the highest value), and rock and edm are getting mistaken for one another. Let's see if we can't make something of this.
Random Forest - Only Color Statistics
Below, we do the same random forest as above, but going strictly off of average frame color for the video.
We found the most commonly appearing color in each frame and called it the 'frame mode'. We then took all of the frame modes and found the 10 most common of them. Those became the 'color data' we use to analyze videos.
Step6: This actually yields worse results than just the viewer statistics, because the color of a video by itself does not determine the genre. If rappers only had red in their videos and rockers only had black this might be somewhat accurate, but that's just not the case. But, what if we pair these findings with our initial viewer statistics?
Random Forest - All Features
Step7: Singling Out Pop and Rap
Scores are expectedly low. It seems as if we're trying to make the classifier do way too much work, and are giving it very mediocre data to go off of. Recall that we're actually trying to determine WHICH genre a video is by the above code, not whether or not a video is of ONE specific genre. This brings back the binary classifiers that we created above, let's put those to use to see if we can improve these scores.
We try pop and rap first, since they seem to be the most distinct by what we've gathered above.
Step8: What we're seeing above is a confusion matrix that, based on our training data, predicts whether or not a video in the test set is a pop video or not. In the "predicted" row, 0 means it predicts it's not a pop video, and that the 1 is. Likewise with the actual, 0 shows that the video actually wasn't a pop video, and the 1 shows that it was.
The confusion matrix above is our first effort at utilizing these binary classifiers. Most of our videos aren't pop videos, and the model did a good job of picking out those that aren't pop. However, we could use some improvement in the realm of "false negatives", where the model classified a video as not pop when it actually was.
We do these tests 50 times for sake of average score.
Rather than hard-coding each time we wanted to run something for average, we wrote a function that does it for us. All we have to do is pass in the boolean classifier in quotes ("is_rock", etc.), and the number of iterations that we want. Results are displayed below.
Step9: The following creates several files that describe our classifiers. Our website will later load these pickled classifiers to make predictions.
Step10: We ran the above test with all genres, and as shown in above analysis, our country and edm typically have very low accuracy. We've seen above that edm and rock videos are getting mixed up with one another, so we assume that something is characteristic of these 2 genres that's not of everything else. We take out the edm values from our training and test datasets, hoping to improve accuracy.
Step11: So, what does this tell us? Based on our training data, we have the best chance of accurately classifying something as pop or not pop (under these conditions).
We want to find out which 2 are the most distinct, so we can build our model based on that classification.
Step12: Rock and EDM have surprisingly distinct classifiers. We should dive into the videos and see what this means.
Step13: Selecting Most Valuable Features per Genre - Rock | Python Code:
import pandas as pd
from os import path
from sklearn.ensemble import RandomForestClassifier
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
import sklearn
# Edit path if need be (shouldn't need to b/c we all have the same folder structure)
CSV_PATH_1 = '../Videos/all_data'
CSV_PATH_2 = '../Videos2/all_data2'
FILE_EXTENSION = '_all.csv'
GENRES = ['country', 'edm', 'pop', 'rap', 'rock']
# Containers for the data frames
genre_dfs = {}
all_genres = None
# Read in the 5 genre's of CV's
for genre in GENRES:
genre_csv_path_1 = path.join(CSV_PATH_1, genre) + FILE_EXTENSION
genre_csv_path_2 = path.join(CSV_PATH_2, genre) + FILE_EXTENSION
df_1 = pd.read_csv(genre_csv_path_1)
df_2 = pd.read_csv(genre_csv_path_2)
df_1 = df_1.drop('Unnamed: 0',1)
df_2 = df_2.drop('Unnamed: 0',1)
df_combined = pd.concat([df_1,df_2],ignore_index=True)
genre_dfs[genre] = df_combined
all_genres = pd.concat(genre_dfs.values())
all_genres.head()
# genre_dfs is now a dictionary that contains the 5 different data frames
# all_genres is a dataframe that contains all of the data
Explanation: Data Collection
End of explanation
def genre_to_ordinal(genre_in):
if(genre_in == "country"):
return 0
elif(genre_in == "pop"):
return 1
elif(genre_in == "rock"):
return 2
elif(genre_in == "edm"):
return 3
elif(genre_in == "rap"):
return 4
else:
return genre_in
all_genres['genre_ordinal'] = all_genres.genre.apply(genre_to_ordinal)
Explanation: Ordinal Genres
Below, we make the genres ordinal to fit in the random forest classifiers. We add a new column to our dataframe to do so, write a function to populate it, and run it across the dataframe.
End of explanation
# Adding is_country flag
def is_country(genre_in):
if(genre_in == "country"):
return 1
else:
return 0
all_genres['is_country'] = all_genres.genre.apply(is_country)
# Adding is_country flag
def is_rock(genre_in):
if(genre_in == "rock"):
return 1
else:
return 0
all_genres['is_rock'] = all_genres.genre.apply(is_rock)
# Adding is_edm flag
def is_edm(genre_in):
if(genre_in == "edm"):
return 1
else:
return 0
all_genres['is_edm'] = all_genres.genre.apply(is_edm)
# Adding is_rap flag
def is_rap(genre_in):
if(genre_in == "rap"):
return 1
else:
return 0
all_genres['is_rap'] = all_genres.genre.apply(is_rap)
# Adding is_country flag
def is_pop(genre_in):
if(genre_in == "pop"):
return 1
else:
return 0
all_genres['is_pop'] = all_genres.genre.apply(is_pop)
Explanation: We add in some boolean genre classifiers to make our analysis more fine-grained. Rather than saying "we predict this video is country with 50% confidence", we could say "we predict this video is not edm with 90% confidence" and so on.
End of explanation
# Subset all_genres to group by individual genres
country_records = all_genres[all_genres["genre"] == "country"]
rock_records = all_genres[all_genres["genre"] == "rock"]
pop_records = all_genres[all_genres["genre"] == "pop"]
edm_records = all_genres[all_genres["genre"] == "edm"]
rap_records = all_genres[all_genres["genre"] == "rap"]
# From the subsets above, create train and test sets from each
country_train = country_records.head(len(country_records) / 2)
country_test = country_records.tail(len(country_records) / 2)
rock_train = rock_records.head(len(rock_records) / 2)
rock_test = rock_records.tail(len(rock_records) / 2)
pop_train = pop_records.head(len(pop_records) / 2)
pop_test = pop_records.tail(len(pop_records) / 2)
edm_train = edm_records.head(len(edm_records) / 2)
edm_test = edm_records.tail(len(edm_records) / 2)
rap_train = rap_records.head(len(rap_records) / 2)
rap_test = rap_records.tail(len(rap_records) / 2)
# Create big training and big test set for analysis
training_set = pd.concat([country_train,rock_train,pop_train,edm_train,rap_train])
test_set = pd.concat([country_test,rock_test,pop_test,edm_test,rap_test])
training_set = training_set.fillna(0)
test_set = test_set.fillna(0)
print "Training Records:\t" , len(training_set)
print "Test Records:\t\t" , len(test_set)
# training_set.head()
Explanation: Test and Train Sets
We create our training and test sets by splitting all_genres by genre, and making 10 of each genre train and 10 test. We aggregate by genre to make our full train and full test sets, each containing 50 records of various genres.
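An optional, illustrative sanity check (not in the original notebook) to confirm the per-genre balance of the split:
print training_set.genre.value_counts()
print test_set.genre.value_counts()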
End of explanation
# Predicting based solely on non-color features, using RF
clf = RandomForestClassifier(n_estimators=11)
meta_data_features = ['rating', 'likes','dislikes','length','viewcount']
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[meta_data_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[meta_data_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[meta_data_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: Generating Random Forest - Viewer Statistics
We start generating our random forests, and output a relative accuracy and a confusion matrix. In this first one, we simply factor in non-color variables (rating, likes, dislikes, length and viewcount), and run it across all records to predict an ordinal genre value.
End of explanation
def gen_new_headers(old_headers):
headers = ['colors_' + str(x+1) + '_' for x in range(10)]
h = []
for x in headers:
h.append(x + 'red')
h.append(x + 'blue')
h.append(x + 'green')
return old_headers + h + ['genre']
clf = RandomForestClassifier(n_estimators=11)
color_features = gen_new_headers([])[:-1]
# Predicting based solely on colors
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[color_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[color_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[color_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: As shown above, this method yields relatively poor results. This is because there's no distinct clusters being created by our random forest, and simple viewer statistics tell us nothing about what kind of video we're watching. However, we see that country, rap and pop are initially somewhat distinct (diagonal is the highest value), and rock and edm are getting mistaken for one another. Let's see if we can't make something of this.
Random Forest - Only Color Statistics
Below, we do the same random forest as above, but going strictly off of average frame color for the video.
We found the most commonly appearing color in each frame and called it the 'frame mode'. We then took all of the frame modes and found the 10 most common of them. Those became the 'color data' we use to analyze videos.
End of explanation
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['genre_ordinal'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['genre_ordinal'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.genre_ordinal, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: This actually yields worse results than just the viewer statistics, because the color of a video by itself does not determine the genre. If rappers only had red in their videos and rockers only had black this might be somewhat accurate, but that's just not the case. But, what if we pair these findings with our initial viewer statistics?
Random Forest - All Features
End of explanation
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
print all_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['is_pop'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['is_pop'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.is_pop, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
clf = RandomForestClassifier(n_estimators=11)
all_features = meta_data_features + color_features
# Predicting based on colors and non-color features
y, _ = pd.factorize(training_set['is_rap'])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set['is_rap'])
print clf.score(test_set[all_features],z)
pd.crosstab(test_set.is_rap, clf.predict(test_set[all_features]),rownames=["Actual"], colnames=["Predicted"])
Explanation: Singling Out Pop and Rap
Scores are expectedly low. It seems as if we're trying to make the classifier do way too much work, and are giving it very mediocre data to go off of. Recall that we're actually trying to determine WHICH genre a video is by the above code, not whether or not a video is of ONE specific genre. This brings back the binary classifiers that we created above, let's put those to use to see if we can improve these scores.
We try pop and rap first, since they seem to be the most distinct by what we've gathered above.
End of explanation
def multi_RF_averages(is_genre,num_iterations):
clf = RandomForestClassifier(n_estimators=11)
loop_indices = range(0,num_iterations)
cumsum = 0
for i in loop_indices:
y, _ = pd.factorize(training_set[is_genre])
clf = clf.fit(training_set[all_features], y)
z, _ = pd.factorize(test_set[is_genre])
cumsum = cumsum + clf.score(test_set[all_features],z)
print "Average Score for",len(loop_indices),is_genre,"iterations:", cumsum/len(loop_indices)
return clf
pop_class = multi_RF_averages("is_pop",50)
rap_class = multi_RF_averages("is_rap",50)
rock_class = multi_RF_averages("is_rock",50)
edm_class = multi_RF_averages("is_edm",50)
country_class = multi_RF_averages("is_country",50)
Explanation: What we're seeing above is a confusion matrix that, based on our training data, predicts whether or not a video in the test set is a pop video or not. In the "predicted" row, 0 means it predicts it's not a pop video, and that the 1 is. Likewise with the actual, 0 shows that the video actually wasn't a pop video, and the 1 shows that it was.
The confusion matrix above is our first effort at utilizing these binary classifiers. Most of our videos aren't pop videos, and the model did a good job of picking out those that aren't pop. However, we could use some improvement in the realm of "false negatives", where the model classified a video as not pop when it actually was.
We do these tests 50 times for sake of average score.
Rather than hard-coding each time we wanted to run something for average, we wrote a function that does it for us. All we have to do is pass in the boolean classifier in quotes ("is_rock", etc.), and the number of iterations that we want. Results are displayed below.
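As a quick, hedged follow-up to the false-negative concern above (this sketch is not part of the original analysis; it assumes pop_class returned above is the fitted is_pop classifier and that its 0/1 labels line up with is_pop, which holds here because the first training rows are non-pop):
from sklearn.metrics import recall_score
preds = pop_class.predict(test_set[all_features])
print "recall for is_pop:", recall_score(test_set.is_pop, preds)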
End of explanation
from sklearn.externals import joblib
# only use these to generate pickle files for website
# joblib.dump(pop_class, 'classifiers/pop_class.pkl')
# joblib.dump(rap_class, 'classifiers/rap_class.pkl')
# joblib.dump(rock_class, 'classifiers/rock_class.pkl')
# joblib.dump(edm_class, 'classifiers/edm_class.pkl')
# joblib.dump(country_class, 'classifiers/country_class.pkl')
Explanation: The following creates several files that describe our classifiers. Our website will later load these pickled classifiers to make predictions.
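For completeness, the website side would presumably restore a model with joblib.load; a hypothetical example mirroring the dump calls above:
# pop_class = joblib.load('classifiers/pop_class.pkl')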
End of explanation
# Removing EDM for better analysis - makes is_pop and is_rap much more accurate
training_set = pd.concat([country_train,rock_train,pop_train,rap_train])
test_set = pd.concat([country_test,rock_test,pop_test,rap_test])
multi_RF_averages("is_pop",50)
multi_RF_averages("is_rap",50)
multi_RF_averages("is_rock",50)
multi_RF_averages("is_edm",50)
multi_RF_averages("is_country",50)
Explanation: We ran the above test with all genres, and as shown in above analysis, our country and edm typically have very low accuracy. We've seen above that edm and rock videos are getting mixed up with one another, so we assume that something is characteristic of these 2 genres that's not of everything else. We take out the edm values from our training and test datasets, hoping to improve accuracy.
End of explanation
training_set = pd.concat([country_train,rock_train,edm_train,rap_train,pop_train])
test_set = pd.concat([rock_test])
multi_RF_averages("is_rock",50)
test_set = pd.concat([rap_test])
multi_RF_averages("is_rap",50)
test_set = pd.concat([country_test])
multi_RF_averages("is_country",50)
test_set = pd.concat([pop_test])
multi_RF_averages("is_pop",50)
test_set = pd.concat([edm_test])
multi_RF_averages("is_edm",50)
Explanation: So, what does this tell us? Based on our training data, we have the best chance of accurately classifying something as pop or not pop (under these conditions).
We want to find out which 2 are the most distinct, so we can build our model based on that classification.
End of explanation
test_set = pd.concat([edm_test,rock_test])
multi_RF_averages("is_edm",50)
multi_RF_averages("is_rock",50)
Explanation: Rock and EDM have surprisingly distinct classifiers. We should dive into the videos and see what this means.
End of explanation
model = ExtraTreesClassifier()
training_set = pd.concat([country_train,pop_train,rap_train,rock_train,edm_train])
y, _ = pd.factorize(training_set['is_rock'])
model.fit(training_set[all_features], y)
# display the relative importance of each attribute
print model.feature_importances_
df = pd.DataFrame()
df['index'] = all_features
y, _ = pd.factorize(training_set['is_rap'])
model.fit(training_set[all_features], y)
df['rap'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_rock'])
model.fit(training_set[all_features], y)
df['rock'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_country'])
model.fit(training_set[all_features], y)
df['country'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_edm'])
model.fit(training_set[all_features], y)
df['edm'] = model.feature_importances_
y, _ = pd.factorize(training_set['is_pop'])
model.fit(training_set[all_features], y)
df['pop'] = model.feature_importances_
df = df.set_index('index')
df = df.transpose()
df.head()
lol = df.values.tolist()
cols = []
for x in df.columns:
cols.append(x)
import plotly.offline as py # a little wordplay
import plotly.graph_objs as go
py.init_notebook_mode()
title = 'Feature Importance By Genre'
labels = [ ]
mode_size = [8, 8, 12, 8]
line_size = [2, 2, 4, 2]
x_data = cols
y_data = df.values.tolist()
traces = []
for i in range(0, 4):
traces.append(go.Scatter(
x=x_data,
y=y_data[i],
mode='lines',
connectgaps=True,
))
layout = go.Layout(
yaxis=dict(
showgrid=False,
zeroline=False,
showline=False,
showticklabels=False,
),
autosize=False,
margin=dict(
autoexpand=True,
l=100,
r=20,
t=110,
),
showlegend=False,
)
annotations = []
# Adding labels
for y_trace, label in zip(y_data, labels):
# labeling the left_side of the plot
annotations.append(dict(xref='paper', x=0.05, y=y_trace[0],
xanchor='right', yanchor='middle',
text=label + ' {}%'.format(y_trace[0]),
font=dict(family='Arial',
size=16,
),
showarrow=False))
# labeling the right_side of the plot
annotations.append(dict(xref='paper', x=0.95, y=y_trace[11],
xanchor='left', yanchor='middle',
text='{}%'.format(y_trace[11]),
font=dict(family='Arial',
size=16,
),
showarrow=False))
# Title
annotations.append(dict(xref='paper', yref='paper', x=0.0, y=1.05,
xanchor='left', yanchor='bottom',
text='Feature Importance By Genre',
font=dict(family='Arial',
size=30,
),
showarrow=False))
# Source
# annotations.append(dict(xref='paper', yref='paper', x=0.5, y=-0.1,
# xanchor='center', yanchor='top',
# text='Source: PewResearch Center & ' +
# 'Storytelling with data',
# font=dict(family='Arial',
# size=12,
# ),
# showarrow=False))
layout['annotations'] = annotations
fig = go.Figure(data=traces, layout=layout)
py.iplot(fig, filename='news-source')
import seaborn as sns
sns.set_style("whitegrid")
ax = sns.pointplot(x="likes", y="rating",data=df)
sns.plt.show()
import seaborn as sns
sns.set_style("whitegrid")
tips = sns.load_dataset("tips")
print tips
ax = sns.pointplot(x="time", y="total_bill", data=tips)
sns.plt.show()
Explanation: Selecting Most Valuable Features per Genre - Rock
End of explanation |
4,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Question 1
Step1: 1.1
b. Find the average of a list of numbers using a for loop
Step2: 1.1
c. Write a program that prints string in reverse. Print character by character
Input
Step3: 1.2
Write a Python program to count the number of even and odd numbers from a series of numbers.
<br>Hint
Step4: 1.3
Check if given list of strings have ECORI site motif and print value that doesn't contain the motif until two strings with the motif are found
motif = "GAATTC" (5' for ECORI restriction site)
Output
Step5: 1.4
Write a Python program that prints all the numbers in range from 0 to 10 except 5 and 10.
<br> Hint
Step6: 1.5 (Multi-Part)
Next series of tasks about lists and list manipulations
Step7: b. Use the print() function to print your list.
Step8: c. Use the print() function to print out the middle element.
Step9: d. Now replace the middle element with a different item, your favorite song, or song bird.
Step10: e. Use the same print statement from b. to print your new list. Check out the differences.
Step11: f. Add a new element to the end. Read about append().
Step12: g. Add a new element to the beginning. Read about insert().
Step13: h. Add a new element somewhere other than the beginning or the end.
Step14: 1.6
Write a script that splits a string into a list
Step15: Extra Pratice
Step16: Question 2
Step17: a. look up the motif for a particular SacII enzyme
Step18: b. add below two enzymes and their motifs to dictionary
<br>
KasI
Step19: 2.2
Suppose dna is a string variable that contains only 'A','C','G' or 'T' characters.
Write code to find and print the character and frequency (max_freq) of the most frequent character in string dna?
<br>
Step20: 2.3
If you create a set using a DNA sequence, what will you get back? Try it with this sequence | Python Code:
maxNumber = 0
numberList = [15,4,26,1,9,21,3,6,13]
for each in numberList:
if each>maxNumber:
maxNumber = each
print("The largest number in the list is {0}".format(maxNumber))
Explanation: Question 1:
For basic operation using list, tuple and dictionaries
1.1
a. Finds the largest of a list of numbers.
<br>Hint: Set up the list as the first line in your program.
End of explanation
runningTotal = 0
listOfNumbers = [4,7,9,1,8,6]
for each in listOfNumbers:
runningTotal = runningTotal + each
# each time round the loop add the next item to the running total
average = runningTotal/len(listOfNumbers)
# the average is the runningTotal at the end / how many numbers
print(listOfNumbers)
print("The average of these numbers is {0:.2f}".format(average))
Explanation: 1.1
b. Find the average of a list of numbers using a for loop
End of explanation
word = "Python"
#print(len(word))
for char in range(len(word) - 1, -1, -1): # range(start=5, stop=-1, step=-1)
print(word[char])
Explanation: 1.1
c. Write a program that prints string in reverse. Print character by character
Input: Python
Expected Output:
n
o
h
t
y
P
Hint: can use range, len functions<br>
P y t h o n
0 1 2 3 4 5
-6 -5 -4 -3 -2 -1
End of explanation
numbers = (1, 2, 3, 4, 5, 6, 7, 8, 9) # Declaring the tuple
count_odd = 0
count_even = 0
#type your code here
for x in numbers:
if not x % 2:
count_even+=1
else:
count_odd+=1
print("Number of even numbers :",count_even)
print("Number of odd numbers :",count_odd)
Explanation: 1.2
Write a Python program to count the number of even and odd numbers from a series of numbers.
<br>Hint: review of for loop, if statements
End of explanation
motif = "GAATTC"
count = 0
dna_strings = ['AGTGAACCGTCAGATCCGCTAGCGCGAATTC','GGAGACCGACACCCTCCTGCTATGGGTGCTGCTGCTC','TGGGTGCCCGGCAGCACCGGCGACGCACCGGTCGC',
'CACCATGGTGAGCAAGGGCGAGGAGAATAACATGGCC','ATCATCAAGGAGTTCATGCGCTTCAAGAATTC','CATGGAGGGCTCCGTGAACGGCCACGAGTTCGAGA'
,'TCGAGGGCGAGGGCGAGGGCCGCCCCTACGAGGCCTT']
#type your code
for item in dna_strings:
    if(item.find(motif) != -1):
count+=1
if(count==2):
print("Two strings in given list contain the motif")
break;
else:
print(item ,': doesn\'t contain the motif')
Explanation: 1.3
Check if given list of strings have ECORI site motif and print value that doesn't contain the motif until two strings with the motif are found
motif = "GAATTC" (5' for ECORI restriction site)
Output:
<br>AGTGAACCGTCAGATCCGCTAGCGCGAATTC doesn't contain the motif
GGAGACCGACACCCTCCTGCTATGGGTGCTGCTGCTC doesn't contain the motif
TGGGTGCCCGGCAGCACCGGCGACGCACCGGTCGC doesn't contain the motif
CACCATGGTGAGCAAGGGCGAGGAGAATAACATGGCC doesn't contain the motif
Two strings in given list contain the motif
End of explanation
#type your code here
for value in range(11):  # 0 to 10 inclusive, so the skip of 10 below is meaningful
if (value == 5 or value==10):
continue
print(value,end=' ')
print("\n")
Explanation: 1.4
Write a Python program that prints all the numbers in range from 0 to 10 except 5 and 10.
<br> Hint: use continue
End of explanation
my_favorites=['Music', 'Movies', 'Coding', 'Biology', 'Python']
Explanation: 1.5 (Multi-Part)
Next series of tasks about lists and list manipulations:
<br>a. Create a list of 5 of your favorite things.
End of explanation
print(my_favorites)
Explanation: b. Use the print() function to print your list.
End of explanation
print(my_favorites[2])
Explanation: c. Use the print() function to print out the middle element.
End of explanation
my_favorites[2]='European robin'
Explanation: d. Now replace the middle element with a different item, your favorite song, or song bird.
End of explanation
print(my_favorites)
Explanation: e. Use the same print statement from b. to print your new list. Check out the differences.
End of explanation
my_favorites.append('Monkeys')
Explanation: f. Add a new element to the end. Read about append().
End of explanation
my_favorites.insert(0, 'Evolution')
Explanation: g. Add a new element to the beginning. Read about insert().
End of explanation
my_favorites.insert(3, 'Coffee')
Explanation: h. Add a new element somewhere other than the beginning or the end.
End of explanation
#type your code
hominins='sapiens, erectus, neanderthalensis'
print(hominins)
hominin_individuals=hominins.split(', ')
print(hominin_individuals)
hominin_individuals=sorted(hominin_individuals)
print("List: ", hominin_individuals)
hominin_individuals=sorted(hominin_individuals, key=len)
print(hominin_individuals)
Explanation: 1.6
Write a script that splits a string into a list:
Save the string sapiens, erectus, neanderthalensis as a variable.
Print the string.
Split the string into individual words and print the result of the split. (Think about the ', '.)
Store the resulting list in a new variable.
Print the list.
Sort the list alphabetically and print (hint: lookup the function sorted()).
Sort the list by length of each string and print. (The shortest string should be first). Check out documentation of the key argument.`
End of explanation
sequences=['ATGCCCGGCCCGGC','GCGTGCTAGCAATACGATAAACCGG', 'ATATATATCGAT','ATGGGCCC']
seq_lengths=[(seq, len(seq)) for seq in sequences]
print(seq_lengths)
# print each tuple as "length<TAB>sequence", as the task asks
for seq, length in seq_lengths:
    print('{}\t{}'.format(length, seq))
Explanation: Extra Pratice: 1.7
Use list comprehension to generate a list of tuples. The tuples should contain sequences and lengths from the previous problem. Print out the length and the sequence (i.e., "4\tATGC\n").
End of explanation
enzymes = { 'EcoRI':'GAATTC','AvaII':'GGACC', 'BisI':'GCATGCGC' , 'SacII': r'CCGCGG','BamHI': 'GGATCC'}
print(enzymes)
Explanation: Question 2: Dictionaries and Set
2.1
Create a dictionary store DNA restriction enzyme names and their motifs from:
<br>https://www.neb.com/tools-and-resources/selection-charts/alphabetized-list-of-recognition-specificities
<br>eg:
EcoRI = GAATTC
AvaII = GGACC
BisI = GGACC
End of explanation
print(enzymes['SacII'])
Explanation: a. look up the motif for a particular SacII enzyme
End of explanation
enzymes['KasI'] = 'GGCGCC'
enzymes['AscI'] = 'GGCGCGCC'
print(enzymes)
Explanation: b. add below two enzymes and their motifs to dictionary
<br>
KasI: GGCGCC
AscI: GGCGCGCC
EciI: GGCGGA
End of explanation
dna = 'AAATTCGTGACTGTAA'
dna_counts= {'T':dna.count('T'),'C':dna.count('C'),'G':dna.count('G'),'A':dna.count('A')}
print(dna_counts)
max_char = max(dna_counts, key=dna_counts.get)
max_freq = dna_counts[max_char]
print(max_char, max_freq)
Explanation: 2.2
Suppose dna is a string variable that contains only 'A','C','G' or 'T' characters.
Write code to find and print the character and frequency (max_freq) of the most frequent character in string dna?
<br>
End of explanation
DNA='GATGGGATTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGACAGAAACACTTTTCGTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGAC'
DNA_set = set(DNA)
print('DNA_set contains {}'.format(DNA_set))
Explanation: 2.3
If you create a set using a DNA sequence, what will you get back? Try it with this sequence:
GATGGGATTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGACAGAAACACTTTTCGTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGAC
End of explanation |
4,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is to aid in the development of a complete market simulator.
Step1: Let's first create a quantization function
Step2: Let's create an Indicator and extract some values
Step3: Normally, the data to pass to the extractor will be all the data, for one symbol, during a period of some days.
Step4: Another Indicator
Step5: Let's create a function to enumerate states from a vectorial state.
Step6: Let's generate the q_values for the q_levels
Step7: To make it easier to work with states, the "quantize" function was changed. Now it returns the number of interval in which the real value lies, instead of the "approximated, quantized value".
Step8: Let's now test the bidirectional dictionary to store the states and state vectors.
Step9: Let's create a reward function
Step10: Let's run a complete Environment-Agent pair. | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
Explanation: This notebook is to aid in the development of a complete market simulator.
End of explanation
levels = [-13.5, -10.0, -1.0, 2.0, 3.0]
real_value = -6.7
temp_list = levels + [real_value]
temp_list
temp_list.sort()
temp_list
sorted_index = temp_list.index(real_value)
if sorted_index == 0:
q_value = levels[0]
elif sorted_index == len(temp_list)-1:
q_value = levels[-1]
else:
q_value = (temp_list[sorted_index-1] + temp_list[sorted_index+1])/2
q_value
def quantize(real_value, levels):
temp_list = levels + [real_value]
temp_list.sort()
sorted_index = temp_list.index(real_value)
if sorted_index == 0:
q_value = levels[0]
elif sorted_index == len(temp_list)-1:
q_value = levels[-1]
else:
q_value = (temp_list[sorted_index-1] + temp_list[sorted_index+1])/2
return q_value
levels
x = arange(-20,20,0.2)
x_df = pd.DataFrame(x, columns=['real_value'])
x_df
len(x_df.values.tolist())
from functools import partial
# x_df.apply(lambda x:print('{} \n {}'.format(x,'-'*20)), axis=1)
x_df['q_value'] = x_df.apply(lambda x: partial(quantize, levels=levels)(x[0]), axis=1)
x_df.head()
plt.plot(x_df['real_value'], x_df['q_value'])
Explanation: Let's first create a quantization function
End of explanation
data_df = pd.read_pickle('../../data/data_df.pkl')
first_date = data_df.index.get_level_values(0)[0]
first_date
one_input_df = data_df.loc[first_date,:]
one_input_df
Explanation: Let's create an Indicator and extract some values
End of explanation
num_days = 50
end_date = data_df.index.get_level_values(0).unique()[num_days-1]
sym_data = data_df['MSFT'].unstack()
sym_data.head()
batch_data = sym_data[first_date:end_date]
batch_data.shape
from recommender.indicator import Indicator
arange(0,1e4,1)
batch_data.head()
ind1 = Indicator(lambda x: x['Close'].mean(), arange(0,10000,100).tolist(), batch_data)
ind1.extracted_data
ind1.extract(batch_data)
ind1.q_levels
Explanation: Normally, the data to pass to the extractor will be all the data, for one symbol, during a period of some days.
End of explanation
ind2 = Indicator(lambda x: (x['Volume']/x['Close']).max(), arange(0,1e8,1e6).tolist(), batch_data)
ind2.extract(batch_data)
(batch_data['Volume']/batch_data['Close']).max()
ind3 = Indicator(lambda x: x['High'].min(), arange(0,100,1).tolist(), batch_data)
ind3.extract(batch_data)
Explanation: Another Indicator
End of explanation
indicators = [ind1, ind2, ind3]
vect_state = tuple(map(lambda x: x.extract(batch_data), indicators))
vect_state
Explanation: Let's create a function to enumerate states from a vectorial state.
End of explanation
len(ind1.q_levels)
q_values = [ind1.q_levels[0]] + ((np.array(ind1.q_levels[1:]) + np.array(ind1.q_levels[:-1])) / 2).tolist() + [ind1.q_levels[-1]]
q_values[:10]
len(q_values)
Explanation: Let's generate the q_values for the q_levels
End of explanation
len(ind1.q_levels)
import itertools as it
states_list = list(it.product(np.arange(len(ind1.q_levels)), np.arange(len(ind2.q_levels)), np.arange(len(ind3.q_levels))))
len(states_list)
states_list
states_list.index((5,1,13))
indicators = {'ind1': ind1,
'ind2': ind2,
'ind3': ind3}
states = list(it.product(*map(lambda x: arange(len(x.q_levels)), indicators.values())))
states
len(states)
states.index((5,1,13))
Explanation: To make it easier to work with states, the "quantize" function was changed. Now it returns the index of the interval in which the real value lies, instead of the "approximated, quantized value".
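A minimal sketch of that interval-index behaviour (illustrative only; the actual implementation lives in the recommender package used below):
import bisect
def quantize_to_index(real_value, levels):
    # index of the interval that real_value falls into, from 0 to len(levels)
    return bisect.bisect_left(levels, real_value)
quantize_to_index(-6.7, levels)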
End of explanation
states_list = states.copy()
state_vectors = dict(enumerate(states_list))
state_vectors
states = dict(zip(state_vectors.values(), state_vectors.keys()))
states
import random
index = random.randint(0, len(states) - 1)
index
states[state_vectors[index]] == index
state_vectors[states[state_vectors[index]]] == state_vectors[index]
rand_vec = tuple(np.random.randint(0,100,3))
rand_vec
state_vectors[states[rand_vec]] == rand_vec
Explanation: Let's now test the bidirectional dictionary to store the states and state vectors.
End of explanation
from recommender.environment import Environment
from recommender.order import Order
env = Environment(data_df, indicators)
env.state_vectors
len(env.indicators['ind2'].q_levels)
env.states[(1,100,1)]
old_pos_df = env.portfolio.get_positions()
reward, new_state = env.get_consequences([Order(['AAPL',Order.BUY, 100]),
Order(['AAPL',Order.SELL, 45]),
Order(['AAPL', Order.BUY, 10])])
new_pos_df = env.portfolio.get_positions()
old_pos_df
new_pos_df
import recommender.portfolio as port
def reward_value_change(old_pos_df, new_pos_df):
return new_pos_df[port.VALUE].sum() - old_pos_df[port.VALUE].sum()
def reward_cash_change(old_pos_df, new_pos_df):
return new_pos_df.loc[port.CASH, port.VALUE] - old_pos_df.loc[port.CASH, port.VALUE]
reward_value_change(old_pos_df, new_pos_df)
reward_cash_change(old_pos_df, new_pos_df)
Explanation: Let's create a reward function
End of explanation
data_df.shape
from recommender.agent import Agent
NUM_ACTIONS = 3 # BUY, SELL, NOTHING
SYMBOL = 'AAPL'
# NOTE: the original cell was left incomplete; the lines below fill it in with
# assumptions based on objects built earlier in this notebook (ind1/ind2/ind3 and env.states).
indicators = {'ind1': ind1, 'ind2': ind2, 'ind3': ind3}
env = Environment(data_df, indicators=indicators, symbol=SYMBOL)
num_states = len(env.states)   # assumption: one state per enumerated state vector
num_actions = NUM_ACTIONS
agent = Agent(num_states,
              num_actions,
              alpha=0.2,
              gamma=0.9,
              random_actions_rate=0.9,
              random_actions_decrease=0.999,
              dyna_iterations=0,
              verbose=False)
# For reference, the Environment constructor signature used above is:
# Environment(data_df, indicators=None, initial_cap=1000, leverage_limit=3.0,
#             reward_fun=None, symbol='AAPL')
Explanation: Let's run a complete Environment-Agent pair.
End of explanation |
4,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Week9
Step5: 数据集
在训练我们的GAN网络之前, 先介绍一下本次实验可训练GAN的数据集,我们提供了两个数据集来供大家进行尝试数据集.
- MNIST手写体3类数据集,这里为了加快我们的训练速度,我们提供了一个简化版本的只包含数字0,2的2类MNIST数据集,每类各1000张.图片为28*28的单通道灰度图(我们将其resize到32*32),对于GAN而言,我们不需要测试集.我们本次实验主要使用该数据集作为主要的训练数据集.
室内家具数据集.为了加快我们的训练速度,我们将其做了删减处理,仅包含chair等一个类,共500张.图片为32*32的3通道彩色图片.
下面是两个加载数据集的函数.注意我们将所有图片normalize到了[-1,1]之间.
Step7: (无需阅读理解)运行下面2个cell的代码来查看两个数据集中的20张随机真实图片.
Step9: 下面代码实现GAN在一个epoch内的训练过程.
大体而言,GAN的训练过程分为两步,首先将随机噪声z喂给G,生成图片,然后将真实图片和G生成的图片喂给D,然后使用对应的loss函数反向传播优化D.然后再次使用G生成图片,并喂给D,并使用对应的loss函数反向传播优化G.
下面的图片是普通的GAN在G和D上的优化目标
Step10: 当模型训练后,我们需要查看此时G生成的图片效果,下面的visualize_results代码便实现了这块内容.注意,我们生成的图片都在[-1,1],因此,我们需要将图片反向归一化(denorm)到[0,1].
Step11: 万事具备,接下来让我们来尝试这训练一个基本的GAN网络吧.这里实现run_gan函数来调用train以及visualize_results来训练我们的GAN.
Step12: 设置好超参数就可以开始训练!让我们尝试用它来训练2类的mnist数据集
Step13: 训练完后,让我们来看一下G生成的图片效果,可以看到即使是一个简单的GAN在这种简单的数据集上的生成效果还是不错的,虽然仍然存在不少瑕疵,比如说我们可以看到生成的图片上的数字有很多奇怪的雪花等等.
让我们看一下G和D的loss变化曲线(运行下方语句.)
Step14: 作业
Step15: 同样的,我们使用同样的mnist数据集对DCGAN进行训练.
Step16: 可以看到,DCGAN的生成图片质量比起只有线性层的GAN要好不少.接下来,让我们尝试使用家具数据集来训练DCGAN.
Step18: LSGAN
LSGAN(Least Squares GAN)将loss函数改为了 L2损失.G和D的优化目标如下图所示,
作业
Step19: 完成上方代码后,使用所写的L2Loss在mnist数据集上训练DCGAN.
Step21: WGAN
GAN依然存在着训练不稳定,模式崩溃(collapse mode,可以理解为生成的图片多样性极低)的问题(我们的数据集不一定能体现出来).WGAN(Wasserstein GAN)将传统GAN中拟合的JS散度改为Wasserstein距离.WGAN一定程度上解决了GAN训练不稳定以及模式奔溃的问题.
WGAN的判别器的优化目标变为,在满足Lipschitz连续的条件(我们可以限制w不超过某个范围来满足)下,最大化
而它会近似于真实分布与生成分布之间的Wasserstein距离.所以我们D和G的loss函数变为
Step22: 接下来让我们使用写好的run_wgan来跑我们的家具(椅子)数据集,看看效果如何.
Step23: 由WGAN的原理我们知道,D_loss的相反数可以表示生成数据分布与真实分布的Wasserstein距离,其数值越小,表明两个分布越相似,GAN训练得越好.它的值给我们训练GAN提供了一个指标.
运行下方代码观察wgan的loss曲线,可以看到,总体上,D_loss的相反数随着epoch数增加逐渐下降,同时生成的数据也越来越逼近真实数据,这与wgan的原理是相符合的.
Step24: 接下来运行下面两个cell的代码,集中展示wgan的参数分布.
Step25: 可以看到,参数都被截断在[-c, c]之间,大部分参数集中在-c和c附近.
作业
Step26: WGAN-GP(improved wgan)
在WGAN中,需要进行截断, 在实验中发现: 对于比较深的WAGN,它不容易收敛。
大致原因如下:
1. 实验发现最后大多数的权重都在-c 和c上,这就意味了大部分权重只有两个可能数,这太简单了,作为一个深度神经网络来说,这实在是对它强大的拟合能力的浪费.
2. 实验发现容易导致梯度消失或梯度爆炸。判别器是一个多层网络,如果把clip的值设得稍微小了一点,每经过一层网络,梯度就变小一点点,多层之后就会指数衰减;反之,则容易导致梯度爆炸.
所以WGAN-GP使用了Gradient penalty(梯度惩罚)来代替clip.
因为Lipschitz限制是要求判别器的梯度不超过K,所以可以直接使用一个loss term来实现这一点,所以改进后D的优化目标改进为如下
Step27: 同理,观察loss曲线和D上的参数分布. | Python Code:
import torch
torch.cuda.set_device(2)
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
class Generator(nn.Module):
def __init__(self, image_size=32, latent_dim=100, output_channel=1):
        """
        image_size: image width and height
        latent_dim: the dimension of the random noise z
        output_channel: the channel count of the generated image, for example, 1 for gray image, 3 for RGB image
        """
super(Generator, self).__init__()
self.latent_dim = latent_dim
self.output_channel = output_channel
self.image_size = image_size
# Linear layer: latent_dim -> 128 -> 256 -> 512 -> 1024 -> output_channel * image_size * image_size -> Tanh
self.model = nn.Sequential(
nn.Linear(latent_dim, 128),
nn.BatchNorm1d(128),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(128, 256),
nn.BatchNorm1d(256),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(256, 512),
nn.BatchNorm1d(512),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(512, 1024),
nn.BatchNorm1d(1024),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(1024, output_channel * image_size * image_size),
nn.Tanh()
)
def forward(self, z):
img = self.model(z)
img = img.view(img.size(0), self.output_channel, self.image_size, self.image_size)
return img
class Discriminator(nn.Module):
def __init__(self, image_size=32, input_channel=1):
        """
        image_size: image width and height
        input_channel: the channel count of the input image, for example, 1 for gray image, 3 for RGB image
        """
super(Discriminator, self).__init__()
self.image_size = image_size
self.input_channel = input_channel
# Linear layer: input_channel * image_size * image_size -> 1024 -> 512 -> 256 -> 1 -> Sigmoid
self.model = nn.Sequential(
nn.Linear(input_channel * image_size * image_size, 1024),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(1024, 512),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(512, 256),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(256, 1),
nn.Sigmoid(),
)
def forward(self, img):
img_flat = img.view(img.size(0), -1)
out = self.model(img_flat)
return out
Explanation: Week9: GAN
Experiment Requirements and Basic Procedure
Requirements
Building on the lecture material, gain a deep understanding of the principles and training procedure of GANs (Generative Adversarial Networks), and learn how GAN architectures have evolved along with the ideas behind several basic GAN variants (e.g. DCGAN, WGAN).
Read the experiment instructions, run and complete the code as prompted, or briefly answer the questions. Keep the experimental results when submitting the assignment.
Procedure
GAN network structure and training
DCGAN
LSGAN
WGAN
WGAN-GP
GAN (Generative Adversarial Networks)
Let's first look at a generative adversarial network (GAN) that uses only linear layers, to get a basic picture of its network structure and training procedure.
This GAN consists of two parts: the generator network Generator and the discriminator network Discriminator.
- The Generator turns randomly sampled noise z into an image through several linear layers. Note that the last layer of the generator is a Tanh, so the generated images take values in [-1,1]; correspondingly, we also normalize the real images to [-1,1].
- The Discriminator is a binary classifier that passes an image through several linear layers to decide whether it is "real" or "generated", so the last layer of the Discriminator is a sigmoid that outputs the probability that the image is "real".
We use LeakyReLU as the activation function throughout, except for the last layers of G and D, and we also add BatchNormalization between layers.
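As a quick, purely illustrative sanity check of the two networks above (CPU-only, tiny batch; not part of the original assignment):
G = Generator(image_size=32, latent_dim=100, output_channel=1)
D = Discriminator(image_size=32, input_channel=1)
z = torch.rand(4, 100)
print(G(z).shape)     # torch.Size([4, 1, 32, 32])
print(D(G(z)).shape)  # torch.Size([4, 1])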
End of explanation
def load_mnist_data():
    """load mnist(0,1,2) dataset"""
transform = torchvision.transforms.Compose([
        # transform to 1-channel gray image since we are reading images in RGB mode
transforms.Grayscale(1),
# resize image from 28 * 28 to 32 * 32
transforms.Resize(32),
transforms.ToTensor(),
# normalize with mean=0.5 std=0.5
transforms.Normalize(mean=(0.5, ),
std=(0.5, ))
])
train_dataset = torchvision.datasets.ImageFolder(root='./data/mnist', transform=transform)
return train_dataset
def load_furniture_data():
    """load furniture dataset"""
transform = torchvision.transforms.Compose([
transforms.ToTensor(),
# normalize with mean=0.5 std=0.5
transforms.Normalize(mean=(0.5, 0.5, 0.5),
std=(0.5, 0.5, 0.5))
])
train_dataset = torchvision.datasets.ImageFolder(root='./data/household_furniture', transform=transform)
return train_dataset
Explanation: Datasets
Before training our GAN, let's introduce the datasets that can be used to train it in this experiment. We provide two datasets for you to try.
- MNIST handwritten-digit dataset (originally 3 classes). To speed up training, we provide a simplified version containing only the 2 classes of digits 0 and 2, with 1000 images per class. The images are 28*28 single-channel grayscale images (which we resize to 32*32); since this is a GAN, no test set is needed. This is the main training dataset for this experiment.
- Indoor furniture dataset. To speed up training it has been trimmed down to a single class (chair), 500 images in total. The images are 32*32 3-channel color images.
Below are the two dataset-loading functions. Note that all images are normalized to [-1,1].
End of explanation
def denorm(x):
# denormalize
out = (x + 1) / 2
return out.clamp(0, 1)
from utils import show
# you can skip over the code in this cell
# show mnist real data
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=20, shuffle=True)
show(torchvision.utils.make_grid(denorm(next(iter(trainloader))[0]), nrow=5))
# show furniture real data
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=20, shuffle=True)
show(torchvision.utils.make_grid(denorm(next(iter(trainloader))[0]), nrow=5))
Explanation: (No need to study this code.) Run the following 2 cells to view 20 random real images from each of the two datasets.
End of explanation
def train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device, z_dim):
"""train a GAN with model G and D in one epoch
Args:
trainloader: data loader to train
G: model Generator
D: model Discriminator
G_optimizer: optimizer of G(etc. Adam, SGD)
D_optimizer: optimizer of D(etc. Adam, SGD)
loss_func: loss function to train G and D. For example, Binary Cross Entropy(BCE) loss function
device: cpu or cuda device
z_dim: the dimension of random noise z
"""
# set train mode
D.train()
G.train()
D_total_loss = 0
G_total_loss = 0
for i, (x, _) in enumerate(trainloader):
# real label and fake label
y_real = torch.ones(x.size(0), 1).to(device)
y_fake = torch.zeros(x.size(0), 1).to(device)
x = x.to(device)
z = torch.rand(x.size(0), z_dim).to(device)
# update D network
# D optimizer zero grads
D_optimizer.zero_grad()
# D real loss from real images
d_real = D(x)
d_real_loss = loss_func(d_real, y_real)
# D fake loss from fake images generated by G
g_z = G(z)
d_fake = D(g_z)
d_fake_loss = loss_func(d_fake, y_fake)
# D backward and step
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
D_optimizer.step()
# update G network
# G optimizer zero grads
G_optimizer.zero_grad()
# G loss
g_z = G(z)
d_fake = D(g_z)
g_loss = loss_func(d_fake, y_real)
# G backward and step
g_loss.backward()
G_optimizer.step()
D_total_loss += d_loss.item()
G_total_loss += g_loss.item()
return D_total_loss / len(trainloader), G_total_loss / len(trainloader)
Explanation: The code below implements one epoch of GAN training.
Roughly speaking, GAN training has two steps: first feed random noise z to G to generate images, then feed the real images and the images generated by G to D, and backpropagate the corresponding loss to optimize D. Then generate images with G again, feed them to D, and backpropagate the corresponding loss to optimize G.
The figure below shows the optimization objectives of a vanilla GAN for G and D:
Note that the figure above describes the optimization objectives of G and D; in the actual implementation we realize these objectives through loss functions. The objectives of D and G in the figure can be implemented with the Binary Cross Entropy loss:
$$
BCEloss(p_i,y_i)= -(y_i\log{p_i}+(1-y_i)\log{(1-p_i)})
$$
$p_i$ and $y_i$ are the model's prediction and the image's true label (1 for real, 0 for fake). Therefore, for D, maximizing its objective is achieved by minimizing a BCE loss in which real images $x\sim{P_r}$ are labeled 1 and generated images $z\sim{P(z)}$ are labeled 0. This loss function is exactly the negative of D's objective.
For G, we also minimize a BCE loss, simply labeling the generated images $z\sim{P(z)}$ as 1; this loss function is consistent with G's objective.
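As a quick numerical check (a minimal sketch with made-up prediction values, mirroring what train above does with loss_func), minimizing BCE with labels 1 for real and 0 for fake reproduces the two terms of D's objective, and labeling fakes as 1 gives G's loss:
import torch
import torch.nn as nn
bce = nn.BCELoss()
p_real = torch.tensor([0.9])  # D's prediction on a real image
p_fake = torch.tensor([0.2])  # D's prediction on a generated image
# D's loss: -log(p_real) - log(1 - p_fake), the negated objective of D
d_loss = bce(p_real, torch.ones(1)) + bce(p_fake, torch.zeros(1))
# G's loss: -log(p_fake), labelling the fake image as "real"
g_loss = bce(p_fake, torch.ones(1))
print(d_loss.item(), g_loss.item())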
End of explanation
def visualize_results(G, device, z_dim, result_size=20):
G.eval()
z = torch.rand(result_size, z_dim).to(device)
g_z = G(z)
show(torchvision.utils.make_grid(denorm(g_z.detach().cpu()), nrow=5))
Explanation: After the model has been trained, we need to inspect the images generated by G; the visualize_results code below implements this. Note that the generated images lie in [-1,1], so we need to denormalize (denorm) them back to [0,1].
End of explanation
def run_gan(trainloader, G, D, G_optimizer, D_optimizer, loss_func, n_epochs, device, latent_dim):
d_loss_hist = []
g_loss_hist = []
for epoch in range(n_epochs):
d_loss, g_loss = train(trainloader, G, D, G_optimizer, D_optimizer, loss_func, device,
z_dim=latent_dim)
print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))
d_loss_hist.append(d_loss)
g_loss_hist.append(g_loss)
if epoch == 0 or (epoch + 1) % 10 == 0:
visualize_results(G, device, latent_dim)
return d_loss_hist, g_loss_hist
Explanation: Everything is ready, so let's try training a basic GAN. Here we implement the run_gan function, which calls train and visualize_results to train our GAN.
End of explanation
# hyper params
# z dim
latent_dim = 100
# image size and channel
image_size=32
image_channel=1
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 100
batch_size = 32
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# use BCELoss as loss function
bceloss = nn.BCELoss().to(device)
# G and D model
G = Generator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = Discriminator(image_size=image_size, input_channel=image_channel).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
n_epochs, device, latent_dim)
Explanation: Once the hyperparameters are set we can start training! Let's try it on the 2-class MNIST dataset.
End of explanation
from utils import loss_plot
loss_plot(d_loss_hist, g_loss_hist)
Explanation: After training, let's look at the images generated by G. Even a simple GAN produces decent results on such a simple dataset, although there are still quite a few flaws; for example, the digits in the generated images contain a lot of odd speckle noise.
Let's look at the loss curves of G and D (run the statement below).
End of explanation
from utils import initialize_weights
class DCGenerator(nn.Module):
def __init__(self, image_size=32, latent_dim=64, output_channel=1):
super(DCGenerator, self).__init__()
self.image_size = image_size
self.latent_dim = latent_dim
self.output_channel = output_channel
self.init_size = image_size // 8
# fc: Linear -> BN -> ReLU
self.fc = nn.Sequential(
nn.Linear(latent_dim, 512 * self.init_size ** 2),
nn.BatchNorm1d(512 * self.init_size ** 2),
nn.ReLU(inplace=True)
)
# deconv: ConvTranspose2d(4, 2, 1) -> BN -> ReLU ->
# ConvTranspose2d(4, 2, 1) -> BN -> ReLU ->
# ConvTranspose2d(4, 2, 1) -> Tanh
self.deconv = nn.Sequential(
nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.ConvTranspose2d(128, output_channel, 4, stride=2, padding=1),
nn.Tanh(),
)
initialize_weights(self)
def forward(self, z):
out = self.fc(z)
out = out.view(out.shape[0], 512, self.init_size, self.init_size)
img = self.deconv(out)
return img
class DCDiscriminator(nn.Module):
def __init__(self, image_size=32, input_channel=1, sigmoid=True):
super(DCDiscriminator, self).__init__()
self.image_size = image_size
self.input_channel = input_channel
self.fc_size = image_size // 8
# conv: Conv2d(3,2,1) -> LeakyReLU
# Conv2d(3,2,1) -> BN -> LeakyReLU
# Conv2d(3,2,1) -> BN -> LeakyReLU
self.conv = nn.Sequential(
nn.Conv2d(input_channel, 128, 3, 2, 1),
nn.LeakyReLU(0.2),
nn.Conv2d(128, 256, 3, 2, 1),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.2),
nn.Conv2d(256, 512, 3, 2, 1),
nn.BatchNorm2d(512),
nn.LeakyReLU(0.2),
)
# fc: Linear -> Sigmoid
self.fc = nn.Sequential(
nn.Linear(512 * self.fc_size * self.fc_size, 1),
)
if sigmoid:
self.fc.add_module('sigmoid', nn.Sigmoid())
initialize_weights(self)
def forward(self, img):
out = self.conv(img)
out = out.view(out.shape[0], -1)
out = self.fc(out)
return out
Explanation: Assignment:
Observe the loss curves of G and D. How do they differ from the loss curves of the CNNs trained previously? Briefly explain what you think causes these differences.
Answer:
A CNN's loss curve usually drops quickly at first; after many iterations the decrease slows down, the curve may oscillate, and the loss may even rise. This is because at the start of training the CNN is still far from the optimum, so training is efficient; after many iterations the CNN is close to fitting the data, and may even overfit, causing the loss to oscillate.
The loss curves of the generator and discriminator differ clearly from a CNN's: the generator's loss gradually increases while the discriminator's loss gradually decreases. At the beginning the generator's loss rises sharply because the generator is initially influenced little by the real images. As the number of iterations grows, the discriminator gets better at telling real images from fakes, so the generator's loss increases rapidly while the discriminator's loss falls.
After many iterations the generator and discriminator start to compete: the generator's loss may then sometimes decrease and the discriminator's loss sometimes increase, because the generator's ability to produce convincing fakes and the discriminator's ability to tell real from fake both grow as the two networks play the adversarial game.
Over a long run of iterations, the generator's loss tends to increase while the discriminator's tends to decrease. Since the dataset is finite, the networks ultimately tend towards fitting it. The discriminator's loss falls while the generator's rises because the discriminator's ability improves faster than the generator's, or possibly because the generator has already converged.
DCGAN
In DCGAN (Deep Convolutional GAN), the biggest change is the use of CNNs in place of fully connected layers. In the generator G, transposed convolutions with stride 2 generate the image while enlarging its spatial size; in the discriminator D, convolutions with stride 2 convolve and downsample the image. In addition, DCGAN adds BatchNormalization between layers (although we already added it in the plain GAN), uses ReLU as the activation function in G, and LeakyReLU in D.
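A minimal shape-only sketch (with made-up tensor sizes) of why ConvTranspose2d(kernel_size=4, stride=2, padding=1) doubles the spatial resolution in DCGenerator while Conv2d(kernel_size=3, stride=2, padding=1) halves it in DCDiscriminator:
import torch
import torch.nn as nn
x = torch.randn(1, 512, 4, 4)
up = nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1)
print(up(x).shape)    # torch.Size([1, 256, 8, 8]) -> spatial size doubled
y = torch.randn(1, 1, 32, 32)
down = nn.Conv2d(1, 128, 3, stride=2, padding=1)
print(down(y).shape)  # torch.Size([1, 128, 16, 16]) -> spatial size halved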
End of explanation
# hyper params
# z dim
latent_dim = 100
# image size and channel
image_size=32
image_channel=1
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 100
batch_size = 32
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# use BCELoss as loss function
bceloss = nn.BCELoss().to(device)
# G and D model, use DCGAN
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
n_epochs, device, latent_dim)
loss_plot(d_loss_hist, g_loss_hist)
Explanation: Likewise, we train the DCGAN on the same MNIST dataset.
End of explanation
# RGB image channel = 3
image_channel=3
# epochs
n_epochs = 300
batch_size = 32
image_size=32
latent_dim = 100
device = torch.device('cuda:2')
learning_rate = 0.0002
betas = (0.5, 0.999)
bceloss = nn.BCELoss().to(device)
# mnist dataset and dataloader
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# G and D model, use DCGAN
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, bceloss,
n_epochs, device, latent_dim)
loss_plot(d_loss_hist, g_loss_hist)
Explanation: As we can see, the quality of the images generated by DCGAN is considerably better than that of the GAN with only linear layers. Next, let's try training DCGAN on the furniture dataset.
End of explanation
class L2Loss(nn.Module):
def __init__(self):
super(L2Loss, self).__init__()
def forward(self, input_, target):
"""
input_: (batch_size*1)
target: (batch_size*1) labels, 1 or 0
"""
return ((input_ - target) ** 2).mean()
Explanation: LSGAN
LSGAN (Least Squares GAN) replaces the loss function with an L2 loss. The optimization objectives of G and D are shown in the figure below.
Assignment:
Here, complete the L2Loss code to implement the L2 loss that optimizes the objectives above. Then use this loss function to train LSGAN on the MNIST dataset and show the generated images and the loss curves.
Hint: ignore the 1/2 in the figure above. The L2 loss is just the MSE (mean squared error) loss. It takes two arguments: input_ is the probability that discriminator D predicts "real" (size batch_size*1), and target is the label 1 or 0 (size batch_size*1). You may only use PyTorch and plain Python operations (you may not call MSELoss directly).
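One way to sanity-check an implementation (a small optional sketch using the L2Loss class defined above, not part of the assignment answer) is to compare it against nn.MSELoss on random tensors:
import torch
import torch.nn as nn
crit = L2Loss()
ref = nn.MSELoss()
pred = torch.rand(8, 1)    # stand-in for D's outputs
target = torch.ones(8, 1)  # "real" labels
print(torch.allclose(crit(pred, target), ref(pred, target)))  # expect True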
End of explanation
# hyper params
# z dim
latent_dim = 100
# image size and channel
image_size=32
image_channel=1
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 100
batch_size = 32
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# mnist dataset and dataloader
train_dataset = load_mnist_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# use L2Loss as loss function
l2loss = L2Loss().to(device)
# G and D model, use DCGAN
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_gan(trainloader, G, D, G_optimizer, D_optimizer, l2loss, n_epochs, device,
latent_dim)
loss_plot(d_loss_hist, g_loss_hist)
Explanation: After completing the code above, train the DCGAN on the MNIST dataset using the L2Loss you wrote.
End of explanation
def wgan_train(trainloader, G, D, G_optimizer, D_optimizer, device, z_dim, n_d=2, weight_clip=0.01):
"""
n_d: the number of iterations of D update per G update iteration
weight_clip: the clipping parameters
"""
D.train()
G.train()
D_total_loss = 0
G_total_loss = 0
for i, (x, _) in enumerate(trainloader):
x = x.to(device)
# update D network
# D optimizer zero grads
D_optimizer.zero_grad()
# D real loss from real images
d_real = D(x)
d_real_loss = - d_real.mean()
# D fake loss from fake images generated by G
z = torch.rand(x.size(0), z_dim).to(device)
g_z = G(z)
d_fake = D(g_z)
d_fake_loss = d_fake.mean()
# D backward and step
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
D_optimizer.step()
# D weight clip
for params in D.parameters():
params.data.clamp_(-weight_clip, weight_clip)
D_total_loss += d_loss.item()
# update G network
if (i + 1) % n_d == 0:
# G optimizer zero grads
G_optimizer.zero_grad()
# G loss
g_z = G(z)
d_fake = D(g_z)
g_loss = - d_fake.mean()
# G backward and step
g_loss.backward()
G_optimizer.step()
G_total_loss += g_loss.item()
return D_total_loss / len(trainloader), G_total_loss * n_d / len(trainloader)
def run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device, latent_dim, n_d, weight_clip):
d_loss_hist = []
g_loss_hist = []
for epoch in range(n_epochs):
d_loss, g_loss = wgan_train(trainloader, G, D, G_optimizer, D_optimizer, device,
z_dim=latent_dim, n_d=n_d, weight_clip=weight_clip)
print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))
d_loss_hist.append(d_loss)
g_loss_hist.append(g_loss)
if epoch == 0 or (epoch + 1) % 10 == 0:
visualize_results(G, device, latent_dim)
return d_loss_hist, g_loss_hist
Explanation: WGAN
GANs still suffer from unstable training and mode collapse (which can be understood as extremely low diversity in the generated images); our datasets may not necessarily show this. WGAN (Wasserstein GAN) replaces the JS divergence fitted by the traditional GAN with the Wasserstein distance, and to some extent solves the instability and mode-collapse problems of GANs.
The optimization objective of WGAN's discriminator becomes: subject to the Lipschitz continuity condition (which we can satisfy by restricting w to a certain range), maximize
which approximates the Wasserstein distance between the real distribution and the generated distribution. So the loss functions of D and G become:
In terms of implementation, WGAN makes three main changes (a short illustrative sketch follows at the end of this cell):
- Remove the sigmoid from the last layer of discriminator D
- Do not use log in the losses of generator G and the discriminator
- After every update of discriminator D, clip the absolute value of its parameters to a fixed constant c
So we mainly rewrite the WGAN training function; the network structure is the DCGAN with the sigmoid removed (note that we pass sigmoid=False when initializing D to remove the final sigmoid layer).
Below is the WGAN implementation. Two parameters are added: n_d is the number of D updates per G update, and weight_clip is the clipping constant.
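A minimal self-contained sketch of how the three changes look in code (toy tensors and a toy linear "critic" made up purely for illustration; the actual implementation used here is wgan_train above):
import torch
import torch.nn as nn
D = nn.Linear(4, 1)  # critic with no sigmoid: it outputs an unbounded score
x_real, x_fake = torch.randn(8, 4), torch.randn(8, 4)
d_loss = -(D(x_real).mean() - D(x_fake).mean())  # no log in the loss
g_loss = -D(x_fake).mean()
# weight clipping after every update of D
for p in D.parameters():
    p.data.clamp_(-0.01, 0.01)
print(d_loss.item(), g_loss.item())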
End of explanation
# hyper params
# z dim
latent_dim = 100
# image size and channel
image_size=32
image_channel=3
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 300
batch_size = 32
# n_d: the number of iterations of D update per G update iteration
n_d = 2
weight_clip=0.01
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# mnist dataset and dataloader
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device,
latent_dim, n_d, weight_clip)
Explanation: Next, let's use the run_wgan we just wrote on our furniture (chair) dataset and see how it performs.
End of explanation
loss_plot(d_loss_hist, g_loss_hist)
Explanation: From the theory of WGAN we know that the negative of D_loss represents the Wasserstein distance between the generated data distribution and the real distribution: the smaller it is, the more similar the two distributions are and the better the GAN has been trained. Its value therefore gives us a metric for monitoring GAN training.
Run the code below to look at WGAN's loss curves. Overall, the negative of D_loss gradually decreases as the number of epochs grows, and at the same time the generated data gets closer and closer to the real data, which is consistent with the theory behind WGAN.
End of explanation
from utils import show_weights_hist
def show_d_params(D):
plist = []
for params in D.parameters():
plist.extend(params.cpu().data.view(-1).numpy())
show_weights_hist(plist)
show_d_params(D)
Explanation: Next, run the following two cells to show the parameter distribution of the WGAN.
End of explanation
n_d = 1
# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device,
latent_dim, n_d, weight_clip)
loss_plot(d_loss_hist, g_loss_hist)
n_d = 3
# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device,
latent_dim, n_d, weight_clip)
loss_plot(d_loss_hist, g_loss_hist)
n_d = 5
# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist, g_loss_hist = run_wgan(trainloader, G, D, G_optimizer, D_optimizer, n_epochs, device,
latent_dim, n_d, weight_clip)
loss_plot(d_loss_hist, g_loss_hist)
Explanation: We can see that the parameters are all clipped to [-c, c], and most of them are concentrated near -c and c.
Assignment:
Try setting n_d to 5, 3, 1, etc., and train the WGAN again. Which value of n_d gives the best result?
Answer:
The best result is obtained with n_d = 1. Although n_d means the number of D updates per G update, in this implementation D is trained first, i.e. G is only updated once every n_d batches of data. With n_d = 1, G is trained the largest number of times.
End of explanation
import torch.autograd as autograd
def wgan_gp_train(trainloader, G, D, G_optimizer, D_optimizer, device, z_dim, lambda_=10, n_d=2):
D.train()
G.train()
D_total_loss = 0
G_total_loss = 0
for i, (x, _) in enumerate(trainloader):
x = x.to(device)
# update D network
# D optimizer zero grads
D_optimizer.zero_grad()
# D real loss from real images
d_real = D(x)
d_real_loss = - d_real.mean()
# D fake loss from fake images generated by G
z = torch.rand(x.size(0), z_dim).to(device)
g_z = G(z)
d_fake = D(g_z)
d_fake_loss = d_fake.mean()
# D gradient penalty
# a random number epsilon
epsilon = torch.rand(x.size(0), 1, 1, 1).cuda()
x_hat = epsilon * x + (1 - epsilon) * g_z
x_hat.requires_grad_(True)
y_hat = D(x_hat)
# computes the sum of gradients of y_hat with regard to x_hat
gradients = autograd.grad(outputs=y_hat, inputs=x_hat, grad_outputs=torch.ones(y_hat.size()).cuda(),
create_graph=True, retain_graph=True, only_inputs=True)[0]
# computes gradientpenalty
gradient_penalty = torch.mean((gradients.view(gradients.size()[0], -1).norm(p=2, dim=1) - 1) ** 2)
# D backward and step
d_loss = d_real_loss + d_fake_loss + lambda_ * gradient_penalty
d_loss.backward()
D_optimizer.step()
D_total_loss += d_loss.item()
# update G network
# G optimizer zero grads
if (i + 1) % n_d == 0:
G_optimizer.zero_grad()
# G loss
g_z = G(z)
d_fake = D(g_z)
g_loss = - d_fake.mean()
# G backward and step
g_loss.backward()
G_optimizer.step()
G_total_loss += g_loss.item()
return D_total_loss / len(trainloader), G_total_loss * n_d / len(trainloader)
# hyper params
# z dim
latent_dim = 100
# image size and channel
image_size=32
image_channel=3
# Adam lr and betas
learning_rate = 0.0002
betas = (0.5, 0.999)
# epochs and batch size
n_epochs = 300
batch_size = 32
# device : cpu or cuda:0/1/2/3
device = torch.device('cuda:2')
# n_d: train D
n_d = 2
lambda_ = 10
# mnist dataset and dataloader
train_dataset = load_furniture_data()
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# G and D model, use DCGAN, note that sigmoid is removed in D
G = DCGenerator(image_size=image_size, latent_dim=latent_dim, output_channel=image_channel).to(device)
D = DCDiscriminator(image_size=image_size, input_channel=image_channel, sigmoid=False).to(device)
# G and D optimizer, use Adam or SGD
G_optimizer = optim.Adam(G.parameters(), lr=learning_rate, betas=betas)
D_optimizer = optim.Adam(D.parameters(), lr=learning_rate, betas=betas)
d_loss_hist = []
g_loss_hist = []
for epoch in range(n_epochs):
d_loss, g_loss = wgan_gp_train(trainloader, G, D, G_optimizer, D_optimizer, device,
z_dim=latent_dim, lambda_=lambda_, n_d=n_d)
print('Epoch {}: Train D loss: {:.4f}, G loss: {:.4f}'.format(epoch, d_loss, g_loss))
d_loss_hist.append(d_loss)
g_loss_hist.append(g_loss)
if epoch == 0 or (epoch + 1) % 10 == 0:
visualize_results(G, device, latent_dim)
Explanation: WGAN-GP (improved WGAN)
WGAN requires weight clipping, and experiments show that a relatively deep WGAN does not converge easily.
The rough reasons are as follows:
1. Experiments show that most weights end up at -c or c, which means most weights can only take two possible values. This is far too simple and wastes the powerful fitting capacity of a deep neural network.
2. Experiments show that clipping easily leads to vanishing or exploding gradients. The discriminator is a multi-layer network: if the clip value is set slightly too small, the gradient shrinks a little at every layer and decays exponentially after many layers; if it is set slightly too large, the gradients easily explode.
So WGAN-GP uses a gradient penalty instead of clipping.
Since the Lipschitz constraint requires the discriminator's gradient norm not to exceed K, this can be enforced directly with a loss term, so the improved objective of D becomes the following:
Below is the concrete WGAN-GP implementation. As with WGAN, we only implement the training code and reuse the DCGAN model directly.
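A minimal self-contained sketch (toy critic and tensors made up for illustration, mirroring what wgan_gp_train above computes) of how autograd.grad yields the gradient norm that the penalty drives towards 1:
import torch
import torch.autograd as autograd
import torch.nn as nn
D = nn.Linear(4, 1)  # toy critic
x, g_z = torch.randn(8, 4), torch.randn(8, 4)
eps = torch.rand(8, 1)
x_hat = (eps * x + (1 - eps) * g_z).requires_grad_(True)
y_hat = D(x_hat)
grads = autograd.grad(y_hat, x_hat, grad_outputs=torch.ones_like(y_hat),
                      create_graph=True)[0]
gp = ((grads.view(8, -1).norm(2, dim=1) - 1) ** 2).mean()
print(gp.item())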
End of explanation
loss_plot(d_loss_hist, g_loss_hist)
show_d_params(D)
Explanation: Similarly, inspect the loss curves and the distribution of D's parameters.
End of explanation |
4,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - [email protected] - http
Step2: Set up the model in Shogun
Step3: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http
Step4: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http
Step5: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice
Step6: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
Step7: So far so good, now lets plot the density of this GMM using the code from above
Step8: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three comonents give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http
Step9: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
Step10: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here
Step11: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering. | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all Shogun classes
from shogun import *
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
"""Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance"""
# compute eigenvalues (ordered)
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
Explanation: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - [email protected] - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the GMM framework of the Google summer of code 2011 project of Alesis Novik - https://github.com/alesis
This notebook is about learning and using Gaussian <a href="https://en.wikipedia.org/wiki/Mixture_model">Mixture Models</a> (GMM) in Shogun. Below, we demonstrate how to use them for sampling, for density estimation via <a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a>, and for <a href="https://en.wikipedia.org/wiki/Data_clustering">clustering</a>.
Note that Shogun's interfaces for mixture models are deprecated and are soon to be replaced by more intuitive and efficient ones. This notebook contains some python magic at some places to compensate for this. However, all computations are done within Shogun itself.
Finite Mixture Models (skip if you just want code examples)
We begin by giving some intuition about mixture models. Consider an unobserved (or latent) discrete random variable taking $k$ states $s$ with probabilities $\text{Pr}(s=i)=\pi_i$ for $1\leq i \leq k$, and $k$ random variables $x_i|s_i$ with arbitrary densities or distributions, which are conditionally independent of each other given the state of $s$. In the finite mixture model, we model the probability or density for a single point $x$ being generated by the weighted mixture of the $x_i|s_i$
$$
p(x)=\sum_{i=1}^k\text{Pr}(s=i)p(x|s=i)=\sum_{i=1}^k \pi_i p(x|s=i)
$$
which is simply the marginalisation over the latent variable $s$. Note that $\sum_{i=1}^k\pi_i=1$.
For example, for the Gaussian mixture model (GMM), we get (adding a collection of parameters $\theta:=\{\boldsymbol{\mu}_i, \Sigma_i\}_{i=1}^k$ that contains $k$ mean and covariance parameters of single Gaussian distributions)
$$
p(x|\theta)=\sum_{i=1}^k \pi_i \mathcal{N}(\boldsymbol{\mu}_i,\Sigma_i)
$$
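As a small standalone illustration (plain numpy/scipy, independent of the Shogun classes used below; the weights and means echo the model set up later, while the identity covariances are made up), the mixture density is just the weighted sum of the component densities:
import numpy as np
from scipy.stats import multivariate_normal

pis = [0.5, 0.3, 0.2]
means = [[-5.0, -4.0], [7.0, 3.0], [0.0, 0.0]]
covs = [np.eye(2), np.eye(2), np.eye(2)]

def mixture_pdf(x):
    # weighted sum of the k Gaussian component densities
    return sum(pi * multivariate_normal(m, c).pdf(x)
               for pi, m, c in zip(pis, means, covs))

print(mixture_pdf([0.0, 0.0]))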
Note that any set of probability distributions on the same domain can be combined to such a mixture model. Note again that $s$ is an unobserved discrete random variable, i.e. we model data being generated from some weighted combination of baseline distributions. Interesting problems now are
Learning the weights $\text{Pr}(s=i)=\pi_i$ from data
Learning the parameters $\theta$ from data for a fixed family of $x_i|s_i$, for example for the GMM
Using the learned model (which is a density estimate) for clustering or classification
All of these problems are in the context of unsupervised learning since the algorithm only sees the plain data and no information on its structure.
Expectation Maximisation
<a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a> is a powerful method to learn any form of latent models and can be applied to the Gaussian mixture model case. Standard methods such as Maximum Likelihood are not straightforward for latent models in general, while EM can almost always be applied. However, it might converge to local optima and does not guarantee globally optimal solutions (this can be dealt with with some tricks as we will see later). While the general idea in EM stays the same for all models it can be used on, the individual steps depend on the particular model that is being used.
The basic idea in EM is to maximise a lower bound, typically called the free energy, on the log-likelihood of the model. It does so by repeatedly performing two steps
The E-step optimises the free energy with respect to the latent variables $s_i$, holding the parameters $\theta$ fixed. This is done via setting the distribution over $s$ to the posterior given the used observations.
The M-step optimises the free energy with respect to the parameters $\theta$, holding the distribution over the $s_i$ fixed. This is done via maximum likelihood.
It can be shown that this procedure never decreases the likelihood and that stationary points (i.e. neither E-step nor M-step produce changes) of it correspond to local maxima in the model's likelihood. See references for more details on the procedure, and how to obtain a lower bound on the log-likelihood. There exist many different flavours of EM, including variants where only subsets of the model are iterated over at a time. There is no learning rate such as step size or similar, which is good and bad since convergence can be slow.
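For intuition, here is a rough numpy sketch of the EM loop for a GMM (heavily simplified, no numerical safeguards; Shogun's train_em used below does this internally and far more robustly):
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, n_iter=50):
    n, d = X.shape
    pis = np.full(k, 1.0 / k)
    mus = X[np.random.choice(n, k, replace=False)]
    sigmas = np.array([np.cov(X.T) for _ in range(k)])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        r = np.column_stack([pi * multivariate_normal(mu, sig).pdf(X)
                             for pi, mu, sig in zip(pis, mus, sigmas)])
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood updates of weights, means and covariances
        nk = r.sum(axis=0)
        pis = nk / n
        mus = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mus[j]
            sigmas[j] = (r[:, j, None] * diff).T @ diff / nk[j]
    return pis, mus, sigmas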
Mixtures of Gaussians in Shogun
The main class for GMM in Shogun is <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html">CGMM</a>, which contains an interface for setting up a model and sampling from it, but also to learn the model (the $\pi_i$ and parameters $\theta$) via EM. It inherits from the base class for distributions in Shogun, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Distribution.html">Distribution</a>, and combines multiple single distribution instances to a mixture.
We start by creating a GMM instance, sampling from it, and computing the log-likelihood of the model for some points, and the log-likelihood of each individual component for some points. All these things are done in two dimensions to be able to plot them, but they generalise to higher (or lower) dimensions easily.
Let's sample, and illustrate the difference of knowing the latent variable indicating the component or not.
End of explanation
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=zeros((num_components, dimension, dimension))
covs[0]=array([[2, 1.3],[.6, 3]])
covs[1]=array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=array([0.5, 0.3, 0.2])
gmm.put('m_coefficients', weights)
Explanation: Set up the model in Shogun
End of explanation
# now sample from each component separately first, then from the joint model
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=zeros(num_components)
w[i]=1.
gmm.put('m_coefficients', w)
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_samples)])
plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% ellipse for current component
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
_=title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# restore the mixture coefficients, since we used a hack to sample from each component
gmm.put('m_coefficients', weights)
Explanation: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Distribution.html">Distribution</a> class in Shogun allows to sample from it (if implemented)
End of explanation
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=linspace(-10,10, resolution)
Ys=linspace(-8,6, resolution)
pairs=asarray([(x,y) for x in Xs for y in Ys])
D=asarray([gmm.cluster(pairs[i])[3] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,2,1)
pcolor(Xs,Ys,D)
xlim([-10,10])
ylim([-8,6])
title("Log-Likelihood of GMM")
subplot(1,2,2)
pcolor(Xs,Ys,exp(D))
xlim([-10,10])
ylim([-8,6])
_=title("Likelihood of GMM")
Explanation: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Distribution.html">Distribution</a> interface, including the mixture.
End of explanation
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_max_samples)])
plot(X[:,0], X[:,1], "o")
_=title("Samples from GMM")
Explanation: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice: someone gives you a bunch of data with no labels attached to it at all. Our job is now to find structure in the data, which we will do with a GMM.
End of explanation
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
feat=features(X.T)
gmm_est=GMM(num_components)
gmm_est.set_features(feat)
# learn GMM
gmm_est.train_em()
return gmm_est
Explanation: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
End of explanation
component_numbers=[2,3]
# plot true likelihood
D_true=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,len(component_numbers)+1,1)
pcolor(Xs,Ys,exp(D_true))
xlim([-10,10])
ylim([-8,6])
title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
subplot(1,len(component_numbers)+1,n+2)
pcolor(Xs,Ys,exp(D_est))
xlim([-10,10])
ylim([-8,6])
_=title("Estimated likelihood for EM with %d components"%component_numbers[n])
Explanation: So far so good, now let's plot the density of this GMM using the code from above
End of explanation
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=Gaussian.obtain_from_generic(gmm.get_component(i))
gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
figure(figsize=(18,5))
subplot(1, len(component_numbers)+1, 1)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color="blue")
title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
subplot(1, len(component_numbers)+1, i+2)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
Explanation: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three components give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KMeans.html">KMeans</a> is used to initialise the cluster centres if not done by hand, using a random cluster initialisation) that the upper two Gaussians are grouped; re-run a couple of times to see this. This illustrates how EM might get stuck in a local minimum. We will do this below, where it might well happen that all runs produce the same or different results - no guarantees.
Note that it is easily possible to initialise EM via specifying the parameters of the mixture components, as we did to create the original model above.
One way to decide which of multiple converged EM instances to use is to simply compute many of them (with different initialisations) and then choose the one with the largest likelihood. WARNING: Do not select the number of components like this, as the model will overfit.
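A minimal sketch of that strategy, reusing the estimate_gmm helper defined above (it retrains once more to obtain the likelihood, mirroring this notebook's own workaround):
def best_of_restarts(X, num_components, n_restarts=5):
    best_gmm, best_ll = None, float("-inf")
    for _ in range(n_restarts):
        candidate = estimate_gmm(X, num_components)
        ll = candidate.train_em()  # returns the likelihood of the trained model
        if ll > best_ll:
            best_gmm, best_ll = candidate, ll
    return best_gmm, best_ll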
End of explanation
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
clusters=asarray([argmax(gmm_est.cluster(x)[:gmm.get_num_components()]) for x in X])
# visualise points by cluster
for i in range(gmm.get_num_components()):
indices=clusters==i
plot(X[indices,0],X[indices,1], 'o', color=colors[i])
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
figure(figsize=(18,5))
subplot(121)
cluster_and_visualise(gmm)
title("Clustering under true GMM")
subplot(122)
cluster_and_visualise(gmm_est)
_=title("Clustering under estimated GMM")
Explanation: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
End of explanation
figure(figsize=(18,5))
for comp_idx in range(num_components):
subplot(1,num_components,comp_idx+1)
# evaluated likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=get_cmap("jet")
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plot(X[j,0], X[j,1] ,"o", color=color)
title("Data coloured by likelihood for component %d" % comp_idx)
Explanation: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here: even the model under which the data was generated will not cluster the data correctly if the data is overlapping. This is due to the fact that the cluster with the largest probability is chosen. This doesn't allow for any ambiguity. If you are interested in cases where data overlaps, you should always look at the log-likelihood of the point for each cluster and consider taking into account "draws" in the decision, i.e. probabilities for two different clusters are equally large.
Below we plot all points, coloured by their likelihood under each component.
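If you want soft assignments instead of a hard argmax, one sketch (assuming, as the argmax above does, that the first num_components entries returned by cluster() are per-component log-likelihoods) is to exponentiate and normalise them into responsibilities:
scores = asarray([gmm_est.cluster(x)[:gmm_est.get_num_components()] for x in X])
# subtract the row-wise max for numerical stability before exponentiating
resp = exp(scores - scores.max(axis=1, keepdims=True))
resp /= resp.sum(axis=1, keepdims=True)
print(resp[:5])  # rows sum to one; near-ties indicate "draws"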
End of explanation
# compute cluster index for every point in space
D_est=asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
pcolor(Xs,Ys,D_est)
Explanation: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering.
End of explanation |
4,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AutoML SDK
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
Step11: AutoML constants
Setup up the following constants for AutoML
Step12: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
Step13: Example output
Step14: Example output
Step15: Response
Step16: Example output
Step17: projects.locations.datasets.importData
Request
Step18: Example output
Step19: Response
Step20: Example output
Step21: Example output
Step22: Response
Step23: Example output
Step24: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
Step25: Response
Step26: Example output
```
[
{
"name"
Step27: Response
Step28: Example output
Step29: Example output
Step30: Example output
Step31: Example output
Step32: Response
Step33: Example output | Python Code:
! pip3 install -U google-cloud-automl --user
Explanation: AutoML SDK: AutoML video classification model
Installation
Install the latest (preview) version of AutoML SDK.
End of explanation
! pip3 install google-cloud-storage
Explanation: Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = 'us-central1' #@param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AutoML, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import json
import os
import sys
import time
from google.cloud import automl_v1beta1 as automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.json_format import ParseDict
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
End of explanation
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: AutoML constants
Setup up the following constants for AutoML:
PARENT: The AutoML location root path for dataset, model and endpoint resources.
End of explanation
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = 'gs://automl-video-demo-data/hmdb_split1.csv'
! gsutil cat $IMPORT_FILE | head -n 10
Explanation: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
End of explanation
dataset = {
"display_name": "hmdb_" + TIMESTAMP,
"video_classification_dataset_metadata": {}
}
print(MessageToJson(
automl.CreateDatasetRequest(
parent=PARENT,
dataset=dataset
).__dict__["_pb"])
)
Explanation: Example output:
TRAIN,gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv
TEST,gs://automl-video-demo-data/hmdb_split1_5classes_test_inf.csv
Create a dataset
projects.locations.datasets.create
Request
End of explanation
request = clients["automl"].create_dataset(
parent=PARENT,
dataset=dataset
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "hmdb_20210228225744",
"videoClassificationDatasetMetadata": {}
}
}
Call
End of explanation
result = request
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]
print(dataset_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/VCN6574174086275006464",
"displayName": "hmdb_20210228225744",
"createTime": "2021-02-28T23:06:43.197904Z",
"etag": "AB3BwFrtf0Yl4fgnXW4leoEEANTAGQdOngyIqdQSJBT9pKEChgeXom-0OyH7dKtfvA4=",
"videoClassificationDatasetMetadata": {}
}
End of explanation
input_config = {
"gcs_source": {
"input_uris": [IMPORT_FILE]
}
}
print(MessageToJson(
automl.ImportDataRequest(
name=dataset_short_id,
input_config=input_config
).__dict__["_pb"])
)
Explanation: projects.locations.datasets.importData
Request
End of explanation
request = clients["automl"].import_data(
name=dataset_id,
input_config=input_config
)
Explanation: Example output:
{
"name": "VCN6574174086275006464",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://automl-video-demo-data/hmdb_split1.csv"
]
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
model = {
"display_name": "hmdb_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"video_classification_model_metadata": {}
}
print(MessageToJson(
automl.CreateModelRequest(
parent=PARENT,
model=model
).__dict__["_pb"])
)
Explanation: Example output:
{}
Train a model
projects.locations.models.create
Request
End of explanation
request = clients["automl"].create_model(
parent=PARENT,
model=model
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "hmdb_20210228225744",
"datasetId": "VCN6574174086275006464",
"videoClassificationModelMetadata": {}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split('/')[-1]
print(model_short_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648"
}
End of explanation
request = clients["automl"].list_model_evaluations(
parent=model_id
)
Explanation: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
End of explanation
import json
model_evaluations = [
json.loads(MessageToJson(me.__dict__["_pb"])) for me in request
]
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name
print(json.dumps(model_evaluations, indent=2))
Explanation: Response
End of explanation
request = clients["automl"].get_model_evaluation(
name=evaluation_slice
)
Explanation: Example output
```
[
{
"name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266",
"createTime": "2021-03-01T01:02:02.452298Z",
"evaluatedExampleCount": 150,
"classificationEvaluationMetrics": {
"auPrc": 1.0,
"confidenceMetricsEntry": [
{
"confidenceThreshold": 0.016075565,
"recall": 1.0,
"precision": 0.2,
"f1Score": 0.33333334
},
{
"confidenceThreshold": 0.017114623,
"recall": 1.0,
"precision": 0.202977,
"f1Score": 0.3374578
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.9299338,
"recall": 0.033333335,
"precision": 1.0,
"f1Score": 0.06451613
}
]
},
"displayName": "golf"
}
]
```
projects.locations.models.modelEvaluations.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
TRAIN_FILES = "gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv"
test_items = ! gsutil cat $TRAIN_FILES | head -n2
cols = str(test_items[0]).split(',')
test_item_1, test_label_1, test_start_1, test_end_1 = str(cols[0]), str(cols[1]), str(cols[2]), str(cols[3])
print(test_item_1, test_label_1)
cols = str(test_items[1]).split(',')
test_item_2, test_label_2, test_start_2, test_end_2 = str(cols[0]), str(cols[1]), str(cols[2]), str(cols[3])
print(test_item_2, test_label_2)
Explanation: Example output:
```
{
"name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266",
"createTime": "2021-03-01T01:02:02.452298Z",
"evaluatedExampleCount": 150,
"classificationEvaluationMetrics": {
"auPrc": 1.0,
"confidenceMetricsEntry": [
{
"confidenceThreshold": 0.016075565,
"recall": 1.0,
"precision": 0.2,
"f1Score": 0.33333334
},
{
"confidenceThreshold": 0.017114623,
"recall": 1.0,
"precision": 0.202977,
"f1Score": 0.3374578
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.9299338,
"recall": 0.006666667,
"precision": 1.0,
"f1Score": 0.013245033
}
],
"confusionMatrix": {
"annotationSpecId": [
"175274248095399936",
"2048771693081526272",
"4354614702295220224",
"6660457711508914176",
"8966300720722608128"
],
"row": [
{
"exampleCount": [
30,
0,
0,
0,
0
]
},
{
"exampleCount": [
0,
30,
0,
0,
0
]
},
{
"exampleCount": [
0,
0,
30,
0,
0
]
},
{
"exampleCount": [
0,
0,
0,
30,
0
]
},
{
"exampleCount": [
0,
0,
0,
0,
30
]
}
],
"displayName": [
"ride_horse",
"golf",
"cartwheel",
"pullup",
"kick_ball"
]
}
}
}
```
Make batch predictions
Make the batch input file
To request a batch of predictions from AutoML Video, create a CSV file that lists the Cloud Storage paths to the videos that you want to annotate. You can also specify a start and end time to tell AutoML Video to only annotate a segment (segment-level) of the video. The start time must be zero or greater and must be before the end time. The end time must be greater than the start time and less than or equal to the duration of the video. You can also use inf to indicate the end of a video.
example:
gs://my-videos-vcm/short_video_1.avi,0.0,5.566667
gs://my-videos-vcm/car_chase.avi,0.0,3.933333
End of explanation
import tensorflow as tf
import json
gcs_input_uri = "gs://" + BUCKET_NAME + '/test.csv'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
data = f"{test_item_1}, {test_start_1}, {test_end_1}"
f.write(data + '\n')
data = f"{test_item_2}, {test_start_2}, {test_end_2}"
f.write(data + '\n')
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
Explanation: Example output:
gs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi cartwheel
gs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi cartwheel
End of explanation
input_config = {
"gcs_source": {
"input_uris": [gcs_input_uri]
}
}
output_config = {
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
}
}
batch_prediction = automl.BatchPredictRequest(
name=model_id,
input_config=input_config,
output_config=output_config
)
print(MessageToJson(
batch_prediction.__dict__["_pb"])
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210228225744/test.csv
gs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi, 0.0, inf
gs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi, 0.0, inf
projects.locations.models.batchPredict
Request
End of explanation
request = clients["prediction"].batch_predict(
request=batch_prediction
)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/VCN6188818900239515648",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210228225744/test.csv"
]
}
},
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210228225744/batch_output/"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients['automl'].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients['automl'].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
{}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation |
4,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hack for Heat #7
Step1: I removed entries earlier than 2011 because there are data quality issues (substantially fewer cases)
Step2: Heat complaints by borough
Step3: There were about a million heating complaints over the 3+ years we have data for.
Step4: Per capita
Step5: Complaints by borough over months
First, let's recreate the graph from before
Step6: We remove data where it was unspecified what the boro was
Step7: A non-normalized plot
Step8: Normalizing data | Python Code:
# Imports needed by the cells below (database access, data wrangling, and plotting)
import psycopg2
import pandas as pd
import numpy as np
from datetime import date
from matplotlib import pyplot as plt
%matplotlib inline

connection = psycopg2.connect('dbname = threeoneone user=threeoneoneadmin password=threeoneoneadmin')
cursor = connection.cursor()
cursor.execute('''SELECT DISTINCT complainttype FROM service;''')
complainttypes = cursor.fetchall()
cursor.execute('''SELECT createddate, borough, complainttype FROM service;''')
borodata = cursor.fetchall()
borodata = pd.DataFrame(borodata)
borodata.head()
borodata.columns = ['Date', 'Boro', 'Comptype']
heatdata = borodata.loc[(borodata['Comptype'] == 'HEATING') | (borodata['Comptype'] == 'HEAT/HOT WATER')]
Explanation: Hack for Heat #7: Heat complaints over time
In the last post, I plotted how the number of complaints differed by borough over time. This time around, I'm going to revisit that process, focusing on heating complaints only (this is Heat Seek, after all).
Loading data:
End of explanation
heatdata = heatdata.loc[heatdata['Date'] > date(2011,3,1)]
Explanation: I removed entries earlier than 2011 because there are data quality issues (substantially fewer cases):
End of explanation
len(heatdata)
Explanation: Heat complaints by borough:
End of explanation
heatbydate = heatdata.groupby(by='Boro').count()
heatbydate
Explanation: There were about a million heating complaints over the 3+ years we have data for.
End of explanation
boropop = {
'MANHATTAN': 1636268,
'BRONX': 1438159,
'BROOKLYN': 2621793,
'QUEENS': 2321580,
'STATEN ISLAND': 473279,
}
heatbydate['Pop'] = [boropop.get(x) for x in heatbydate.index]
heatbydate['CompPerCap'] = heatbydate['Date']/heatbydate['Pop']
heatbydate
Explanation: Per capita:
Again, let's look at how many heat complaints each boro generates per person:
End of explanation
heatdata['Year'] = [x.year for x in heatdata['Date']]
heatdata['Month'] = [x.month for x in heatdata['Date']]
heatdata['Day'] = [x.day for x in heatdata['Date']]
heatdata.head()
Explanation: Complaints by borough over months
First, let's recreate the graph from before:
End of explanation
heatdata = heatdata.loc[heatdata['Boro'] != 'Unspecified']
heatplotdata = heatdata.groupby(by=['Boro', 'Year','Month']).count()
heatplotdata
boros = heatbydate.index
borodict = {x:[] for x in boros}
borodict.pop('Unspecified')
for boro in borodict:
borodict[boro] = list(heatplotdata.xs(boro).Date)
plotdata = np.zeros(len(borodict['BROOKLYN']))
for boro in sorted(borodict.keys()):
plotdata = np.row_stack((plotdata, borodict[boro]))
plotdata = np.delete(plotdata, (0), axis=0)
from matplotlib import patches as mpatches
x = np.arange(len(plotdata[0]))
#crude xlabels
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
years = ['2011', '2012', '2013', '2014', '2015', '2016']
xlabels = []
for year in years:
for month in months:
xlabels.append("{0} {1}".format(month,year))
xlabels = xlabels[2:-7] #start from march 2011, end may2016
Explanation: We remove data where it was unspecified what the boro was
End of explanation
plotcolors = [(1,0,103),(213,255,0),(255,0,86),(158,0,142),(14,76,161),(255,229,2),(0,95,57),\
(0,255,0),(149,0,58),(255,147,126),(164,36,0),(0,21,68),(145,208,203),(98,14,0)]
#rescaling rgb from 0-255 to 0 to 1
plotcolors = [(color[0]/float(255),color[1]/float(255),color[2]/float(255)) for color in plotcolors]
legendcolors = [mpatches.Patch(color = color) for color in plotcolors]
plt.figure(figsize = (15,10));
plt.stackplot(x,plotdata, colors = plotcolors);
plt.xticks(x,xlabels,rotation=90);
plt.xlim(0,len(xlabels))
plt.legend(legendcolors,sorted(borodict.keys()), bbox_to_anchor=(0.2, 1));
plt.title('Heating Complaints by Borough', size = 24)
plt.ylabel('Number of Complaints',size = 14)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().yaxis.set_ticks_position('left')
plt.gca().xaxis.set_ticks_position('bottom')
Explanation: A non-normalized plot:
The plot below shows raw complaint numbers, and is what we might expect: complaints about heat matter most during the heating season!
End of explanation
totalcounts = heatdata.groupby(by=['Year', 'Month']).count().Date.values
boros = heatbydate.index
normdict = {x:[] for x in boros}
normdict.pop('Unspecified')
for boro in normdict:
for i in range(0,len(plotdata[1])): # for all the values in each row
normp = float(borodict[boro][i])/float(totalcounts[i])
normdict[boro].append(normp*100)
normplotdata = np.zeros(len(borodict['BROOKLYN']))
for boro in sorted(normdict.keys()):
normplotdata = np.row_stack((normplotdata, normdict[boro]))
normplotdata = np.delete(normplotdata,(0),axis = 0)
plotcolors = [(1,0,103),(213,255,0),(255,0,86),(158,0,142),(14,76,161),(255,229,2),(0,95,57),\
(0,255,0),(149,0,58),(255,147,126),(164,36,0),(0,21,68),(145,208,203),(98,14,0)]
#rescaling rgb from 0-255 to 0 to 1
plotcolors = [(color[0]/float(255),color[1]/float(255),color[2]/float(255)) for color in plotcolors]
legendcolors = [mpatches.Patch(color = color) for color in plotcolors]
plt.figure(figsize = (15,10));
plt.stackplot(x,normplotdata, colors = plotcolors);
plt.xticks(x,xlabels,rotation=90);
plt.xlim(0,len(xlabels))
plt.ylim(0,100)
plt.legend(legendcolors,sorted(normdict.keys()), bbox_to_anchor=(0.2, 1));
plt.title('Heating Complaints by Borough (normalized)', size = 24)
plt.ylabel('% of Complaints',size = 14)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().yaxis.set_ticks_position('left')
plt.gca().xaxis.set_ticks_position('bottom')
Explanation: Normalizing data:
Next, we're going to normalize these data. This allows us to better visualize if the proportion of heating complaints changed by borough, over time (e.g., did one borough generate more/less complaints vs. others over time?).
What we want to do is divide each of the 5 rows by the total number of complaints by that column (the month)
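Incidentally, the same normalization can be written as a single vectorized operation (a sketch, assuming plotdata is the 5 x n_months array stacked above):
normplotdata = plotdata / plotdata.sum(axis=0) * 100  # divide each monthly column by its total, convert to percent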
End of explanation |
4,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Digit Recognition - CNN
Step1: Prepare Data
Step2: Define Network
Step3: Train Network
Step4: Visualize with Tensorboard
We have also requested the total_loss and total_accuracy scalars to be logged in our computational graph, so the above charts can also be seen from the built-in tensorboard tool. The scalars are logged to the directory given by LOG_DIR, so we can start the tensorboard tool from the command line | Python Code:
from __future__ import division, print_function
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score, confusion_matrix
import numpy as np
import matplotlib.pyplot as plt
import os
import tensorflow as tf
%matplotlib inline
DATA_DIR = "../../data"
TRAIN_FILE = os.path.join(DATA_DIR, "mnist_train.csv")
TEST_FILE = os.path.join(DATA_DIR, "mnist_test.csv")
LOG_DIR = os.path.join(DATA_DIR, "tf-mnist-cnn-logs")
MODEL_FILE = os.path.join(DATA_DIR, "tf-mnist-cnn")
IMG_SIZE = 28
LEARNING_RATE = 0.001
BATCH_SIZE = 128
NUM_CLASSES = 10
NUM_EPOCHS = 5
Explanation: MNIST Digit Recognition - CNN
End of explanation
def parse_file(filename):
xdata, ydata = [], []
fin = open(filename, "rb")
i = 0
for line in fin:
if i % 10000 == 0:
print("{:s}: {:d} lines read".format(
os.path.basename(filename), i))
cols = line.strip().split(",")
ydata.append(int(cols[0]))
xdata.append(np.reshape(np.array([float(x) / 255.
for x in cols[1:]]), (IMG_SIZE, IMG_SIZE, 1)))
i += 1
fin.close()
print("{:s}: {:d} lines read".format(os.path.basename(filename), i))
y = np.array(ydata)
X = np.array(xdata)
return X, y
Xtrain, ytrain = parse_file(TRAIN_FILE)
Xtest, ytest = parse_file(TEST_FILE)
print(Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape)
def datagen(X, y, batch_size=BATCH_SIZE, num_classes=NUM_CLASSES):
ohe = OneHotEncoder(n_values=num_classes)
while True:
shuffled_indices = np.random.permutation(np.arange(len(y)))
num_batches = len(y) // batch_size
for bid in range(num_batches):
batch_indices = shuffled_indices[bid*batch_size:(bid+1)*batch_size]
Xbatch = np.zeros((batch_size, X.shape[1], X.shape[2], X.shape[3]))
Ybatch = np.zeros((batch_size, num_classes))
for i in range(batch_size):
Xbatch[i] = X[batch_indices[i]]
Ybatch[i] = ohe.fit_transform(y[batch_indices[i]]).todense()
yield Xbatch, Ybatch
self_test_gen = datagen(Xtrain, ytrain)
Xbatch, Ybatch = self_test_gen.next()
print(Xbatch.shape, Ybatch.shape)
Explanation: Prepare Data
End of explanation
with tf.name_scope("data"):
X = tf.placeholder(tf.float32, [None, IMG_SIZE, IMG_SIZE, 1], name="X")
Y = tf.placeholder(tf.float32, [None, NUM_CLASSES], name="Y")
def conv2d(x, W, b, strides=1):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding="SAME")
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding="SAME")
def network(x, dropout=0.75):
# CONV-1: 5x5 kernel, channels 1 => 32
W1 = tf.Variable(tf.random_normal([5, 5, 1, 32]))
b1 = tf.Variable(tf.random_normal([32]))
conv1 = conv2d(x, W1, b1)
# MAXPOOL-1
conv1 = maxpool2d(conv1, 2)
# CONV-2: 5x5 kernel, channels 32 => 64
W2 = tf.Variable(tf.random_normal([5, 5, 32, 64]))
b2 = tf.Variable(tf.random_normal([64]))
conv2 = conv2d(conv1, W2, b2)
# MAXPOOL-2
conv2 = maxpool2d(conv2, k=2)
# FC1: input=(None, 7, 7, 64), output=(None, 1024)
flatten = tf.reshape(conv2, [-1, 7*7*64])
W3 = tf.Variable(tf.random_normal([7*7*64, 1024]))
b3 = tf.Variable(tf.random_normal([1024]))
fc1 = tf.add(tf.matmul(flatten, W3), b3)
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
# Output, class prediction (1024 => 10)
W4 = tf.Variable(tf.random_normal([1024, NUM_CLASSES]))
b4 = tf.Variable(tf.random_normal([NUM_CLASSES]))
pred = tf.add(tf.matmul(fc1, W4), b4)
return pred
# define network
Y_ = network(X, 0.75)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=Y_, labels=Y))
optimizer = tf.train.AdamOptimizer(
learning_rate=LEARNING_RATE).minimize(loss)
correct_pred = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar("loss", loss)
tf.summary.scalar("accuracy", accuracy)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
Explanation: Define Network
End of explanation
history = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
# tensorboard viz
logger = tf.summary.FileWriter(LOG_DIR, sess.graph)
train_gen = datagen(Xtrain, ytrain, BATCH_SIZE)
num_batches = len(Xtrain) // BATCH_SIZE
for epoch in range(NUM_EPOCHS):
total_loss, total_acc = 0., 0.
for bid in range(num_batches):
Xbatch, Ybatch = train_gen.next()
_, batch_loss, batch_acc, Ybatch_, summary = sess.run(
[optimizer, loss, accuracy, Y_, merged_summary_op],
feed_dict={X: Xbatch, Y:Ybatch})
# write to tensorboard
logger.add_summary(summary, epoch * num_batches + bid)
# accumulate to print once per epoch
total_acc += batch_acc
total_loss += batch_loss
total_acc /= num_batches
total_loss /= num_batches
print("Epoch {:d}/{:d}: loss={:.3f}, accuracy={:.3f}".format(
(epoch + 1), NUM_EPOCHS, total_loss, total_acc))
saver.save(sess, MODEL_FILE, (epoch + 1))
history.append((total_loss, total_acc))
logger.close()
losses = [x[0] for x in history]
accs = [x[1] for x in history]
plt.subplot(211)
plt.title("Accuracy")
plt.plot(accs)
plt.subplot(212)
plt.title("Loss")
plt.plot(losses)
plt.tight_layout()
plt.show()
Explanation: Train Network
End of explanation
BEST_MODEL = os.path.join(DATA_DIR, "tf-mnist-cnn-5")
saver = tf.train.Saver()
ys, ys_ = [], []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, BEST_MODEL)
test_gen = datagen(Xtest, ytest, BATCH_SIZE)
val_loss, val_acc = 0., 0.
num_batches = len(Xtest) // BATCH_SIZE  # number of batches in the test set
for _ in range(num_batches):
Xbatch, Ybatch = test_gen.next()
Ybatch_ = sess.run(Y_, feed_dict={X: Xbatch, Y:Ybatch})
ys.extend(np.argmax(Ybatch, axis=1))
ys_.extend(np.argmax(Ybatch_, axis=1))
acc = accuracy_score(ys_, ys)
cm = confusion_matrix(ys_, ys)
print("Accuracy: {:.4f}".format(acc))
print("Confusion Matrix")
print(cm)
Explanation: Visualize with Tensorboard
We have also requested the total_loss and total_accuracy scalars to be logged in our computational graph, so the above charts can also be seen from the built-in tensorboard tool. The scalars are logged to the directory given by LOG_DIR, so we can start the tensorboard tool from the command line:
$ cd ../../data
$ tensorboard --logdir=tf-mnist-cnn-logs
Starting TensorBoard 54 at http://localhost:6006
(Press CTRL+C to quit)
We can then view the [visualizations on tensorboard](http://localhost:6006)
Evaluate Network
End of explanation |
4,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-signal" data-toc-modified-id="Load-signal-1"><span class="toc-item-num">1 </span>Load signal</a></span></li><li><span><a href="#Compute-the-roughness" data-toc-modified-id="Compute-the-roughness-2"><span class="toc-item-num">2 </span>Compute the roughness</a></span></li><li><span><a href="#Compute-roughness-from-spectrum" data-toc-modified-id="Compute-roughness-from-spectrum-3"><span class="toc-item-num">3 </span>Compute roughness from spectrum</a></span></li></ul></div>
How to compute acoustic Roughness according to Daniel and Weber method
This tutorial explains how to use MOSQITO to compute the acoustic roughness of a signal according to the methodology from Daniel and Weber. For more information on the implementation and validation of the metric, you can refer to the documentation.
The following commands are used to import the necessary functions.
Step1: Load signal
For this tutorial, the test signal has been generated using the signals_test_generation script. The signal is imported from a .wav file. The tutorial Audio signal basic operations gives more information about the syntax of the import and the other supported file types. You can use any .wav file to perform the tutorial or you can download the signal from MOSQITO that is used in the following.
According to the roughness definition, an amplitude-modulated tone with a carrier frequency of 1 kHz and a modulation frequency of 70 Hz at a level of 60 dB should correspond to a roughness of 1 asper for a modulation depth of 1.
Step2: Compute the roughness
The acoustic Roughness is computed using the following command line. In addition to the signal (as ndarray) and the sampling frequency, the function takes 1 input argument "overlap" that indicates the overlapping coefficient for the time windows of 200ms (default is 0.5).
Step3: The function returns the roughness of the signal versus time
Step4: Compute roughness from spectrum
The commands below show how to compute the roughness from a frequency spectrum, either in complex values or amplitude values, using the functions from MOSQITO. One should note that only stationary values can be computed from a frequency input.
The input spectrum can be either 1D with size (Nfrequency) or 2D with size (Nfrequency x Ntime). The corresponding frequency axis can be either the same for all the spectra, with size (Nfrequency), or different for each spectrum, with size (Nfrequency x Ntime)
Step5: | Python Code:
# Add MOSQITO to the Python path
import sys
sys.path.append('..')
# Import numpy
import numpy as np
# Import plot function
import matplotlib.pyplot as plt
# Import multiple spectrum computation tool
from scipy.signal import stft
# Import mosqito functions
from mosqito.utils import load
from mosqito.sq_metrics import roughness_dw, roughness_dw_freq
# Import MOSQITO color sheme [Optional]
from mosqito import COLORS
# To get inline plots (specific to Jupyter notebook)
%matplotlib notebook
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-signal" data-toc-modified-id="Load-signal-1"><span class="toc-item-num">1 </span>Load signal</a></span></li><li><span><a href="#Compute-the-roughness" data-toc-modified-id="Compute-the-roughness-2"><span class="toc-item-num">2 </span>Compute the roughness</a></span></li><li><span><a href="#Compute-roughness-from-spectrum" data-toc-modified-id="Compute-roughness-from-spectrum-3"><span class="toc-item-num">3 </span>Compute roughness from spectrum</a></span></li></ul></div>
How to compute acoustic Roughness according to Daniel and Weber method
This tutorial explains how to use MOSQITO to compute the acoustic roughness of a signal according to the methodology from Daniel and Weber. For more information on the implementation and validation of the metric, you can refer to the documentation.
The following commands are used to import the necessary functions.
End of explanation
# Define path to the .wav file
# To be replaced by your own path
path = "../validations/sq_metrics/roughness_dw/input/Test_signal_fc1000_fmod70.wav"
# load signal
sig, fs = load(path,)
# plot signal
t = np.linspace(0, (len(sig) - 1) / fs, len(sig))
plt.figure(1)
plt.plot(t, sig, color=COLORS[0])
plt.xlabel('Time [s]')
plt.ylabel('Acoustic pressure [Pa]')
plt.xlim((0, 0.05))
Explanation: Load signal
For this tutorial, the test signal has been generated using the signals_test_generation script. The signal is imported from a .wav file. The tutorial Audio signal basic operations gives more information about the syntax of the import and the other supported file types. You can use any .wav file to perform the tutorial or you can download the signal from MOSQITO that is used in the following.
According to the roughness definition, an amplitude-modulated tone with a carrier frequency of 1 kHz and a modulation frequency of 70 Hz at a level of 60 dB should correspond to a roughness of 1 asper for a modulation depth of 1.
End of explanation
r, r_spec, bark, time = roughness_dw(sig, fs, overlap=0)
Explanation: Compute the roughness
The acoustic Roughness is computed using the following command line. In addition to the signal (as ndarray) and the sampling frequency, the function takes 1 input argument "overlap" that indicates the overlapping coefficient for the time windows of 200ms (default is 0.5).
End of explanation
plt.figure(2)
plt.plot(time, r, color=COLORS[0])
plt.ylim(0,1.1)
plt.xlabel("Time [s]")
plt.ylabel("Roughness [Asper]")
Explanation: The function returns the roughness of the signal versus time:
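As a quick sanity check against the expected value of about 1 asper for this test signal, one can for instance average the time-varying output (illustrative only):
print("Mean roughness: {:.2f} asper".format(np.mean(r)))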
End of explanation
# Compute multiple spectra along time
freqs, time, spectrum = stft(sig, fs=fs)
# Compute roughness
R, R_spec, bark = roughness_dw_freq(spectrum,freqs)
# Plot the results
plt.figure(6)
plt.plot(time, R, color=COLORS[0])
plt.ylim(0,1)
plt.xlabel("Time [s]")
plt.ylabel("Roughness [Asper]")
Explanation: Compute roughness from spectrum
The commands below show how to compute the roughness from a frequency spectrum, either in complex values or amplitude values, using the functions from MOSQITO. One should note that only stationary values can be computed from a frequency input.
The input spectrum can be either 1D with size (Nfrequency) or 2D with size (Nfrequency x Ntime). The corresponding frequency axis can be either the same for all the spectra, with size (Nfrequency), or different for each spectrum, with size (Nfrequency x Ntime)
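For the 1D amplitude case, a minimal sketch could look like the following (assuming the sig and fs loaded above; the FFT scaling is only illustrative):
n = len(sig)
spectrum_amp = np.abs(np.fft.rfft(sig)) * 2 / n  # one-sided amplitude spectrum
freqs_1d = np.fft.rfftfreq(n, d=1 / fs)
R_1d, R_spec_1d, bark_1d = roughness_dw_freq(spectrum_amp, freqs_1d)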
End of explanation
from datetime import date
print("Tutorial generation date:", date.today().strftime("%B %d, %Y"))
Explanation:
End of explanation |
4,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
from string import punctuation
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int= {word:ii for ii,word in enumerate(vocab )}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
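A quick usage sketch with a made-up word list (illustrative only):
vocab_to_int, int_to_vocab = create_lookup_tables(['moe', 'bart', 'homer', 'moe'])
assert int_to_vocab[vocab_to_int['bart']] == 'bart'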
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
tknz_dict = {'.':'Period',
',':'Comma',
'"':'Quotationmark',
';':'Semicolon',
'!':'Exclamationmark',
'?':'Questionmark',
'(':'LeftParentheses',
')':'RightParentheses',
'--':'Dash',
'\n':'Return'}
return tknz_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
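For intuition, the preprocessing helper applies this dictionary roughly like this (a sketch, not the exact helper code):
token_dict = token_lookup()
for key, token in token_dict.items():
    text = text.replace(key, ' {} '.format(token))
words = text.lower().split()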
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None],name = 'input' )
targets = tf.placeholder(tf.int32, [None, None])
learningrate = tf.placeholder(tf.float32)
return input, targets, learningrate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm_layers = 2
#keep_prob = 0.75
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
#drop = tf.contrib.rnn.DropoutWrapper(lstm,output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm for _ in range(lstm_layers)] )
#cell = tf.contrib.rnn.MultiRNNCell([lstm])
initial_state = cell.zero_state(batch_size,tf.float32)
initial_state = tf.identity(initial_state,name= 'initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
e = tf.Variable(tf.random_uniform([vocab_size, embed_dim],-1,1))
embed = tf.nn.embedding_lookup(e,input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell,inputs,dtype = tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs,final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)  # linear output layer sized to the vocabulary
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
batches_size = batch_size* seq_length
n_batches = len(int_text)//batches_size
batch_x = np.array(int_text[:n_batches* batches_size])
batch_y = np.array(int_text[1:n_batches* batches_size+1])
batch_y[-1] = batch_x[0]
batch_x_reshape = batch_x.reshape(batch_size,-1)
batch_y_reshape = batch_y.reshape(batch_size,-1)
batches = np.zeros([n_batches, 2, batch_size, seq_length])
for i in range(n_batches):
batches[i][0]= batch_x_reshape[ : ,i * seq_length: (i+1)* seq_length]
batches[i][1]= batch_y_reshape[ : ,i * seq_length: (i+1)* seq_length]
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 10
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 50
# Learning Rate
learning_rate = 0.1
# Show stats for every n number of batches
show_every_n_batches = 50
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return inputs, initial_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
pick_word =[]
for idx,prob in enumerate(probabilities):
if prob >= 0.05:
pick_word.append(int_to_vocab[idx])
rand = np.random.randint(0, len(pick_word))
return str(pick_word[rand])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
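A common alternative is to sample directly from the predicted distribution, for example (a sketch):
word_id = np.random.choice(len(probabilities), p=probabilities)
next_word = int_to_vocab[word_id]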
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
4,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Discovery of prime integrals with dCGP
Let's first import dcgpy and pyaudi and set things up so as to use dCGP on gduals defined over vectorized floats
Step1: We consider a set of differential equations in the form
Step2: We define 50 random control points where we check that the prime integral holds
Step3: Simple pendulum
Consider the simple pendulum problem. In particular its differential formulation
Step4: We define 50 random control points where we check that the prime integral holds
Step5: The two-body problem
Consider the two body problem. In particular its differential formulation in polar coordinates
Step6: We define 50 random control of points where we check that the prime integral holds | Python Code:
from dcgpy import expression_gdual_vdouble as expression
from dcgpy import kernel_set_gdual_vdouble as kernel_set
from pyaudi import gdual_vdouble as gdual
from matplotlib import pyplot as plt
import numpy as np
from numpy import sin, cos
from random import randint, random
np.seterr(all='ignore') # avoids numpy complaining about malformed expressions being evaluated early on
%matplotlib inline
Explanation: Discovery of prime integrals with dCGP
Let's first import dcgpy and pyaudi and set things up so as to use dCGP on gduals defined over vectorized floats
End of explanation
kernels = kernel_set(["sum", "mul", "pdiv", "diff"])() # note the call operator (returns the list of kernels)
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
Explanation: We consider a set of differential equations in the form:
$$
\left\{
\begin{array}{c}
\frac{dx_1}{dt} = f_1(x_1, \cdots, x_n) \\
\vdots \\
\frac{dx_n}{dt} = f_n(x_1, \cdots, x_n)
\end{array}
\right.
$$
and we search for expressions $P(x_1, \cdots, x_n) = 0$ which we call prime integrals of motion.
The straightforward approach to such a search would be to represent $P$ via a $dCGP$ program and evolve its chromosome so that the expression, computed along points of some trajectory, evaluates to zero. This naive approach leads to the evolution of trivial programs that are identically zero and that "do not represent the intrinsic relations between state variables" - Schmidt 2009.
Let us, though, differentiate $P$ along a trajectory solution to the ODEs above. We get:
$$
\frac{dP}{dt} = \sum_{i=0}^n \frac{\partial P}{\partial x_i} \frac{dx_i}{dt} = \sum_{i=0}^n \frac{\partial P}{\partial x_i} f_i = 0
$$
We may try to evolve the expression $P$ so that the above relation is satisfied on chosen points (belonging to a real trajectory or just defined on a grid). To avoid evolution going towards trivial solutions, unlike Schmidt, we suppress all mutations that give rise to expressions for which $\sum_{i=0}^n \left(\frac{\partial P}{\partial x_i}\right)^2 = 0$. That is, expressions that do not depend on the state.
A mass spring system
As a simple example, consider the following mass-spring system.
The ODEs are:
$$\left\{
\begin{array}{l}
\dot v = -kx \\
\dot x = v
\end{array}\right.
$$
We define a dCGP having three inputs (the state and the constant $k$) and one output ($P$)
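For reference, one prime integral we expect the evolution to recover (up to an arbitrary function of it) is the energy:
$$
P(x, v) = \frac 12 v^2 + \frac 12 k x^2, \qquad \frac{dP}{dt} = v\dot v + kx\dot x = v(-kx) + kx\,v = 0
$$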
End of explanation
n_points = 50
x = []
v = []
k = []
for i in range(n_points):
x.append(random()*2 + 2)
v.append(random()*2 + 2)
k.append(random()*2 + 2)
x = gdual(x,"x",1)
v = gdual(v,"v",1)
k = gdual(k)
def fitness_call(dCGP, x, v, k):
res = dCGP([x,v,k])[0]
dPdx = np.array(res.get_derivative({"dx": 1}))
dPdv = np.array(res.get_derivative({"dv": 1}))
xcoeff = np.array(x.constant_cf)
vcoeff = np.array(v.constant_cf)
kcoeff = np.array(k.constant_cf)
err = dPdx/dPdv - kcoeff * xcoeff / vcoeff
return sum(err * err), 3
# We run an evolutionary strategy ES(1 + offspring)
def run_experiment(max_gen, offsprings, dCGP, x, v, k, screen_output=False):
chromosome = [1] * offsprings
fitness = [1] *offsprings
best_chromosome = dCGP.get()
best_fitness = 1e10
for g in range(max_gen):
for i in range(offsprings):
check = 0
while(check < 1e-3):
dCGP.set(best_chromosome)
dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes
fitness[i], check = fitness_call(dCGP, x,v,k)
chromosome[i] = dCGP.get()
for i in range(offsprings):
if fitness[i] <= best_fitness:
if (fitness[i] != best_fitness) and screen_output:
dCGP.set(chromosome[i])
print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["x","v","k"]))
best_chromosome = chromosome[i]
best_fitness = fitness[i]
if best_fitness < 1e-12:
break
dCGP.set(best_chromosome)
return g, dCGP
# We run nexp experiments to accumulate statistic for the ERT
nexp = 100
offsprings = 10
stop = 2000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
g, dCGP = run_experiment(stop, 10, dCGP, x,v,k, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["x","v","k"]), " a.k.a ", dCGP.simplify(["x","v","k"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
print(one_sol.simplify(["x","v","k"]))
plt.rcParams["figure.figsize"] = [20,20]
one_sol.visualize(["x","v","k"])
Explanation: We define 50 random control points where we check that the prime integral holds: $x \in [2,4]$, $v \in [2,4]$ and $k \in [2, 4]$
End of explanation
kernels = kernel_set(["sum", "mul", "pdiv", "diff","sin","cos"])() # note the call operator (returns the list of kernels)
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
Explanation: Simple pendulum
Consider the simple pendulum problem. In particular its differential formulation:
The ODEs are:
$$\left\{
\begin{array}{l}
\dot \omega = - \frac gL\sin\theta \\
\dot \theta = \omega
\end{array}\right.
$$
We define a dCGP having three inputs (the state and the constant $\frac gL$) and one output ($P$)
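For reference, the energy-like prime integral of this system reads:
$$
P(\theta, \omega) = \frac 12 \omega^2 - \frac gL \cos\theta, \qquad \frac{dP}{dt} = \omega\dot\omega + \frac gL \sin\theta\,\dot\theta = 0
$$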
End of explanation
n_points = 50
omega = []
theta = []
c = []
for i in range(n_points):
omega.append(random()*10 - 5)
theta.append(random()*10 - 5)
c.append(random()*10)
omega = gdual(omega,"omega",1)
theta = gdual(theta,"theta",1)
c = gdual(c)
def fitness_call(dCGP, theta, omega, c):
res = dCGP([theta, omega, c])[0]
dPdtheta = np.array(res.get_derivative({"dtheta": 1}))
dPdomega = np.array(res.get_derivative({"domega": 1}))
thetacoeff = np.array(theta.constant_cf)
omegacoeff = np.array(omega.constant_cf)
ccoeff = np.array(c.constant_cf)
err = dPdtheta/dPdomega + (-ccoeff * np.sin(thetacoeff)) / omegacoeff
check = sum(dPdtheta*dPdtheta + dPdomega*dPdomega)
return sum(err * err ), check
# We run an evolutionary strategy ES(1 + offspring)
def run_experiment(max_gen, offsprings, dCGP, theta, omega, c, screen_output=False):
chromosome = [1] * offsprings
fitness = [1] *offsprings
best_chromosome = dCGP.get()
best_fitness = 1e10
for g in range(max_gen):
for i in range(offsprings):
check = 0
while(check < 1e-3):
dCGP.set(best_chromosome)
dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes
fitness[i], check = fitness_call(dCGP, theta, omega, c)
chromosome[i] = dCGP.get()
for i in range(offsprings):
if fitness[i] <= best_fitness:
if (fitness[i] != best_fitness) and screen_output:
dCGP.set(chromosome[i])
print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["theta","omega","c"]))
best_chromosome = chromosome[i]
best_fitness = fitness[i]
if best_fitness < 1e-12:
break
dCGP.set(best_chromosome)
return g, dCGP
# We run nexp experiments to accumulate statistic for the ERT
nexp = 100
offsprings = 10
stop = 2000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
g, dCGP = run_experiment(stop, 10, dCGP, theta, omega, c, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["theta","omega","c"]), " a.k.a ", dCGP.simplify(["theta","omega","c"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
print(one_sol.simplify(["theta","omega","c"]))
plt.rcParams["figure.figsize"] = [20,20]
one_sol.visualize(["theta","omega","c"])
Explanation: We define 50 random control points where we check that the prime integral holds: $\omega \in [-1, 1]$, $\theta \in [-1, 1]$, and $\frac gL \in [1,2]$
End of explanation
kernels = kernel_set(["sum", "mul", "pdiv", "diff"])() # note the call operator (returns the list of kernels)
dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
Explanation: The two-body problem
Consider the two body problem. In particular its differential formulation in polar coordinates:
The ODEs are:
$$\left{
\begin{array}{l}
\dot v = -\frac\mu{r^2} + r\omega^2 \
\dot \omega = - 2 \frac{v\omega}{r} \
\dot r = v \
\dot \theta = \omega
\end{array}\right.
$$
We define a dCGP having five inputs (the state and the constant $\mu$) and one output ($P$)
End of explanation
n_points = 50
v = []
omega = []
r = []
theta = []
mu = []
for i in range(n_points):
v.append(random()*2 + 2)
omega.append(random()*1 + 1)
r.append(random() + 0.1)
theta.append(random()*2 + 2)
mu.append(random() + 1)
r = gdual(r,"r",1)
omega = gdual(omega,"omega",1)
v = gdual(v,"v",1)
theta = gdual(theta,"theta",1)
mu = gdual(mu)
## Use this fitness if energy conservation is to be found (it basically forces the expression to depend on v)
def fitness_call(dCGP, r, v, theta, omega, mu):
res = dCGP([r, v, theta, omega, mu])[0]
dPdr = np.array(res.get_derivative({"dr": 1}))
dPdv = np.array(res.get_derivative({"dv": 1}))
dPdtheta = np.array(res.get_derivative({"dtheta": 1}))
dPdomega = np.array(res.get_derivative({"domega": 1}))
rcoeff = np.array(r.constant_cf)
vcoeff = np.array(v.constant_cf)
thetacoeff = np.array(theta.constant_cf)
omegacoeff = np.array(omega.constant_cf)
mucoeff = np.array(mu.constant_cf)
err = dPdr / dPdv + (-mucoeff/rcoeff**2 + rcoeff*omegacoeff**2) / vcoeff + dPdtheta / dPdv / vcoeff * omegacoeff + dPdomega / dPdv / vcoeff * (-2*vcoeff*omegacoeff/rcoeff)
check = sum(dPdr*dPdr + dPdv*dPdv + dPdomega*dPdomega + dPdtheta*dPdtheta)
return sum(err * err), check
## Use this fitness if any conservation is to be found (will always converge to angular momentum)
def fitness_call_free(dCGP, r, v, theta, omega, mu):
res = dCGP([r, v, theta, omega, mu])[0]
dPdr = np.array(res.get_derivative({"dr": 1}))
dPdv = np.array(res.get_derivative({"dv": 1}))
dPdtheta = np.array(res.get_derivative({"dtheta": 1}))
dPdomega = np.array(res.get_derivative({"domega": 1}))
rcoeff = np.array(r.constant_cf)
vcoeff = np.array(v.constant_cf)
thetacoeff = np.array(theta.constant_cf)
omegacoeff = np.array(omega.constant_cf)
mucoeff = np.array(mu.constant_cf)
err = dPdr * vcoeff + dPdv * (-mucoeff/rcoeff**2 + rcoeff*omegacoeff**2) + dPdtheta * omegacoeff + dPdomega * (-2*vcoeff*omegacoeff/rcoeff)
check = sum(dPdr*dPdr + dPdv*dPdv +dPdomega*dPdomega+ dPdtheta*dPdtheta)
return sum(err * err ), check
# We run an evolutionary strategy ES(1 + offspring)
def run_experiment(max_gen, offsprings, dCGP, r, v, theta, omega, mu, obj_fun, screen_output=False):
chromosome = [1] * offsprings
fitness = [1] *offsprings
best_chromosome = dCGP.get()
best_fitness = 1e10
for g in range(max_gen):
for i in range(offsprings):
check = 0
while(check < 1e-3):
dCGP.set(best_chromosome)
dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes
fitness[i], check = obj_fun(dCGP, r, v, theta, omega, mu)
chromosome[i] = dCGP.get()
for i in range(offsprings):
if fitness[i] <= best_fitness:
if (fitness[i] != best_fitness) and screen_output:
dCGP.set(chromosome[i])
print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["r","v","theta","omega","mu"]))
best_chromosome = chromosome[i]
best_fitness = fitness[i]
if best_fitness < 1e-12:
break
dCGP.set(best_chromosome)
return g, dCGP
# We run nexp experiments to accumulate statistics for the ERT (angular momentum case)
nexp = 100
offsprings = 10
stop = 2000 #100000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=5, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
g, dCGP = run_experiment(stop, 10, dCGP, r, v, theta, omega, mu, fitness_call_free, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["r","v","theta","omega","mu"]), " a.k.a ", dCGP.simplify(["r","v","theta","omega","mu"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
print(one_sol.simplify(["r","v","theta","omega","mu"]))
plt.rcParams["figure.figsize"] = [20,20]
one_sol.visualize(["r","v","theta","omega","mu"])
# We run nexp experiments to accumulate statistics for the ERT (energy conservation case)
nexp = 100
offsprings = 10
stop = 100000
res = []
print("restart: \t gen: \t expression:")
for i in range(nexp):
dCGP = expression(inputs=5, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213))
g, dCGP = run_experiment(stop, 10, dCGP, r, v, theta, omega, mu, fitness_call, False)
res.append(g)
if g < (stop-1):
print(i, "\t\t", res[i], "\t", dCGP(["r","v","theta","omega","mu"]), " a.k.a ", dCGP.simplify(["r","v","theta","omega","mu"]))
one_sol = dCGP
res = np.array(res)
ERT = sum(res) / sum(res<(stop-1))
print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings)
Explanation: We define 50 random control points where we check that the prime integral holds: $r \in [0.1,1.1]$, $v \in [2,4]$, $\omega \in [1,2]$, $\theta \in [2,4]$ and $\mu \in [1,2]$
End of explanation |
4,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Machine Learning
Author
Step1: Ok, let's get the data then, and have a look at some examples. It seems that there is a lot of variation for some numbers there. Can you decide with high confidence which number you see in each of the examples?
Step2: Feature Generation
This tutorial works with classical machine learning algorithms. That means, we first have to find suitable features, and then apply a classifier on those features.
A classifier usually requires a vector of numbers (so-called feature vectors) and an associated label (also called target) for each item we want to classify. In our example, we have to magically transform each image into a feature vector. For the sake of simplicity we generate a feature vector of dimension $w\times h$, where $w$ and $h$ are the width and the height of each image. Then we copy the pixel values of the image row-wise to the feature vector. This means the pixel at location $(i,j)$ in the image ends up at location $i*w + j$ in the feature vector. As our input data is 28x28 pixels this gives us a feature vector with 784 entries for each image. Each value in the feature vector is between 0 and 255, with 0 indicating a black pixel and 255 a white one.
Step3: You might have wondered why there are two variables X_train and X_test. This is because the library we loaded the data set from already provides a split of the data into training and test set. Let's have a look at the sizes of both. We can see (below), that the training set has 60,000 feature vectors and the testing set has 10,000 feature vectors. The dimensions of the feature vectors is 784. Mathematically, both are matrices and can be written as $X_{train}^{(60000,784)}$ and $X_{test}^{(10000,784)}$. For fun, we will also have a look at the first entry in the training data set. Do you have a chance to identify the digit from that?
Note, that in the printout one example looks like a matrix, but it is not! It's just python's way of printing a very long vector in a nice way. You can identify the start and end of a vector by the brackets.
Step4: Training the Naive Bayes Classifier
Now we are nearly ready to train our first classifier. One thing still needs to be said. Classification is a supervised machine learning task. This means, we give the classifier a feature vector together with the desired output (the target). The targets were also loaded from the original data set and reside in the vector $y_{train}$. Putting things together, the classifier gets a matrix, which contains one row for each image and as many columns as we have features. And it also gets a vector of targets, that is as long as we have images. Thus the number of rows in $X_{train}$ is equal to the length of $y_{train}$. Isn't this neat?
Now we finally can train our first model (a model is a trained classifier). The scikit-learn library in Python uses standard interfaces to all classifiers. This means, no matter which classifier you want to use, the functions you have to call are always named the same (but they might have different parameters).
Step5: Evaluating the Naive Bayes classifier
Ok, nice. We have trained a model. In the code, the model is called clf_nb. But, is it a good model? To answer this, we need to evaluate the model on data it has not yet seen, that is on X_test and the respective labels y_test.
We do this in two steps
Step6: Training the decision tree classifier
We train the decision tree classifier in a similar manner as we trained the Naive Bayes classifier. Note, that the function calls are equivalent.
Note
Step7: Evaluating the decision tree classifier
Now, let's see how well this model performs on the test set.
It achieves an accuracy of about 88%, thus getting only 12% of the examples wrong. This seems a bit better than the Naive Bayes classifier.
Step8: More detailed error analysis
Can we find out more about the mistakes both models still make? If we could, we could probably find ways to improve them. Or it might also be the case that we find errors in the underlying data (e.g. mislabeled images, images that do not contain digits at all). The latter case is in this example rather unlikely, since this data set has been studied already for a long time and by many different researchers and practitioners.
Confusion matrices
One thing we could ask is which digits get often confused with one another. Or more generally, which classes often get confused? We can easily assess this, since we have the predictions and the true labels. So, for each digit we just have to count how often label $l$ in the ground truth is predicted as label $k$. We display this in matrix form; this matrix is called the class confusion matrix $C$. Entry $(i,j)$ in this matrix holds the count of how often the target $i$ was predicted as $j$.
The strength of the confusion (i.e., the total number of misclassified examples) is indicated with a color in the respective cell. | Python Code:
# pythons scientific computing package and a random number generator
import numpy as np
import random
from keras.datasets import mnist
# machine learning classifiers and metrics
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
# plotting tool
import matplotlib.pyplot as plt
Explanation: Introduction to Machine Learning
Author: Christin Seifert, licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/
This is a tutorial for implementing a simple machine learning pipeline aimed at machine learning beginners.
In this notebook we will
* train classifiers to recognize hand-written digits
* evaluate how well the classifiers do that in general
* dig a little deeper on where they might have problems
It is assumed that you have some general knowledge on
* what a decision tree is and how it works
* how the Naive Bayes classifier works
* training and testing splits of data sets
* evaluation measures for classification (namely accuracy)
Setup
First, we import all the python libraries we will need later
End of explanation
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
#show 20 random images from the data set
n_images = X_train.shape[0]
n_rows=4
n_cols=5
for i in range(1,n_rows*n_cols+1):
im_idx = random.randint(0,n_images-1)
pixels=X_train[im_idx]
plt.subplot(n_rows, n_cols, i)
plt.imshow(pixels, cmap='gray')
plt.axis('off')
plt.show()
Explanation: Ok, let's get the data then, and have a look at some examples. It seems that there is a lot of variation for some numbers there. Can you decide with high confidence which number you see in each of the examples?
End of explanation
# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
Explanation: Feature Generation
This tutorial works with classical machine learning algorithms. That means, we first have to find suitable features, and then apply a classifier on those features.
A classifier usually requires a vector of numbers (so-called feature vectors) and an associated label (also called target) for each item we want to classify. In our example, we have to magically transform each image into a feature vector. For the sake of simplicity we generate a feature vector of dimension $w\times h$, where $w$ and $h$ are the width and the height of each image. Then we copy the pixel values of the image row-wise to the feature vector. This means the pixel at location $(i,j)$ in the image ends up at location $i*w + j$ in the feature vector. As our input data is 28x28 pixels this gives us a feature vector with 784 entries for each image. Each value in the feature vector is between 0 and 255, with 0 indicating a black pixel and 255 a white one.
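As a quick, self-contained check of this index arithmetic (a toy 3x4 image, independent of the MNIST arrays):
```python
import numpy as np

h, w = 3, 4                           # a tiny toy "image"
img = np.arange(h * w).reshape(h, w)  # pixel values 0..11
flat = img.reshape(-1)                # row-wise flattening, as applied to X_train/X_test

i, j = 1, 2                           # pixel in row i, column j
assert flat[i * w + j] == img[i, j]   # it ends up at position i*w + j
```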
End of explanation
# investigate the size of the feature matrices
print(X_train.shape)
print(X_test.shape)
# inspect one example
print(X_train[1])
Explanation: You might have wondered why there are two variables X_train and X_test. This is because the library we loaded the data set from already provides a split of the data into training and test set. Let's have a look at the sizes of both. We can see (below) that the training set has 60,000 feature vectors and the testing set has 10,000 feature vectors. The dimension of the feature vectors is 784. Mathematically, both are matrices and can be written as $X_{train}^{(60000,784)}$ and $X_{test}^{(10000,784)}$. For fun, we will also have a look at the first entry in the training data set. Do you have a chance to identify the digit from that?
Note, that in the printout one example looks like a matrix, but it is not! It's just python's way of printing a very long vector in a nice way. You can identify the start and end of a vector by the brackets.
End of explanation
# initialize the model with standard parameters
clf_nb = MultinomialNB()
# train the model
clf_nb.fit(X_train,y_train)
Explanation: Training the Naive Bayes Classifier
Now we are nearly ready to train our first classifier. One thing still needs to be said. Classification is a supervised machine learning task. This means, we give the classifier a feature vector together with the desired output (the target). The targets were also loaded from the original data set and reside in the vector $y_{train}$. Putting things together, the classifier gets a matrix, which contains one row for each image and as many columns as we have features. And it also gets a vector of targets, that is as long as we have images. Thus the number of rows in $X_{train}$ is equal to the length of $y_{train}$. Isn't this neat?
Now we finally can train our first model (a model is a trained classifier). The scikit-learn library in Python uses standard interfaces to all classifiers. This means, no matter which classifier you want to use, the functions you have to call are always named the same (but they might have different parameters).
End of explanation
# make predictions with the NB classifier
y_test_pred_nb = clf_nb.predict(X_test);
a_nb = accuracy_score(y_test, y_test_pred_nb);
print(a_nb)
Explanation: Evaluating the Naive Bayes classifier
Ok, nice. We have trained a model. In the code, the model is called clf_nb. But, is it a good model? To answer this, we need to evaluate the model on data it has not yet seen, that is on X_test and the respective labels y_test.
We do this in two steps:
We ask the classifier about its opinion by only giving it the test data (without the labels). This step is called prediction. We store the results in a vector y_test_pred_nb.
We count how often the classifier's predictions are the same as the correct labels. This step is called evaluation. The counting is already conveniently implemented in the library, so we only need to call a function accuracy_score() which returns the ratio of correct predictions to total items. If you multiply this ratio by 100 you get a value that can be interpreted as "the classifier is ... percent correct on the test data".
Thus, we can conclude that the classifier has an accuracy of approximately 85%. Or in other words, it misclassifies 15% of the examples. Is this good or bad? Has it learned something? What if we got a value of 50%? Would this be good?
Whether it has learned something can be answered quite easily. We could simply compare it to random guessing. There are 10 classes in the data set (digits from 0 to 9). In the test set, there is an equal number of examples for each class. Or in other words, the examples are uniformly distributed over the classes. You could easily check this by inspecting y_test. If the classifier guessed randomly which digit it sees, it would have a 10% chance of getting it right. So it has learned quite a lot already by only looking at the pixel values.
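As an optional sanity check that is not part of the original tutorial, scikit-learn's DummyClassifier makes this chance-level baseline explicit. A minimal sketch, reusing the variables defined in the surrounding cells:
```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# A "classifier" that guesses uniformly at random among the 10 digits.
clf_dummy = DummyClassifier(strategy="uniform", random_state=0)
clf_dummy.fit(X_train, y_train)
a_dummy = accuracy_score(y_test, clf_dummy.predict(X_test))
print(a_dummy)  # roughly 0.10, i.e. chance level
```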
End of explanation
clf_dt = DecisionTreeClassifier();
clf_dt.fit(X_train,y_train)
Explanation: Training the decision tree classifier
We train the decision tree classifier in a similar manner as we trained the Naive Bayes classifier. Note, that the function calls are equivalent.
Note: you might notice that training a decision tree can take a few seconds.
End of explanation
# make predictions with the decision tree classifier
y_test_pred_dt = clf_dt.predict(X_test)
a_dt = accuracy_score(y_test, y_test_pred_dt)
print(a_dt)
Explanation: Evaluating the decision tree classifier
Now, let's see how well this model performs on the test set.
It achieves an accuracy of about 88%, thus getting only 12% of the examples wrong. This seems a bit better than the Naive Bayes classifier.
End of explanation
# get the confusion matrices for both classifiers
cm_nb = confusion_matrix(y_test, y_test_pred_nb);
cm_dt = confusion_matrix(y_test, y_test_pred_dt);
# plot the confusion matrices nicely
plt.subplot(1, 2, 1)
plt.title('Decision Tree', fontsize=16)
plt.imshow(cm_dt, interpolation='nearest',cmap=plt.cm.binary);
plt.tight_layout();
plt.colorbar();
plt.ylabel('True label');
plt.xlabel('Predicted label');
plt.xticks(np.arange(10));
plt.yticks(np.arange(10));
plt.subplot(1, 2, 2)
plt.title('Naive Bayes', fontsize=16)
plt.imshow(cm_nb, interpolation='nearest',cmap=plt.cm.binary);
plt.tight_layout();
plt.colorbar();
plt.ylabel('True label');
plt.xlabel('Predicted label');
plt.xticks(np.arange(10));
plt.yticks(np.arange(10));
Explanation: More detailed error analysis
Can we find out more about the mistakes both models still make? If we could, we could probably find ways to improve them. Or it might also be the case that we find errors in the underlying data (e.g. mislabeled images, images that do not contain digits at all). The latter case is in this example rather unlikely, since this data set has been studied already for a long time and by many different researchers and practitioners.
Confusion matrices
One thing we could ask is which digits get often confused with one another. Or more generally, which classes often get confused? We can easily assess this, since we have the predictions and the true labels. So, for each digit we just have to count how often label $l$ in the ground truth is predicted as label $k$. We display this in matrix form; this matrix is called the class confusion matrix $C$. Entry $(i,j)$ in this matrix holds the count of how often the target $i$ was predicted as $j$.
The strength of the confusion (i.e., the total number of misclassified examples) is indicated with a color in the respective cell.
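As a small optional extension, the per-class accuracy can be read directly off the diagonal of a confusion matrix. A short sketch, assuming the matrices cm_nb and cm_dt from the accompanying code cell:
```python
# Per-digit accuracy: correct predictions divided by the number of true examples per class.
per_class_nb = cm_nb.diagonal() / cm_nb.sum(axis=1)
per_class_dt = cm_dt.diagonal() / cm_dt.sum(axis=1)
for digit, (nb_acc, dt_acc) in enumerate(zip(per_class_nb, per_class_dt)):
    print("digit %d: NB %.2f, DT %.2f" % (digit, nb_acc, dt_acc))
```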
End of explanation |
4,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
This is the first assignment from Andrew Ng's Machine Learning class. In this notebook we perform linear regression with one variable and with multiple variables.
Importing Libraries
Step1: Linear Regression with One Variable
Loading data from text file into a numpy ndarray.
Step2: Creating training data matrix X and target values y
Step3: Plotting the data.
Step4: Gradient Descent Function
First building helper function to make predictions given data and the weights and bias.
Step5: This function takes as input the training data X_data and y_data (the target values) and the number of iterations and learning rate for gradient descent. This function returns the weights and bias learned by gradient descent and the value of the cost function after each iteration.
Step6: Best Fit Line from gradient descent
Plotting the cost function for several values of the learning rate.
Step7: Choosing the learning rate with the fastest convergence
Step8: Learning Parameters with the Normal Equation
Step9: Comparing the parameters learned by the normal equation to the parameters learned by gradient descent.
Step10: Linear Regression with Multiple Variables
Loading data from text file into a numpy ndarray.
Step11: Creating training data matrix X and target values y
Step12: Feature Normalization or Feature Scaling
Building a function to perform feature scaling. This will allow gradient descent to converge more quickly.
Step13: Multivariable Gradient Descent
Plotting the cost function for several values of the learning rate.
Step14: Choosing the learning rate with the fastest convergence
Step15: Learning Parameters with the Normal Equation
We do not need to apply feature scaling to learn the weights with the normal equation.
Step16: Getting the mean and standard deviation values from the dataset to rescale the weights learned with feature scaling to compare to the weights learned by the normal equation (with no feature scaling).
Step17: In the equation below $w_{1}$ is the weight learned for the first feature with the normal equation without feature scaling and $w_{1} ^{\prime}$ is the weight learned for the first feature with gradient descent with feature scaling.
$$ w_{1} = \frac{\sigma_{y}}{\sigma_{x1}} w_{1} ^{\prime} $$
Step18: In the equation below $w_{2}$ is the weight learned for the second feature with the normal equation without feature scaling and $w_{2} ^{\prime}$ is the weight learned for the second feature with gradient descent with feature scaling.
$$ w_{2} = \frac{\sigma_{y}}{\sigma_{x2}} w_{2} ^{\prime} $$
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Linear Regression
This is the first assignment from Andrew Ng's Machine Learning class. In this notebook we perform linear regression with one variable and with multiple variables.
Importing Libraries
End of explanation
exercise_1_data = np.genfromtxt("ex1data1.txt", delimiter=",", dtype = float)
print(exercise_1_data.shape)
exercise_1_data[0:5]
Explanation: Linear Regression with One Variable
Loading data from text file into a numpy ndarray.
End of explanation
X_train = exercise_1_data[:,0]
X_train = X_train.reshape( (len(X_train),1) )
print(X_train.shape)
X_train[0:5]
y_train = exercise_1_data[:,1]
y_train = y_train.reshape( (len(y_train),1) )
print(y_train.shape)
y_train[0:5]
Explanation: Creating training data matrix X and target values y
End of explanation
plt.figure(figsize=(8, 6))
plt.scatter(X_train, y_train, marker = "+", color = "red", s=70)
plt.xlabel("Population of City in 10,000s", fontsize = 16)
plt.ylabel("Profit in $10,00s", fontsize = 16)
plt.xlim((4.0,24.0))
plt.ylim((-5.0,25.0))
Explanation: Plotting the data.
End of explanation
def predicted_values(bias, weights, X_data):
predictions = bias + np.dot( X_data, weights )
return predictions
Explanation: Gradient Descent Function
First building helper function to make predictions given data and the weights and bias.
End of explanation
def grad_desc(X_data, y_data, N_iterations, learning_rate):
# Getting the number of training examples and features from the dataset
m_train = y_data.shape[0]
N_features = X_data.shape[1]
# Initializing all the weights and bias term to 0.0
# The weights is a vector of dimensions ( N_features, 1 )
weights = np.zeros( ( N_features, 1 ) , dtype = float )
bias = 0.0
# Initializing an array to hold the value of the cost function for each iteration
J_cost = np.zeros( N_iterations + 1 , dtype = float )
# Performing gradient descent over the N_iterations
for i in range( 0, N_iterations+1, 1):
# Using the current weights to calculate the predictions and errors
predictions = predicted_values(bias, weights, X_data)
errors = predictions - y_data
# Using the errors to calculate the cost function
J_cost[i] = (1.0/(2.0*m_train))*np.sum( errors**2 )
# Calculate db and dw, the values by which we will update the weights and bias
db = (1.0/m_train)*np.sum( errors )
dw = (1.0/m_train)* np.dot( X_data.T, errors )
# Updating the weight and bias with db and dw times the learning rate
bias -= learning_rate*db
weights -= learning_rate*dw
# Putting the learned parameters into a dictionary
learned_params = { "weights": weights, "bias": bias }
# Returning the learned parameters with gradient descent
# and the cost function after each iteration
return learned_params, J_cost
Explanation: This function takes as input the training data X_data and y_data (the target values) and the number of iterations and learning rate for gradient descent. This function returns the weights and bias learned by gradient descent and the value of the cost function after each iteration.
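For reference, each pass of the loop performs batch gradient descent on the mean squared error cost, with learning rate $\alpha$ and predictions $\hat{y} = Xw + b$:
$$J(w, b) = \frac{1}{2m}\sum_{i=1}^{m}\left(\hat{y}^{(i)} - y^{(i)}\right)^{2}, \qquad b \leftarrow b - \frac{\alpha}{m}\sum_{i=1}^{m}\left(\hat{y}^{(i)} - y^{(i)}\right), \qquad w \leftarrow w - \frac{\alpha}{m} X^{T}\left(\hat{y} - y\right)$$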
End of explanation
plt.figure(figsize=(8, 6))
learn_rate_vals = np.array([ 1.0e-4, 3.0e-4, 1.0e-3, 3.0e-3, 1.0e-2, 3.0e-2, 1.0e-1])
for i in range(len(learn_rate_vals)):
learned_params, J_cost = grad_desc( X_data = X_train, y_data = y_train,
N_iterations = 500, learning_rate = learn_rate_vals[i])
N_iterations = np.array(range(len(J_cost)))
plt.plot(N_iterations,J_cost, label = r"$\alpha = $" + str(learn_rate_vals[i]))
plt.xlim((0.0,100.0))
plt.ylim((0.0,35.0))
plt.legend(loc="upper right")
Explanation: Best Fit Line from gradient descent
Plotting the cost function for several values of the learning rate.
End of explanation
learned_params, J_cost = grad_desc( X_data = X_train, y_data = y_train, N_iterations = 1500, learning_rate = 0.01)
print( learned_params )
plt.figure(figsize=(8, 6))
N_iterations = np.array(range(len(J_cost)))
plt.plot(N_iterations,J_cost)
plt.xlim((0.0,100.0))
plt.ylim((0.0,35.0))
x_vals_fit = np.arange( np.min(X_train), np.max(X_train) + 0.01, 0.01 )
x_vals_fit = x_vals_fit.reshape(len(x_vals_fit),1)
y_vals_fit = predicted_values( bias = learned_params["bias"] , weights = learned_params["weights"], X_data = x_vals_fit )
plt.figure(figsize=(8, 6))
plt.scatter(X_train, y_train, marker = "+", color = "red")
plt.plot( x_vals_fit, y_vals_fit, color="blue" )
plt.xlabel("Population of City in 10,000s", fontsize = 16)
plt.ylabel("Profit in $10,00s", fontsize = 16)
plt.xlim((4.0,24.0))
plt.ylim((-5.0,25.0))
Explanation: Choosing the learning rate with the fastest convergence
End of explanation
def normal_eqn( X_data, y_data ):
# Adding a column of ones to X_data for the bias feature
X_data = np.column_stack( ( np.ones( ( X_data.shape[0], 1 ) ) , X_data ) )
inv_matrix = np.linalg.pinv( np.dot( X_data.T, X_data ) )
weights = np.dot( np.dot( inv_matrix, X_data.T ) , y_data )
# Putting the learned parameters into a dictionary
learned_params = { "weights": weights[1:,0], "bias": weights[0,0] }
# Returning the learned parameters with the normal equation
return learned_params
Explanation: Learning Parameters with the Normal Equation
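For reference, the closed-form solution this function implements (with a column of ones prepended to $X$ for the bias, and np.linalg.pinv used for numerical robustness) is the least-squares normal equation:
$$\theta = \left(X^{T}X\right)^{-1}X^{T}y$$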
End of explanation
normal_eq_params = normal_eqn( X_train , y_train )
print( normal_eq_params["bias"] )
print( learned_params["bias"] )
print( normal_eq_params["weights"] )
print( learned_params["weights"] )
Explanation: Comparing the parameters learned by the normal equation to the parameters learned by gradient descent.
End of explanation
exercise_2_data = np.genfromtxt("ex1data2.txt", delimiter=",", dtype = float)
print(exercise_2_data.shape)
exercise_2_data[0:5]
Explanation: Linear Regression with Multiple Variables
Loading data from text file into a numpy ndarray.
End of explanation
X_train = exercise_2_data[:, 0:2 ]
print(X_train.shape)
X_train[0:5]
y_train = exercise_2_data[:,-1]
y_train = y_train.reshape( (len(y_train),1) )
print(y_train.shape)
y_train[0:5]
Explanation: Creating training data matrix X and target values y
End of explanation
def normalize_features( X_data ):
X_data_norm = np.zeros( X_data.shape )
mean_std_features = {}
for i in range( X_data.shape[1] ):
X_i_mean = np.mean( X_data[:,i] )
X_i_std = np.std( X_data[:,i] )
X_data_norm[:,i] = ( X_data[:,i] - X_i_mean )/X_i_std
mean_std_features[str(i)] = np.array([X_i_mean, X_i_std])
return X_data_norm, mean_std_features
X_train_norm, X_mean_std_train_feats = normalize_features( X_train )
y_train_norm, y_mean_std_train_feats = normalize_features( y_train )
Explanation: Feature Normalization or Feature Scaling
Building a function to perform feature scaling. This will allow gradient descent to converge more quickly.
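Each feature (and here also the target) is standardized to zero mean and unit variance,
$$x_{j}' = \frac{x_{j} - \mu_{j}}{\sigma_{j}}$$
where $\mu_{j}$ and $\sigma_{j}$ are the mean and standard deviation of feature $j$ over the training set.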
End of explanation
plt.figure(figsize=(8, 6))
learn_rate_vals = np.array([ 1.0e-3, 3.0e-3, 1.0e-2, 3.0e-2, 1.0e-1, 3.0e-1, 1.0, 3.0, 10.0])
for i in range(len(learn_rate_vals)):
learned_params, J_cost = grad_desc( X_data = X_train_norm, y_data = y_train_norm,
N_iterations = 1000, learning_rate = learn_rate_vals[i])
N_iterations = np.array(range(len(J_cost)))
plt.plot(N_iterations,J_cost, label = r"$\alpha = $" + str(learn_rate_vals[i]))
plt.xlim((0.0,100.0))
plt.ylim((0.0,0.5))
plt.legend(loc="upper right")
Explanation: Multivariable Gradient Descent
Plotting the cost function for several values of the learning rate.
End of explanation
learned_params_feat_scaled, J_cost = grad_desc( X_data = X_train_norm, y_data = y_train_norm,
N_iterations = 1000, learning_rate = 1.0)
print( learned_params_feat_scaled )
plt.figure(figsize=(8, 6))
N_iterations = np.array(range(len(J_cost)))
plt.plot(N_iterations,J_cost)
plt.xlim((0.0,50.0))
Explanation: Choosing the learning rate with the fastest convergence
End of explanation
normal_eq_params = normal_eqn( X_train , y_train )
print( normal_eq_params["bias"] )
print( normal_eq_params["weights"] )
Explanation: Learning Parameters with the Normal Equation
We do not need to apply feature scaling to learn the weights with the normal equation.
End of explanation
mean_x1 = X_mean_std_train_feats['0'][0]
mean_x2 = X_mean_std_train_feats['1'][0]
mean_y = y_mean_std_train_feats['0'][0]
sigma_x1 = X_mean_std_train_feats['0'][1]
sigma_x2 = X_mean_std_train_feats['1'][1]
sigma_y = y_mean_std_train_feats['0'][1]
b_feat_scaled = learned_params_feat_scaled["bias"]
w1_feat_scaled = learned_params_feat_scaled["weights"][0]
w2_feat_scaled = learned_params_feat_scaled["weights"][1]
Explanation: Getting the mean and standard deviation values from the dataset to rescale the weights learned with feature scaling to compare to the weights learned by the normal equation (with no feature scaling).
End of explanation
print(normal_eq_params["weights"][0])
print( float( (sigma_y/sigma_x1)*w1_feat_scaled ) )
Explanation: In the equation below $w_{1}$ is the weight learned for the first feature with the normal equation without feature scaling and $w_{1} ^{\prime}$ is the weight learned for the first feature with gradient descent with feature scaling.
$$ w_{1} = \frac{\sigma_{y}}{\sigma_{x1}} w_{1} ^{\prime} $$
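This factor follows from undoing the standardization. The scaled model was fit as
$$\frac{y-\mu_{y}}{\sigma_{y}} = w_{1}'\,\frac{x_{1}-\mu_{x1}}{\sigma_{x1}} + w_{2}'\,\frac{x_{2}-\mu_{x2}}{\sigma_{x2}} + b'$$
so multiplying out by $\sigma_{y}$ gives $y = \frac{\sigma_{y}}{\sigma_{x1}} w_{1}'\, x_{1} + \frac{\sigma_{y}}{\sigma_{x2}} w_{2}'\, x_{2} + \text{const}$.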
End of explanation
print(normal_eq_params["weights"][1])
print( float( (sigma_y/sigma_x2)*w2_feat_scaled ) )
Explanation: In the equation below $w_{2}$ is the weight learned for the second feature with the normal equation without feature scaling and $w_{2} ^{\prime}$ is the weight learned for the second feature with gradient descent with feature scaling.
$$ w_{2} = \frac{\sigma_{y}}{\sigma_{x2}} w_{2} ^{\prime} $$
End of explanation |
4,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decoding (MVPA)
Step1: Transformation classes
Scaler
The
Step2: PSDEstimator
The
Step3: Source power comodulation (SPoC)
Source Power Comodulation (
Step4: Decoding over time
This strategy consists in fitting a multivariate predictive model on each
time instant and evaluating its performance at the same instant on new
epochs. The
Step5: You can retrieve the spatial filters and spatial patterns if you explicitly
use a LinearModel
Step6: Temporal generalization
Temporal generalization is an extension of the decoding over time approach.
It consists in evaluating whether the model estimated at a particular
time instant accurately predicts any other time instant. It is analogous to
transferring a trained model to a distinct learning problem, where the
problems correspond to decoding the patterns of brain activity recorded at
distinct time instants.
The object to use for Temporal generalization is
Step7: Plot the full (generalization) matrix
Step8: Projecting sensor-space patterns to source space
If you use a linear classifier (or regressor) for your data, you can also
project these to source space. For example, using our evoked_time_gen
from before
Step9: And this can be visualized using | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,
cross_val_multiscore, LinearModel, get_coef,
Vectorizer, CSP)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value more like 40 Hz would be used,
# but here low-pass at 20 so we can more heavily decimate, and allow
# the example to run faster. The 2 Hz high-pass helps improve CSP.
raw.filter(2, 20)
events = mne.find_events(raw, 'STI 014')
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=10)
epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG
del raw
X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times
y = epochs.events[:, 2] # target: Audio left or right
Explanation: Decoding (MVPA)
:depth: 3
.. include:: ../../links.inc
Design philosophy
Decoding (a.k.a. MVPA) in MNE largely follows the machine
learning API of the scikit-learn package.
Each estimator implements fit, transform, fit_transform, and
(optionally) inverse_transform methods. For more details on this design,
visit scikit-learn_. For additional theoretical insights into the decoding
framework in MNE, see :footcite:KingEtAl2018.
For ease of comprehension, we will denote instantiations of the class using
the same name as the class but in small caps instead of camel cases.
Let's start by loading data for a simple two-class problem:
End of explanation
# Uses all MEG sensors and time points as separate classification
# features, so the resulting filters used are spatio-temporal
clf = make_pipeline(Scaler(epochs.info),
Vectorizer(),
LogisticRegression(solver='lbfgs'))
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
score = np.mean(scores, axis=0)
print('Spatio-temporal: %0.1f%%' % (100 * score,))
Explanation: Transformation classes
Scaler
The :class:mne.decoding.Scaler will standardize the data based on channel
scales. In the simplest modes scalings=None or scalings=dict(...),
each data channel type (e.g., mag, grad, eeg) is treated separately and
scaled by a constant. This is the approach used by e.g.,
:func:mne.compute_covariance to standardize channel scales.
If scalings='mean' or scalings='median', each channel is scaled using
empirical measures. Each channel is scaled independently by the mean and
standand deviation, or median and interquartile range, respectively, across
all epochs and time points during :class:~mne.decoding.Scaler.fit
(during training). The :meth:~mne.decoding.Scaler.transform method is
called to transform data (training or test set) by scaling all time points
and epochs on a channel-by-channel basis. To perform both the fit and
transform operations in a single call, the
:meth:~mne.decoding.Scaler.fit_transform method may be used. To invert the
transform, :meth:~mne.decoding.Scaler.inverse_transform can be used. For
scalings='median', scikit-learn_ version 0.17+ is required.
<div class="alert alert-info"><h4>Note</h4><p>Using this class is different from directly applying
:class:`sklearn.preprocessing.StandardScaler` or
:class:`sklearn.preprocessing.RobustScaler` offered by
scikit-learn_. These scale each *classification feature*, e.g.
each time point for each channel, with mean and standard
deviation computed across epochs, whereas
:class:`mne.decoding.Scaler` scales each *channel* using mean and
standard deviation computed across all of its time points
and epochs.</p></div>
Vectorizer
Scikit-learn API provides functionality to chain transformers and estimators
by using :class:sklearn.pipeline.Pipeline. We can construct decoding
pipelines and perform cross-validation and grid-search. However scikit-learn
transformers and estimators generally expect 2D data
(n_samples * n_features), whereas MNE transformers typically output data
with a higher dimensionality
(e.g. n_samples * n_channels * n_frequencies * n_times). A Vectorizer
therefore needs to be applied between the MNE and the scikit-learn steps
like:
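(The pipeline shown in the accompanying code cell does exactly this, with Vectorizer() sitting between Scaler and the classifier.) Under the hood, for epochs data the vectorization is essentially a reshape of the trailing dimensions; a rough sketch with dummy data, not MNE's actual implementation:
```python
import numpy as np

n_epochs, n_channels, n_times = 10, 5, 20
X3d = np.random.randn(n_epochs, n_channels, n_times)
X2d = X3d.reshape(n_epochs, -1)  # (n_samples, n_features), as scikit-learn expects
print(X2d.shape)                 # (10, 100)
```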
End of explanation
csp = CSP(n_components=3, norm_trace=False)
clf_csp = make_pipeline(csp, LinearModel(LogisticRegression(solver='lbfgs')))
scores = cross_val_multiscore(clf_csp, X, y, cv=5, n_jobs=1)
print('CSP: %0.1f%%' % (100 * scores.mean(),))
Explanation: PSDEstimator
The :class:mne.decoding.PSDEstimator
computes the power spectral density (PSD) using the multitaper
method. It takes a 3D array as input, converts it into 2D and computes the
PSD.
FilterEstimator
The :class:mne.decoding.FilterEstimator filters the 3D epochs data.
Spatial filters
Just like temporal filters, spatial filters provide weights to modify the
data along the sensor dimension. They are popular in the BCI community
because of their simplicity and ability to distinguish spatially-separated
neural activity.
Common spatial pattern
:class:mne.decoding.CSP is a technique to analyze multichannel data based
on recordings from two classes :footcite:Koles1991 (see also
https://en.wikipedia.org/wiki/Common_spatial_pattern).
Let $X \in R^{C\times T}$ be a segment of data with
$C$ channels and $T$ time points. The data at a single time point
is denoted by $x(t)$ such that $X=[x(t), x(t+1), ..., x(t+T-1)]$.
Common spatial pattern (CSP) finds a decomposition that projects the signal
in the original sensor space to CSP space using the following transformation:
\begin{align}x_{CSP}(t) = W^{T}x(t)
:label: csp\end{align}
where each column of $W \in R^{C\times C}$ is a spatial filter and each
row of $x_{CSP}$ is a CSP component. The matrix $W$ is also
called the de-mixing matrix in other contexts. Let
$\Sigma^{+} \in R^{C\times C}$ and $\Sigma^{-} \in R^{C\times C}$
be the estimates of the covariance matrices of the two conditions.
CSP analysis is given by the simultaneous diagonalization of the two
covariance matrices
\begin{align}W^{T}\Sigma^{+}W = \lambda^{+}
:label: diagonalize_p\end{align}
\begin{align}W^{T}\Sigma^{-}W = \lambda^{-}
:label: diagonalize_n\end{align}
where $\lambda^{C}$ is a diagonal matrix whose entries are the
eigenvalues of the following generalized eigenvalue problem
\begin{align}\Sigma^{+}w = \lambda \Sigma^{-}w
:label: eigen_problem\end{align}
Large entries in the diagonal matrix correspond to a spatial filter which
gives high variance in one class but low variance in the other. Thus, the
filter facilitates discrimination between the two classes.
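A minimal NumPy/SciPy illustration of this decomposition, using synthetic covariance matrices as stand-ins for $\Sigma^{+}$ and $\Sigma^{-}$ (this is not MNE's CSP implementation):
```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
xa = rng.standard_normal((200, 6))                             # class "+" samples
xb = rng.standard_normal((200, 6)) * np.linspace(0.5, 2.0, 6)  # class "-" samples
sigma_p = np.cov(xa, rowvar=False)
sigma_n = np.cov(xb, rowvar=False)

# Generalized eigenvalue problem  Sigma+ w = lambda Sigma- w  from the equation above.
evals, evecs = eigh(sigma_p, sigma_n)
order = np.argsort(evals)[::-1]   # extreme eigenvalues correspond to the most discriminative filters
W = evecs[:, order]               # columns are spatial filters
patterns = np.linalg.inv(W).T     # columns of (W^-1)^T are the spatial patterns
```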
.. topic:: Examples
* `sphx_glr_auto_examples_decoding_plot_decoding_csp_eeg.py`
* `sphx_glr_auto_examples_decoding_plot_decoding_csp_timefreq.py`
<div class="alert alert-info"><h4>Note</h4><p>The winning entry of the Grasp-and-lift EEG competition in Kaggle used
the :class:`~mne.decoding.CSP` implementation in MNE and was featured as
a `script of the week <sotw_>`_.</p></div>
We can use CSP with these data with:
End of explanation
# Fit CSP on full data and plot
csp.fit(X, y)
csp.plot_patterns(epochs.info)
csp.plot_filters(epochs.info, scalings=1e-9)
Explanation: Source power comodulation (SPoC)
Source Power Comodulation (:class:mne.decoding.SPoC)
:footcite:DahneEtAl2014 identifies the composition of
orthogonal spatial filters that maximally correlate with a continuous target.
SPoC can be seen as an extension of the CSP where the target is driven by a
continuous variable rather than a discrete variable. Typical applications
include extraction of motor patterns using EMG power or audio patterns using
sound envelope.
.. topic:: Examples
* `sphx_glr_auto_examples_decoding_plot_decoding_spoc_CMC.py`
xDAWN
:class:mne.preprocessing.Xdawn is a spatial filtering method designed to
improve the signal to signal + noise ratio (SSNR) of the ERP responses
:footcite:RivetEtAl2009. Xdawn was originally
designed for P300 evoked potential by enhancing the target response with
respect to the non-target response. The implementation in MNE-Python is a
generalization to any type of ERP.
.. topic:: Examples
* `sphx_glr_auto_examples_preprocessing_plot_xdawn_denoising.py`
* `sphx_glr_auto_examples_decoding_plot_decoding_xdawn_eeg.py`
Effect-matched spatial filtering
The result of :class:mne.decoding.EMS is a spatial filter at each time
point and a corresponding time course :footcite:SchurgerEtAl2013.
Intuitively, the result gives the similarity between the filter at
each time point and the data vector (sensors) at that time point.
.. topic:: Examples
* `sphx_glr_auto_examples_decoding_plot_ems_filtering.py`
Patterns vs. filters
When interpreting the components of the CSP (or spatial filters in general),
it is often more intuitive to think about how $x(t)$ is composed of
the different CSP components $x_{CSP}(t)$. In other words, we can
rewrite Equation :eq:csp as follows:
\begin{align}x(t) = (W^{-1})^{T}x_{CSP}(t)
:label: patterns\end{align}
The columns of the matrix $(W^{-1})^T$ are called spatial patterns.
This is also called the mixing matrix. The example
sphx_glr_auto_examples_decoding_plot_linear_model_patterns.py
discusses the difference between patterns and filters.
These can be plotted with:
End of explanation
# We will train the classifier on all left visual vs auditory trials on MEG
clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
Explanation: Decoding over time
This strategy consists in fitting a multivariate predictive model on each
time instant and evaluating its performance at the same instant on new
epochs. The :class:mne.decoding.SlidingEstimator will take as input a
pair of features $X$ and targets $y$, where $X$ has
more than 2 dimensions. For decoding over time the data $X$
is the epochs data of shape n_epochs x n_channels x n_times. As the
last dimension of $X$ is the time, an estimator will be fit
on every time instant.
This approach is analogous to SlidingEstimator-based approaches in fMRI,
where here we are interested in when one can discriminate experimental
conditions and therefore figure out when the effect of interest happens.
When working with linear models as estimators, this approach boils
down to estimating a discriminative spatial filter for each time instant.
Temporal decoding
We'll use a Logistic Regression for a binary classification as machine
learning model.
End of explanation
clf = make_pipeline(StandardScaler(),
LinearModel(LogisticRegression(solver='lbfgs')))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked_time_gen = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
evoked_time_gen.plot_joint(times=np.arange(0., .500, .100), title='patterns',
**joint_kwargs)
Explanation: You can retrieve the spatial filters and spatial patterns if you explicitly
use a LinearModel
End of explanation
# define the Temporal generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc',
verbose=True)
scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
Explanation: Temporal generalization
Temporal generalization is an extension of the decoding over time approach.
It consists in evaluating whether the model estimated at a particular
time instant accurately predicts any other time instant. It is analogous to
transferring a trained model to a distinct learning problem, where the
problems correspond to decoding the patterns of brain activity recorded at
distinct time instants.
The object to use for Temporal generalization is
:class:mne.decoding.GeneralizingEstimator. It expects as input $X$
and $y$ (similarly to :class:~mne.decoding.SlidingEstimator) but
generates predictions from each model for all time instants. The class
:class:~mne.decoding.GeneralizingEstimator is generic and will treat the
last dimension as the one to be used for generalization testing. For
convenience, here, we refer to it as different tasks. If $X$
corresponds to epochs data then the last dimension is time.
This runs the analysis used in :footcite:KingEtAl2014 and further detailed
in :footcite:KingDehaene2014:
End of explanation
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
Explanation: Plot the full (generalization) matrix:
End of explanation
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs
fwd = mne.read_forward_solution(
data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
inv = mne.minimum_norm.make_inverse_operator(
evoked_time_gen.info, fwd, cov, loose=0.)
stc = mne.minimum_norm.apply_inverse(evoked_time_gen, inv, 1. / 9., 'dSPM')
del fwd, inv
Explanation: Projecting sensor-space patterns to source space
If you use a linear classifier (or regressor) for your data, you can also
project these to source space. For example, using our evoked_time_gen
from before:
End of explanation
brain = stc.plot(hemi='split', views=('lat', 'med'), initial_time=0.1,
subjects_dir=subjects_dir)
Explanation: And this can be visualized using :meth:stc.plot <mne.SourceEstimate.plot>:
End of explanation |
4,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing π by simulation
<img src="../img/cannon-circle.png" width="500">
Consider a cannon firing random shots on a square field enclosing a circle. If the radius of the circle is 1, then its area is π, and the area of the field is 4.
After n shots, the number c of shots inside the circle will be proportional to π
Step1: Now we can select coordinate pairs inside the circle
Step2: We now plot the shots inside the circle in blue, outside in red
Step3: We can now see the π approximation
Step4: The next function abstracts the process so far. Given n, pi(n) will compute an approximation of π by generating random coordinates and counting those that fall inside the circle
Step5: Using this loop, I tried the pi() function with n at different orders of magnitude
Step6: Now we can graph how the results of pi() approach the actual π (the red line) | Python Code:
import random
def rnd(n):
return [random.uniform(-1, 1) for _ in range(n)]
SHOTS = 5000
x = rnd(SHOTS)
y = rnd(SHOTS)
Explanation: Computing π by simulation
<img src="../img/cannon-circle.png" width="500">
Consider a cannon firing random shots on a square field enclosing a circle. If the radius of the circle is 1, then its area is π, and the area of the field is 4.
After n shots, the number c of shots inside the circle will be proportional to π:
$$
\frac{π}{4}=\frac{c}{n}
$$
Then the value of π can be computed like this:
$$
π = \frac{4 \cdot c}{n}
$$
To get started, let's generate coordinates for the shots:
End of explanation
def pairs(seq1, seq2):
yes1, yes2, no1, no2 = [], [], [], []
for a, b in zip(seq1, seq2):
if (a*a + b*b)**.5 <= 1:
yes1.append(a)
yes2.append(b)
else:
no1.append(a)
no2.append(b)
return yes1, yes2, no1, no2
x_sim, y_sim, x_nao, y_nao = pairs(x, y)
Explanation: Now we can select coordinate pairs inside the circle:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
plt.axes().set_aspect('equal')
plt.grid()
plt.scatter(x_sim, y_sim, 3, color='b')
plt.scatter(x_nao, y_nao, 3, color='r')
Explanation: We now plot the shots inside the circle in blue, outside in red:
End of explanation
4 * len(x_sim) / SHOTS
Explanation: We can now see the π approximation:
$$
π = \frac{4 \cdot c}{n}
$$
End of explanation
def pi(n):
uni = random.uniform
c = 0
i = 0
while i < n:
if abs(complex(uni(-1, 1), uni(-1, 1))) <= 1:
c += 1
i += 1
return c * 4.0 / n
Explanation: The next function abstracts the process so far. Given n, pi(n) will compute an approximation of π by generating random coordinates and counting those that fall inside the circle:
End of explanation
res = [
(1, 4.0),
(10, 2.8),
(100, 3.24),
(1000, 3.096),
(10000, 3.1248),
(100000, 3.14144),
(1000000, 3.142716),
(10000000, 3.1410784),
(100000000, 3.14149756),
(1000000000, 3.141589804)
]
Explanation: Using this loop, I tried the pi() function with n at different orders of magnitude:
```python
res = []
for i in range(10):
n = 10**i
res.append((n, pi(n)))
res
```
My notebook took more than 25 minutes to compute these results:
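The slow convergence is expected: the hit count $c$ is binomial with success probability $\pi/4$, so the standard error of the estimate shrinks only with the square root of the number of shots,
$$\operatorname{SE}\left(\frac{4c}{n}\right) = \sqrt{\frac{\pi\,(4-\pi)}{n}} \approx \frac{1.64}{\sqrt{n}}$$
so even $n = 10^9$ shots pin π down to only about four decimal places.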
End of explanation
import math
plt.figure()
x, y = zip(*res)
x = [round(math.log(n, 10)) for n in x]
plt.plot(x, y)
plt.axhline(math.pi, color='r')
plt.grid()
plt.xticks(x, ['10**%1.0f' % a for a in x])
x
Explanation: Now we can graph how the results of pi() approach the actual π (the red line):
End of explanation |
4,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris Data Set
This problem sheet relates to the Iris data set and uses jupyter, numpy and pyplot. Problems are labelled 1 to 10.
1. Get and load the Iris data.
Step1: 2. Write a note about the data set.
The Iris data set was created by Ronald Fisher in 1936 and contains 50 samples from each of the three species of Iris - Iris setosa, Iris virginica and Iris versicolor. The structure of the set is as follows
Step2: 4. Create a more complex plot.
Recreate the above plot, marking the data points in different colours depending on species. Add a legend to the plot to show what species relates to what colour.
Step3: 5. Use Seaborn.
Use Seaborn to create a scatterplot matrix of all five variables (sepal length, sepal width, petal length, petal width, species classification).
Note
Step4: 6. Fit a line.
Fit a straight line to the petal length and width variables for the whole data set. Plot the data points in a scatter plot, including the best fit line.
Step5: 7. Calculate R-squared.
The R-squared value estimates how much of the changes in the $y$ value (petal width) are due to the changes in the $x$ value (petal length) compared to all of the other factors affecting the $y$ value.
Step6: 8. Fit another line.
Use numpy to fit a straight line to the petal length and width variables for the species Iris-setosa. Plot the data points in a scatter plot with the best fit line shown.
Step7: 9. Calculate R-squared for the Setosa line.
Calculate the r-squared of the best fitting line for the Setosa data, plotted above.
Step8: 10. Use Gradient Descent. | Python Code:
import numpy as np
# Adapted from https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.genfromtxt.html
filename = 'data.csv'
sLen, sWid, pLen, pWid = np.genfromtxt('data.csv', delimiter=',', usecols=(0,1,2,3), unpack=True, dtype=float)
spec = np.genfromtxt('data.csv', delimiter=',', usecols=(4), unpack=True, dtype=str)
for i in range(10):
print('{0:.1f} {1:.1f} {2:.1f} {3:.1f} {4:s}'.format(sLen[i], sWid[i], pLen[i], pWid[i], spec[i]))
Explanation: Iris Data Set
This problem sheet relates to the Iris data set and uses jupyter, numpy and pyplot. Problems are labelled 1 to 10.
1. Get and load the Iris data.
End of explanation
import matplotlib.pyplot as pl
pl.rcParams['figure.figsize'] = (14, 6) # Adapted from gradient descent notebook: https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/gradient-descent.ipynb
pl.scatter(sLen, sWid, marker='.')
pl.title('Scatter Diagram of Sepal Width vs Length', fontsize=14)
pl.xlabel('Sepal Length')
pl.ylabel('Sepal Width')
pl.show()
Explanation: 2. Write a note about the data set.
The Iris data set was created by Ronald Fisher in 1936 and contains 50 samples from each of the three species of Iris - Iris setosa, Iris virginica and Iris versicolor. The structure of the set is as follows: sepal length, sepal width, petal length, petal width, species classification. A raw copy of the data set can be found here.
3. Create a simple plot.
Use pyplot to create a scatter plot of sepal length on the x-axis versus sepal width on the y-axis.
End of explanation
import matplotlib.patches as mpatches
pl.rcParams['figure.figsize'] = (14,6)
# Colour related to type adapted from https://stackoverflow.com/questions/27318906/python-scatter-plot-with-colors-corresponding-to-strings
colours = {'Iris-setosa': 'red', 'Iris-versicolor': 'green', 'Iris-virginica': 'blue'}
pl.scatter(sLen, sWid, c=[colours[i] for i in spec], label=[colours[i] for i in colours], marker=".")
pl.title('Scatter Diagram of Sepal Width vs Length', fontsize=14)
pl.xlabel('Sepal Length')
pl.ylabel('Sepal Width')
# Custom handles adapted from https://stackoverflow.com/a/44164349/7232648
a = 'red'
b = 'green'
c = 'blue'
handles = [mpatches.Patch(color=colour, label=label) for label, colour in [('Iris-setosa', a), ('Iris-versicolor', b), ('Iris-virginica', c)]]
pl.legend(handles=handles, loc=2, frameon=True)
#pl.grid()
pl.show()
Explanation: 4. Create a more complex plot.
Recreate the above plot, marking the data points in different colours depending on species. Add a legend to the plot to show what species relates to what colour.
End of explanation
# Seaborn scatterplot adapted from http://seaborn.pydata.org/examples/scatterplot_matrix.html
import seaborn as sb
sb.set(style="ticks")
# Load the data - Iris included in Seaborn's github repo for csv files here: https://github.com/mwaskom/seaborn-data
data = sb.load_dataset("iris")
# Plot data, base the colour of points on species
sb.pairplot(data, hue="species")
pl.show()
Explanation: 5. Use Seaborn.
Use Seaborn to create a scatterplot matrix of all five variables (sepal length, sepal width, petal length, petal width, species classification).
Note: needs work, dataframe working but sb plot isn't. Will do other questions and come back to this if there's time.
End of explanation
# Conversions adapted from https://stackoverflow.com/a/26440523/7232648
# Adapted from https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/simple-linear-regression.ipynb
w = pLen
d = pWid
w_avg = np.mean(w)
d_avg = np.mean(d)
w_zero = w - w_avg
d_zero = d - d_avg
m = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)
c = d_avg - m * w_avg
# Graph labels etc
pl.rcParams['figure.figsize'] = (14,6)
pl.title('Petal Measurements', fontsize=14)
pl.xlabel('Petal Length')
pl.ylabel('Petal Width')
pl.scatter(w, d, marker='.', label='Data Set')
pl.plot(w, m * w + c, 'r', label='Best Fit Line')
pl.legend(loc=2, frameon=True)
pl.show()
Explanation: 6. Fit a line.
Fit a straight line to the petal length and width variables for the whole data set. Plot the data points in a scatter plot, including the best fit line.
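The code uses the usual closed-form least-squares estimates for the slope and intercept, with $x$ the petal length and $y$ the petal width:
$$m = \frac{\sum_{i}\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)}{\sum_{i}\left(x_{i}-\bar{x}\right)^{2}}, \qquad c = \bar{y} - m\,\bar{x}$$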
End of explanation
# Adapted from https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/simple-linear-regression.ipynb
rsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2))
print("R-squared: {0:.6f}".format(rsq))
Explanation: 7. Calculate R-squared.
The R-squared value estimates how much of the changes in the $y$ value (petal width) are due to the changes in the $x$ value (petal length) compared to all of the other factors affecting the $y$ value.
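Concretely, it is computed from the residual and total sums of squares,
$$R^{2} = 1 - \frac{\sum_{i}\left(y_{i}-\hat{y}_{i}\right)^{2}}{\sum_{i}\left(y_{i}-\bar{y}\right)^{2}}$$
where $\hat{y}_i = m x_i + c$ are the fitted values and $\bar{y}$ is the mean of the observed $y$.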
End of explanation
# Adding arrays as columns adapted from https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.column_stack.html
data = np.column_stack((sLen, sWid, pLen, pWid, spec))
#for i in range(5):
#print(data[i])
# Setosa data -> 0 - 49 in data set. (Definitely better ways of doing this but works for now, will change if there's time)
spLen, spWid= [], []
for index, row in enumerate(data):
# Petal info contained in cols 2 & 3
# For each row, append column 2 to spLen array and column 3 to spWid array
spLen.append(float(row[2]))
spWid.append(float(row[3]))
if index == 49:
break
# Calculate best values for m and c
m, c = np.polyfit(spLen, spWid, 1)
y = m * np.array(spLen) + c
# Graph labels etc
pl.rcParams['figure.figsize'] = (16,8)
pl.title('Iris Setosa Petal Measurements', fontsize=14)
pl.xlabel('Petal Length')
pl.ylabel('Petal Width')
pl.scatter(spLen, spWid, label = 'Iris Setosa') # Plot the data points
pl.plot(spLen, y, 'r', label = 'Best Fit Line') # Plot the line
pl.legend(loc=2, frameon=True)
pl.show()
orM = m
orC = c
Explanation: 8. Fit another line.
Use numpy to fit a straight line to the petal length and width variables for the species Iris-setosa. Plot the data points in a scatter plot with the best fit line shown.
End of explanation
w_set = np.array(spLen)
d_set = np.array(spWid)
rsq = 1.0 - (np.sum((d_set - m * w_set - c)**2)/np.sum((d_set - np.mean(d_set))**2))
print("R-squared: {0:.6f}".format(rsq))
Explanation: 9. Calculate R-squared for the Setosa line.
Calculate the r-squared of the best fitting line for the Setosa data, plotted above.
End of explanation
w = np.array(spLen)
d = np.array(spWid)
print("Original \t\tm: %20.16f c: %20.16f" % (orM, orC))
# Adapted from Gradient Descent worksheet - https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/gradient-descent.ipynb
# Partial derivatives with respect to m and c
def grad_m(x, y, m, c):
return -2.0 * np.sum(x * (y - m * x - c))
def grad_c(x, y, m, c):
return -2.0 * np.sum(y - m * x - c)
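# Gradient-descent sketch (for the squared-error cost E = sum((y - m*x - c)**2)):
# each iteration takes a small step against the gradient,
#   m <- m - eta * grad_m(x, y, m, c)
#   c <- c - eta * grad_c(x, y, m, c)
# which is what the loop below does, with eta acting as the learning rate.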
# Set up variables
eta = 0.0001 # Learning rate (step size) for the gradient descent updates
gdm, gdc = 1.0, 1.0 # Initial guesses for GD m and c
change = True
while change:
mnew = gdm - eta * grad_m(w, d, gdm, gdc)
cnew = gdc - eta * grad_c(w, d, gdm, gdc)
if gdm == mnew and gdc == cnew:
# Calculations no longer changing, stop the loop
change = False
else:
gdm, gdc = mnew, cnew
# - End adapted from Gradient Descent worksheet -
print("Gradient desc \t\tm: %20.16f c: %20.16f" % (gdm, gdc))
print()
# Graph labels etc
pl.rcParams['figure.figsize'] = (16,8)
pl.title('Iris Setosa Best Fit Line using Gradient Descent', fontsize=14)
pl.xlabel('Petal Length')
pl.ylabel('Petal Width')
y = gdm * np.array(spLen) + gdc  # y = m*x + c
pl.scatter(spLen, spWid, label = 'Iris Setosa')
pl.plot(spLen, y, 'g', label='Best Fit Line using Gradient Descent')
pl.legend()
pl.show()
Explanation: 10. Use Gradient Descent.
End of explanation |
4,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pypdb demos
This is a set of basic examples of the usage and outputs of the various individual functions included in pypdb. There are generally three types of functions.
Preamble
Step1: Search functions that return lists of PDB IDs
Get a list of PDBs for a specific search term
Step2: Search by PubMed ID Number
Step3: Search by source organism using NCBI TaxId
Step4: Search by a specific experimental method
Step5: Search by protein structure similarity
Step6: Search by Author
Step7: Search by organism
Step8: Information Search functions
While the basic functions described in the previous section are useful for looking up and manipulating individual unique entries, these functions are intended to be more user-facing
Step9: Functions that return information about single PDB IDs
Get the full PDB file
Step10: Get a general description of the entry's metadata
Step11: Run a Sequence search
Formerly using BLAST, this method now uses MMseqs2
Step12: Search by PFAM number
Step13: New API for advanced search
The old API will gradually migrate to use these functions
Step14: Search for all entries that mention the word 'ribosome'
Step15: Search for polymers from 'Mus musculus'
Step16: Search for non-polymers from 'Mus musculus' or 'Homo sapiens'
Step17: Search for polymer instances whose titles contain "actin" or "binding" or "protein"
Step18: Search for assemblies that contain the words "actin binding protein"
(must be in that order).
For example, "actin-binding protein" and "actin binding protein" will match,
but "protein binding actin" will not.
Step19: Search for entries released in 2019 or later
Step20: Search for entries released only in 2019
Step21: Search by cell length
Step22: Search for structures under 4 angstroms of resolution
Step23: Search for structures with a given attribute.
(Admittedly every structure has a release date, but the same logic would
apply for a more sparse RCSB attribute).
Step24: Search for 'Mus musculus' or 'Homo sapiens' structures after 2019 using graph search | Python Code:
%pylab inline
from IPython.display import HTML
# Import from local directory
# import sys
# sys.path.insert(0, '../pypdb')
# from pypdb import *
# Import from installed package
from pypdb import *
%load_ext autoreload
%autoreload 2
Explanation: pypdb demos
This is a set of basic examples of the usage and outputs of the various individual functions included in pypdb. There are generally three types of functions.
Preamble
End of explanation
found_pdbs = Query("ribosome").search()
print(found_pdbs[:10])
Explanation: Search functions that return lists of PDB IDs
Get a list of PDBs for a specific search term
End of explanation
found_pdbs = Query(27499440, "PubmedIdQuery").search()
print(found_pdbs[:10])
Explanation: Search by PubMed ID Number
End of explanation
found_pdbs = Query('6239', 'TreeEntityQuery').search() #TaxID for C elegans
print(found_pdbs[:5])
Explanation: Search by source organism using NCBI TaxId
End of explanation
found_pdbs = Query('SOLID-STATE NMR', query_type='ExpTypeQuery').search()
print(found_pdbs[:10])
Explanation: Search by a specific experimental method
End of explanation
found_pdbs = Query('2E8D', query_type="structure").search()
print(found_pdbs[:10])
Explanation: Search by protein structure similarity
End of explanation
found_pdbs = Query('Perutz, M.F.', query_type='AdvancedAuthorQuery').search()
print(found_pdbs)
Explanation: Search by Author
End of explanation
q = Query("Dictyostelium", query_type="OrganismQuery")
print(q.search()[:10])
Explanation: Search by organism
End of explanation
matching_papers = find_papers('crispr', max_results=10)
print(list(matching_papers)[:10])
Explanation: Information Search functions
While the basic functions described in the previous section are useful for looking up and manipulating individual unique entries, these functions are intended to be more user-facing: they take search keywords and return lists of authors or dates
Find papers for a given keyword
End of explanation
pdb_file = get_pdb_file('4lza', filetype='cif', compression=False)
print(pdb_file[:400])
Explanation: Functions that return information about single PDB IDs
Get the full PDB file
End of explanation
all_info = get_info('4LZA')
print(list(all_info.keys()))
Explanation: Get a general description of the entry's metadata
End of explanation
q = Query("VLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHFDLSHGSAQVKGHGKKVADALTAVAHVDDMPNAL",
query_type="sequence",
return_type="polymer_entity")
print(q.search())
Explanation: Run a Sequence search
Formerly using BLAST, this method now uses MMseqs2
End of explanation
pfam_info = Query("PF00008", query_type="pfam").search()
print(pfam_info[:5])
Explanation: Search by PFAM number
End of explanation
from pypdb.clients.search.search_client import perform_search
from pypdb.clients.search.search_client import ReturnType
from pypdb.clients.search.operators import text_operators
Explanation: New API for advanced search
The old API will gradually migrate to use these functions
End of explanation
search_operator = text_operators.DefaultOperator(value="ribosome")
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:10])
Explanation: Search for all entries that mention the word 'ribosome'
End of explanation
search_operator = text_operators.ExactMatchOperator(value="Mus musculus",
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
return_type = ReturnType.POLYMER_ENTITY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for polymers from 'Mus musculus'
End of explanation
search_operator = text_operators.InOperator(values=["Mus musculus", "Homo sapiens"],
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
return_type = ReturnType.NON_POLYMER_ENTITY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for non-polymers from 'Mus musculus' or 'Homo sapiens'
End of explanation
search_operator = text_operators.ContainsWordsOperator(value="actin-binding protein",
attribute="struct.title")
return_type = ReturnType.POLYMER_INSTANCE
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for polymer instances whose titles contain "actin" or "binding" or "protein"
End of explanation
search_operator = text_operators.ContainsPhraseOperator(value="actin-binding protein",
attribute="struct.title")
return_type = ReturnType.ASSEMBLY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for assemblies that contain the words "actin binding protein"
(must be in that order).
For example, "actin-binding protein" and "actin binding protein" will match,
but "protein binding actin" will not.
End of explanation
search_operator = text_operators.ComparisonOperator(
value="2019-01-01T00:00:00Z",
attribute="rcsb_accession_info.initial_release_date",
comparison_type=text_operators.ComparisonType.GREATER)
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for entries released in 2019 or later
End of explanation
search_operator = text_operators.RangeOperator(
from_value="2019-01-01T00:00:00Z",
to_value="2020-01-01T00:00:00Z",
include_lower=True,
include_upper=False,
attribute="rcsb_accession_info.initial_release_date")
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for entries released only in 2019
End of explanation
from pypdb.clients.search.search_client import perform_search_with_graph, SearchService, ReturnType
from pypdb.clients.search.operators import text_operators
cell_a_operator = text_operators.RangeOperator(
attribute='cell.length_a',
from_value=80,
to_value=84,
include_upper=True
)
results = perform_search_with_graph(
query_object=cell_a_operator,
return_type=ReturnType.ENTRY
)
print(results[:5])
Explanation: Search by cell length
End of explanation
search_operator = text_operators.ComparisonOperator(
value=4,
attribute="rcsb_entry_info.resolution_combined",
comparison_type=text_operators.ComparisonType.LESS)
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for structures under 4 angstroms of resolution
End of explanation
search_operator = text_operators.ExistsOperator(
attribute="rcsb_accession_info.initial_release_date")
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for structures with a given attribute.
(Admittedly every structure has a release date, but the same logic would
apply for a more sparse RCSB attribute).
End of explanation
from pypdb.clients.search.search_client import perform_search_with_graph
from pypdb.clients.search.search_client import ReturnType
from pypdb.clients.search.search_client import QueryGroup, LogicalOperator
from pypdb.clients.search.operators import text_operators
# SearchOperator associated with structures with under 4 Angstroms of resolution
under_4A_resolution_operator = text_operators.ComparisonOperator(
value=4,
attribute="rcsb_entry_info.resolution_combined",
    comparison_type=text_operators.ComparisonType.LESS)
# SearchOperator associated with entities containing 'Mus musculus' lineage
is_mus_operator = text_operators.ExactMatchOperator(
value="Mus musculus",
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
# SearchOperator associated with entities containing 'Homo sapiens' lineage
is_human_operator = text_operators.ExactMatchOperator(
value="Homo sapiens",
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
# QueryGroup associated with being either human or `Mus musculus`
is_human_or_mus_group = QueryGroup(
queries = [is_mus_operator, is_human_operator],
logical_operator = LogicalOperator.OR
)
# QueryGroup associated with being ((Human OR Mus) AND (Under 4 Angstroms))
is_under_4A_and_human_or_mus_group = QueryGroup(
queries = [is_human_or_mus_group, under_4A_resolution_operator],
logical_operator = LogicalOperator.AND
)
return_type = ReturnType.ENTRY
results = perform_search_with_graph(
query_object=is_under_4A_and_human_or_mus_group,
return_type=return_type)
print("\n", results[:10]) # Huzzah
Explanation: Search for 'Mus musculus' or 'Homo sapiens' structures after 2019 using graph search
End of explanation |
4,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphical representation of atomic orbitals
Germain Salvato-Vallverdu [email protected]
Radial parts
Angular parts
Atomic orbitals
A website with many visualisations
Step1: 1. Radial parts
Expressions of the radial parts
Analytical expression of the radial parts of the atomic orbitals of a hydrogen-like ion with atomic number $Z$
Step2: Check of the normalisation condition.
Step3: Graphical representation
Graphical representation of the wavefunction and of the radial probability density.
Step4: 2. Angular parts
Expressions of the angular parts
Spherical harmonics $Y^m_{\ell}(\theta, \varphi)$
\begin{align}
Y_0^0 & = \frac{1}{\sqrt{4\pi}} \
Y_1^0 & = \sqrt{\frac{3}{4\pi}} \cos\theta &
Y_1^{\pm1} & = \mp\sqrt{\frac{3}{2\pi}} \sin\theta \, e^{\pm i\varphi} \
Y_2^0 & = \sqrt{\frac{5}{16\pi}} \left(3\cos^2\theta-1\right) &
Y_2^{\pm 1} & = \mp \sqrt{\frac{15}{4\pi}} \sin\theta\cos\theta \, e^{\pm i\varphi} &
Y_2^{\pm 2} & = \sqrt{\frac{15}{4\pi}} \sin^2\theta \, e^{\pm 2i\varphi} \
Y_3^0 & = \sqrt{\frac{7}{16\pi}} \left(5\cos^3\theta - 3\cos\theta\right) &
Y_3^{\pm 1} & = \mp \sqrt{\frac{21}{64\pi}} \sin\theta\left(5\cos^2\theta - 1\right) \, e^{\pm i\varphi} &
Y_3^{\pm 2} & = \sqrt{\frac{105}{16\pi}} \sin^2\theta\cos\theta \, e^{\pm 2i\varphi} \
& & & & Y_3^{\pm 3} & = \mp \sqrt{\frac{35}{64\pi}} \sin^3\theta \, e^{\pm 3i\varphi}
\end{align}
The functions for m=0 are real
Step8: On peut construire des fonctions réelles en combinant des fonctions de même valeur de m. Par exemple
Step10: Graphical representation
We will plot two real functions for each value of $\ell$.
Step12: 3. Atomic orbitals
The general expression of the atomic orbitals involves a radial part and an angular part and is characterised by the three quantum numbers $(n, \ell, m_{\ell})$
Step13: Spherically symmetric atomic orbitals
Step14: Orbitales atomiques p, d et f | Python Code:
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
%matplotlib inline
Explanation: Graphical representation of atomic orbitals
Germain Salvato-Vallverdu [email protected]
Radial parts
Angular parts
Atomic orbitals
A website with many visualisations: orbitron gallery
End of explanation
def radial1s(r, Z=1, ao=0.529):
rho = Z * r / ao
return 2 * np.sqrt(Z/ao)**3 * np.exp(- rho)
def radial2s(r, Z=1, ao=0.529):
rho = Z * r / ao
return 1 / (2 * np.sqrt(2)) * (Z/ao)**(3/2) * (2 - rho) * np.exp(- rho / 2)
def radial2p(r, Z=1, ao=0.529):
rho = Z * r / ao
return 1 / (2 * np.sqrt(6)) * (Z/ao)**(3/2) * rho * np.exp(- rho / 2)
def radial3s(r, Z=1, ao=0.529):
rho = Z * r / ao
return 2 / (81 * np.sqrt(3)) * (Z/ao)**(3/2) * (27 - 18*rho + 2*rho**2) * np.exp(- rho / 3)
def radial3p(r, Z=1, ao=0.529):
rho = Z * r / ao
return 4 / (81 * np.sqrt(6)) * (Z/ao)**(3/2) * (6 * rho - rho**2) * np.exp(- rho / 3)
def radial3d(r, Z=1, ao=0.529):
rho = Z * r / ao
return 4 / (81 * np.sqrt(30)) * (Z/ao)**(3/2) * rho**2 * np.exp(- rho / 3)
Explanation: 1. Radial parts
Expressions of the radial parts
Analytical expression of the radial parts of the atomic orbitals of a hydrogen-like ion with atomic number $Z$:
\begin{align}
R_{10}(r) & = 2 \left(\frac{Z}{a_o}\right)^{3/2} \exp\left(-\frac{Zr}{a_o}\right) \
R_{20}(r) & = \frac{1}{2\sqrt{2}} \left(\frac{Z}{a_o}\right)^{3/2} \left(2 - \frac{Zr}{a_o}\right) \exp\left(-\frac{Zr}{2a_o}\right) \
R_{21}(r) & = \frac{1}{2\sqrt{6}} \left(\frac{Z}{a_o}\right)^{5/2} r \exp\left(-\frac{Zr}{2a_o}\right) \
R_{30}(r) & = \frac{2}{81\sqrt{3}} \left(\frac{Z}{a_o}\right)^{3/2} \left(27 - \frac{18Zr}{a_o} + \frac{2Z^2r^2}{{a_o}^2}\right) \exp\left(-\frac{Zr}{3a_o}\right) \
R_{31}(r) & = \frac{4}{81\sqrt{6}} \left(\frac{Z}{a_o}\right)^{5/2} \left(6r - \frac{Zr^2}{a_o}\right) \exp\left(-\frac{Zr}{3a_o}\right) \
R_{32}(r) & = \frac{4}{81\sqrt{30}} \left(\frac{Z}{a_o}\right)^{7/2} r^2 \exp\left(-\frac{Zr}{3a_o}\right)
\end{align}
End of explanation
r = np.linspace(0, 20, 200)
np.trapz(r**2 * radial1s(r)**2, x=r)
Explanation: Check of the normalisation condition.
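The value printed above is the radial normalisation integral, which should be equal to 1:
$$\int_0^{\infty} r^2 \left[R_{10}(r)\right]^2 \, \mathrm{d}r = 1$$
(evaluated numerically with np.trapz on a finite grid, so the result is only approximately 1).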
End of explanation
fig, axes = plt.subplots(
ncols=3, nrows=3,
figsize=(12, 12), sharex=True, sharey=True,
gridspec_kw=dict(wspace=.06, hspace=.06)
)
[ax.grid(False) for ax in axes.flatten()]
#plt.rcParams["font.size"] = 12
size = 18
r = np.linspace(0, 15, 400)
axes[0, 0].plot(r, radial1s(r), label=r"$\phi_{1s}$")
axes[0, 0].plot(r, r**2 * radial1s(r)**2, label=r"$r^2 \left\vert\phi_{1s}\right\vert^2$")
axes[1, 0].plot(r, radial2s(r), label=r"$\phi_{2s}$")
axes[1, 0].plot(r, r**2 * radial2s(r)**2, label=r"$r^2 \left\vert\phi_{2s}\right\vert^2$")
axes[1, 1].plot(r, radial2p(r), label=r"$\phi_{2p}$")
axes[1, 1].plot(r, r**2 * radial2p(r)**2, label=r"$r^2 \left\vert\phi_{2p}\right\vert^2$")
axes[2, 0].plot(r, radial3s(r), label=r"$\phi_{3s}$")
axes[2, 0].plot(r, r**2 * radial3s(r)**2, label=r"$r^2 \left\vert\phi_{3s}\right\vert^2$")
axes[2, 1].plot(r, radial3p(r), label=r"$\phi_{3p}$")
axes[2, 1].plot(r, r**2 * radial3p(r)**2, label=r"$r^2 \left\vert\phi_{3p}\right\vert^2$")
axes[2, 2].plot(r, radial3d(r), label=r"$\phi_{3d}$")
axes[2, 2].plot(r, r**2 * radial3d(r)**2, label=r"$r^2 \left\vert\phi_{3d}\right\vert^2$")
for i in range(3):
axes[i, 0].set_ylabel("n = %d" % (i + 1), fontsize=size)
for j in range(3):
if j > i:
axes[i, j].axis("off")
else:
axes[i, j].plot([0, 15], [0, 0], linewidth=.5, color="C7", label="")
axes[i, j].legend(fontsize=size, frameon=False)
if i == 2:
axes[i, j].set_xlabel("r ($\AA$)", fontsize=size)
if i == j:
axes[i, j].set_title("$\ell$ = %d" % j, fontsize=size)
if j > 0:
axes[i, j].yaxis.set_visible(False)
axes[0, 0].set_ylim((-.3, 1.05))
axes[0, 0].set_xlim((0, 14.9))
fig.suptitle("Parties radiales des orbitales atomiques", fontsize=size, y=.96)
fig.savefig("AO_radial.pdf", bbox_inches="tight")
Explanation: Graphical representation
Graphical representation of the wavefunction and of the radial probability density.
End of explanation
def Y00(theta, phi):
return 1 / np.sqrt(4 * np.pi)
def Y10(theta, phi):
return np.sqrt(3 / (4 * np.pi)) * np.cos(theta)
def Y20(theta, phi):
return np.sqrt(5 / (16 * np.pi)) * (3 * np.cos(theta)**2 - 1)
def Y30(theta, phi):
return np.sqrt(7 / (16 * np.pi)) * (5 * np.cos(theta)**3 - 3 * np.cos(theta))
Explanation: 2. Angular parts
Expressions of the angular parts
Spherical harmonics $Y^m_{\ell}(\theta, \varphi)$
\begin{align}
Y_0^0 & = \frac{1}{\sqrt{4\pi}} \
Y_1^0 & = \sqrt{\frac{3}{4\pi}} \cos\theta &
Y_1^{\pm1} & = \mp\sqrt{\frac{3}{2\pi}} \sin\theta \, e^{\pm i\varphi} \
Y_2^0 & = \sqrt{\frac{5}{16\pi}} \left(3\cos^2\theta-1\right) &
Y_2^{\pm 1} & = \mp \sqrt{\frac{15}{4\pi}} \sin\theta\cos\theta \, e^{\pm i\varphi} &
Y_2^{\pm 2} & = \sqrt{\frac{15}{4\pi}} \sin^2\theta \, e^{\pm 2i\varphi} \
Y_3^0 & = \sqrt{\frac{7}{16\pi}} \left(5\cos^3\theta - 3\cos\theta\right) &
Y_3^{\pm 1} & = \mp \sqrt{\frac{21}{64\pi}} \sin\theta\left(5\cos^2\theta - 1\right) \, e^{\pm i\varphi} &
Y_3^{\pm 2} & = \sqrt{\frac{105}{16\pi}} \sin^2\theta\cos\theta \, e^{\pm 2i\varphi} \
& & & & Y_3^{\pm 3} & = \mp \sqrt{\frac{35}{64\pi}} \sin^3\theta \, e^{\pm 3i\varphi}
\end{align}
The functions for m=0 are real:
End of explanation
def Y11x(theta, phi):
    """1 / sqrt(2) (-Y_1^1 + Y_1^-1)"""
return np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.cos(phi)
def Y21xz(theta, phi):
    """1 / sqrt(2) (-Y_2^1 + Y_2^-1)"""
return np.sqrt(15 / (2 * np.pi)) * np.sin(theta) * np.cos(theta) * np.cos(phi)
def Y31xz2(theta, phi):
    """1 / sqrt(2) (-Y_3^1 + Y_3^-1)"""
return np.sqrt(21 / (32 * np.pi)) * np.sin(theta) * (5 * np.cos(theta)**2 - 1) * np.cos(phi)
Explanation: We can construct real functions by combining functions with the same value of m. For example:
\begin{equation}
\frac{1}{\sqrt 2} \left(-Y_1^1 + Y_1^{-1}\right) = \sqrt{\frac{3}{2\pi}} \sin\theta\cos\varphi
\end{equation}
End of explanation
def pos_neg_part(fonction, theta, phi=0):
    """Return the positive and negative parts of the function."""
r = fonction(theta, phi)
ix = np.where(r >= 0)
xp = r[ix] * np.sin(theta[ix])
zp = r[ix] * np.cos(theta[ix])
ix = np.where(r < 0)
xn = -r[ix] * np.sin(theta[ix])
zn = -r[ix] * np.cos(theta[ix])
return xp, zp, xn, zn
fig, axes = plt.subplots(
ncols=4, nrows=2,
figsize=(14, 8), sharex=True, sharey=True,
gridspec_kw=dict(wspace=.05, hspace=.05)
)
for ax in axes.flatten():
ax.grid(False)
ax.set_aspect("equal")
ax.set_xlim(-lim, lim)
ax.set_ylim(-lim, lim)
npts = 500
lim = .8
a = .6
cp = "C0" # positive
cn = "C3" # negative
co = "C7" # nodal
theta = np.linspace(0, 2 * np.pi, npts)
# s
x = Y00(theta, phi=0) * np.sin(theta)
z = Y00(theta, phi=0) * np.cos(theta)
axes[0, 0].fill(x, z, color=cp, alpha=a)
axes[0, 0].plot(x, z, color=cp)
axes[0, 0].set_title("$ns$, $\ell=0$", fontsize=16)
axes[0, 0].set_xlabel("x / $\AA$", fontsize=14)
axes[0, 0].text(.45, .65, "$ns$", fontsize=14)
axes[1, 0].axis("off")
# p
xp, zp, xn, zn = pos_neg_part(Y10, theta)
axes[0, 1].fill(xp, zp, alpha=a, color=cp)
axes[0, 1].plot(xp, zp, color=cp)
axes[0, 1].fill(xn, zn, color=cn, alpha=a)
axes[0, 1].plot(xn, zn, color=cn)
axes[0, 1].plot((-lim, lim), (0, 0), color=co, lw=.5)
axes[0, 1].set_title("$np$, $\ell=1$", fontsize=16)
axes[0, 1].text(.45, .65, "$np_z$", fontsize=14)
axes[0, 1].yaxis.set_visible(False)
xp, zp, xn, zn = pos_neg_part(Y11x, theta)
axes[1, 1].fill(xp, zp, color=cp, alpha=a)
axes[1, 1].plot(xp, zp, color=cp)
axes[1, 1].fill(xn, zn, color=cn, alpha=a)
axes[1, 1].plot(xn, zn, color=cn)
axes[1, 1].plot((0, 0), (-lim, lim), color=co, lw=.5)
axes[1, 1].set_xlabel("x / $\AA$", fontsize=14)
axes[1, 1].set_ylabel("z / $\AA$", fontsize=14)
axes[1, 1].text(.45, .65, "$np_x$", fontsize=14)
# d
xp, zp, xn, zn = pos_neg_part(Y20, theta)
axes[0, 2].fill(xp, zp, color=cp, alpha=a)
axes[0, 2].plot(xp, zp, color=cp)
axes[0, 2].fill(xn, zn, color=cn, alpha=a)
axes[0, 2].plot(xn, zn, color=cn)
theta0 = np.arccos(1 / np.sqrt(3))
axes[0, 2].plot((-np.sin(theta0), np.sin(theta0)), (-np.cos(theta0), np.cos(theta0)), color=co, lw=.5)
axes[0, 2].plot((-np.sin(theta0), np.sin(theta0)), (np.cos(theta0), -np.cos(theta0)), color=co, lw=.5)
axes[0, 2].set_title("$nd$, $\ell=2$", fontsize=16)
axes[0, 2].text(.45, .65, "$nd_{z^2}$", fontsize=14)
axes[0, 2].yaxis.set_visible(False)
xp, zp, xn, zn = pos_neg_part(Y21xz, theta)
axes[1, 2].fill(xp, zp, color=cp, alpha=a)
axes[1, 2].plot(xp, zp, color=cp)
axes[1, 2].fill(xn, zn, color=cn, alpha=a)
axes[1, 2].plot(xn, zn, color=cn)
axes[1, 2].plot((0, 0), (-lim, lim), color=co, lw=.5)
axes[1, 2].plot((-lim, lim), (0, 0), color=co, lw=.5)
axes[1, 2].set_xlabel("x / $\AA$", fontsize=14)
axes[1, 2].text(.45, .65, "$nd_{xz}$", fontsize=14)
axes[1, 2].yaxis.set_visible(False)
# f
xp, zp, xn, zn = pos_neg_part(Y30, theta)
axes[0, 3].fill(xp, zp, color=cp, alpha=a)
axes[0, 3].plot(xp, zp, color=cp)
axes[0, 3].fill(xn, zn, color=cn, alpha=a)
axes[0, 3].plot(xn, zn, color=cn)
axes[0, 3].plot((-lim, lim), (0, 0), color=co, lw=.5)
theta0 = np.arccos(np.sqrt(3/5))
axes[0, 3].plot((-np.sin(theta0), np.sin(theta0)), (-np.cos(theta0), np.cos(theta0)), color=co, lw=.5)
axes[0, 3].plot((-np.sin(theta0), np.sin(theta0)), (np.cos(theta0), -np.cos(theta0)), color=co, lw=.5)
axes[0, 3].set_title("$nf$, $\ell=3$", fontsize=16)
axes[0, 3].text(.45, .65, "$nf_{z^3}$", fontsize=14)
axes[0, 3].yaxis.set_visible(False)
xp, zp, xn, zn = pos_neg_part(Y31xz2, theta)
axes[1, 3].fill(xp, zp, color=cp, alpha=a)
axes[1, 3].plot(xp, zp, color=cp)
axes[1, 3].fill(xn, zn, color=cn, alpha=a)
axes[1, 3].plot(xn, zn, color=cn)
axes[1, 3].plot((0, 0), (-lim, lim), color=co, lw=.5)
theta0 = np.arccos(np.sqrt(1/5))
axes[1, 3].plot((-np.sin(theta0), np.sin(theta0)), (-np.cos(theta0), np.cos(theta0)), color=co, lw=.5)
axes[1, 3].plot((-np.sin(theta0), np.sin(theta0)), (np.cos(theta0), -np.cos(theta0)), color=co, lw=.5)
axes[1, 3].set_xlabel("x / $\AA$", fontsize=14)
axes[1, 3].text(.45, .65, "$nf_{xz^2}$", fontsize=14)
axes[1, 3].yaxis.set_visible(False)
# layout
axes[0, 0].set_ylabel("z / $\AA$", fontsize=14)
fig.suptitle("Parties angulaires des orbitales atomiques", fontsize=18)
fig.savefig("OA_angular.pdf", bbox_inches="tight")
Explanation: Graphical representation
We will plot two real functions for each value of $\ell$.
End of explanation
def sample(fonction, rmax=10, ntry=10000, phi=0):
    """Sample the probability density associated with an atomic orbital (rejection sampling)."""
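    # Rejection sampling, in brief: propose uniform points (x, z) in the square
    # [-rmax, rmax]^2, evaluate the density rho = |psi(r, theta, phi)|^2 there,
    # and keep only the points for which a uniform random number falls below rho.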
x = np.random.uniform(-rmax, rmax, ntry)
z = np.random.uniform(-rmax, rmax, ntry)
r = np.sqrt(x**2 + z**2)
    theta = np.arccos(z / r)  # the polar angle theta of spherical coordinates
rho = fonction(r, theta, phi=phi)**2
rnd = np.random.rand(ntry)
ix = np.where(rho > rnd)
return x[ix], z[ix]
Explanation: 3. Atomic orbitals
The general expression of the atomic orbitals involves a radial part and an angular part and is characterised by the three quantum numbers $(n, \ell, m_{\ell})$:
\begin{equation}
\Psi_{n, \ell, m_{\ell}}(r, \theta, \varphi) = R_{n, \ell} (r) \, Y_{\ell}^{m_{\ell}}(\theta, \varphi)
\end{equation}
We plot the electron density associated with different atomic orbitals in the $(xOz)$ plane.
End of explanation
def OA1s(r, theta, phi, ao=0.529, Z=1):
return radial1s(r, Z, ao) * Y00(theta, phi)
def OA2s(r, theta, phi, ao=0.529, Z=1):
return radial2s(r, Z, ao) * Y00(theta, phi)
def OA3s(r, theta, phi, ao=0.529, Z=1):
return radial3s(r, Z, ao) * Y00(theta, phi)
fig, axes = plt.subplots(
ncols=3, nrows=1,
figsize=(12, 4), sharex=True, sharey=True,
gridspec_kw=dict(wspace=.05, hspace=.05)
)
# select ntry so that you get about the same amount of points for each case
# 1s
x, z = sample(OA1s, ntry=1000000, rmax=10)
axes[0].scatter(x, z, s=4, color="C7")
axes[0].set(aspect="equal")
print("1s", x.shape)
# 2s
x, z = sample(OA2s, ntry=6000000, rmax=12)
axes[1].scatter(x, z, s=4, color="C7")
axes[1].set(aspect="equal")
axes[1].grid(False)
axes[1].yaxis.set_visible(False)
print("2s", x.shape)
# 3s
x, z = sample(OA3s, ntry=20000000, rmax=14)
axes[2].scatter(x, z, s=4, color="C7")
axes[2].set(aspect="equal")
print("3s", x.shape)
rmax = 10
axes[2].set_xlim((-rmax, rmax))
axes[2].set_ylim((-rmax, rmax))
axes[2].grid(False)
axes[2].yaxis.set_visible(False)
axes[0].set_ylabel("z / $\AA$", fontsize=14)
axes[0].grid(False)
axes[0].yaxis.set_visible(False)
for i in range(3):
axes[i].set_xlabel("x / $\AA$", fontsize=14)
axes[i].set_title("%ds" % (i+1), fontsize=14)
fig.suptitle("Densité électroniques (OA de type s)", fontsize=18, y=1.05)
fig.savefig("nuage_electronique_s.pdf", bbox_inches="tight")
# select ntry so that you get about the same amount of points for each case
rmax = 11
fig = plt.figure(figsize=(12, 12))
r = np.linspace(0, 15, 400)
for i, OA in enumerate([(radial1s, "1s"), (radial2s, "2s"), (radial3s, "3s")]):
fonction, label = OA
ax = fig.add_subplot(3, 3, i + 1)
ax.plot(r, fonction(r), color="C0")
ax.plot(-r, fonction(r), color="C0")
ax.plot((-rmax, rmax), (0, 0), color="C7", linewidth=.5)
ax.set_ylim((-.3, 4))
ax.set_xlim((-rmax, rmax))
ax.xaxis.set_visible(False)
ax.set_title(label, fontsize=16)
ax.grid(False)
if i == 0:
ax.set_ylabel("Fonction d'onde", fontsize=14)
if i > 0:
ax.yaxis.set_visible(False)
ax = fig.add_subplot(3, 3, 7 + i)
ax.plot(r, r**2 * fonction(r)**2, color="C1")
ax.plot(-r, r**2 * fonction(r)**2, color="C1")
ax.set_xlim((-rmax, rmax))
ax.set_ylim((-.05, 1))
ax.plot((-rmax, rmax), (0, 0), color="C7", linewidth=.5)
ax.set_xlabel("x / $\AA$", fontsize=14)
ax.grid(False)
if i == 0:
ax.set_ylabel("Densité de probabilité\nde présence", fontsize=14)
if i > 0:
ax.yaxis.set_visible(False)
# 1s
x, z = sample(OA1s, ntry=1000000, rmax=10)
ax = fig.add_subplot(3, 3, 4)
ax.scatter(x, z, s=4, alpha=1, color="C7")
ax.set_xlim((-rmax, rmax))
ax.set(aspect="equal")
ax.xaxis.set_visible(False)
ax.set_ylabel("z / $\AA$", fontsize=14)
ax.grid(False)
print("1s", x.shape)
# 2s
x, z = sample(OA2s, ntry=6000000, rmax=12)
ax = fig.add_subplot(3, 3, 5, sharey=ax)
ax.scatter(x, z, s=4, alpha=1, color="C7")
ax.set_xlim((-rmax, rmax))
ax.set(aspect="equal")
ax.yaxis.set_visible(False)
ax.xaxis.set_visible(False)
ax.grid(False)
print("2s", x.shape)
# 3s
x, z = sample(OA3s, ntry=20000000, rmax=14)
ax = fig.add_subplot(3, 3, 6, sharey=ax)
ax.scatter(x, z, s=4, alpha=1, color="C7")
ax.set_xlim((-rmax, rmax))
ax.set(aspect="equal")
ax.yaxis.set_visible(False)
ax.xaxis.set_visible(False)
ax.grid(False)
print("3s", x.shape)
ax.set_xlim((-rmax, rmax))
ax.set_ylim((-rmax, rmax))
fig.suptitle("Densité électroniques (OA de type s)", fontsize=18, y=.95)
fig.savefig("AO_s.pdf", bbox_inches="tight")
fig.subplots_adjust(wspace=.05, hspace=.05)
Explanation: Spherically symmetric atomic orbitals
End of explanation
def OA2pz(r, theta, phi, ao=0.529, Z=1):
return radial2p(r, Z, ao) * Y10(theta, phi)
def OA3pz(r, theta, phi, ao=0.529, Z=1):
return radial3p(r, Z, ao) * Y10(theta, phi)
def OA3dz2(r, theta, phi, ao=0.529, Z=1):
return radial3d(r, Z, ao) * Y20(theta, phi)
def OA4fz3(r, theta, phi, ao=0.529, Z=1):
rho = Z * r / ao
radial = 1 / (768 * np.sqrt(35)) * (Z/ao)**(3/2) * rho**3 * np.exp(- rho / 4)
return radial * Y30(theta, phi)
fig, axes = plt.subplots(
ncols=2, nrows=3,
figsize=(10, 15), sharex=True, sharey=True,
gridspec_kw=dict(wspace=.05, hspace=.05)
)
[ax.grid(False) for ax in axes.flatten()]
lim = 20
co = "C3"
# 1s
x, z = sample(OA1s, ntry=1200000, rmax=12)
axes[0, 0].scatter(x, z, s=4, color="C7")
axes[0, 0].set(aspect="equal")
axes[0, 0].text(10, 10, "$1s$", fontsize=16)
print("1s", x.shape)
# 2s
x, z = sample(OA2s, ntry=4000000, rmax=12)
axes[0, 1].scatter(x, z, s=4, color="C7")
axes[0, 1].set(aspect="equal")
axes[0, 1].text(10, 10, "$2s$", fontsize=16)
print("2s", x.shape)
# 2p
x, z = sample(OA2pz, ntry=5000000, rmax=15)
axes[1, 0].scatter(x, z, s=4, color="C7")
axes[1, 0].set_aspect("equal")
axes[1, 0].plot((-lim, lim), (0, 0), color=co, lw=.5)
axes[1, 0].text(10, 11, "$2p_z$", fontsize=16)
print("2p", x.shape)
# 3p
x, z = sample(OA3pz, ntry=15000000, rmax=15)
axes[1, 1].scatter(x, z, s=4, color="C7")
axes[1, 1].set_aspect("equal")
axes[1, 1].plot((-lim, lim), (0, 0), color=co, lw=.5)
axes[1, 1].text(10, 10, "$3p_z$", fontsize=16)
print("3p", x.shape)
# 3dz2
x, z = sample(OA3dz2, ntry=15000000, rmax=15)
axes[2, 0].scatter(x, z, s=4, color="C7")
axes[2, 0].set_aspect("equal")
theta0 = np.arccos(np.sqrt(1/3))
axes[2, 0].plot(
(-lim * np.sin(theta0), lim * np.sin(theta0)),
(-lim * np.cos(theta0), lim * np.cos(theta0)),
color=co, lw=.5
)
axes[2, 0].plot(
(-lim * np.sin(theta0), lim * np.sin(theta0)),
(lim * np.cos(theta0), -lim * np.cos(theta0)),
color=co, lw=.5
)
axes[2, 0].text(10, 10, "$3d_{z^2}$", fontsize=16)
print("3d", x.shape)
# 4fz3
x, z = sample(OA4fz3, ntry=40000000, rmax=15)
axes[2, 1].scatter(x, z, s=4, color="C7")
axes[2, 1].set_aspect("equal")
theta0 = np.arccos(np.sqrt(3/5))
axes[2, 1].plot(
(-lim * np.sin(theta0), lim * np.sin(theta0)),
(-lim * np.cos(theta0), lim * np.cos(theta0)),
color=co, lw=.5
)
axes[2, 1].plot(
(-lim * np.sin(theta0), lim * np.sin(theta0)),
(lim * np.cos(theta0), -lim * np.cos(theta0)),
color=co, lw=.5
)
axes[2, 1].plot((-lim, lim), (0, 0), color=co, lw=.5)
axes[2, 1].text(10, 10, "$4f_{z^3}$", fontsize=16)
print("4f", x.shape)
# layout
axes[1, 1].set_xlim((-14, 14))
axes[1, 1].set_ylim((-14, 14))
axes[0, 0].set_ylabel("z / $\AA$", fontsize=14)
axes[1, 0].set_ylabel("z / $\AA$", fontsize=14)
axes[2, 0].set_ylabel("z / $\AA$", fontsize=14)
axes[2, 0].set_xlabel("x / $\AA$", fontsize=14)
axes[2, 1].set_xlabel("x / $\AA$", fontsize=14)
fig.suptitle("Densité électronique des orbitales atomique", fontsize=18, y=.92)
fig.savefig("AO_densite_electronique.pdf", bbox_inches="tight")
Explanation: p, d and f atomic orbitals
End of explanation |
4,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning in Cyber Security
Learn to spot an attacker through monitoring TCP logs.
Dataset
Step1: Transform Data
Dataset values are all of type "object" => convert to numeric types.
Label Encoder - replaces strings with an incrementing integer.
Step2: Preprocessing Data
Step3: Train Model
Step4: Use model to make predictions
Step5: Sources
[1] - KDD Cup 99 dataset
[2] - M. Tavallaee, E. Bagheri, W. Lu, and A. Ghorbani, “A Detailed Analysis of the KDD CUP 99 Data Set,” Submitted to Second IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), 2009. link
Other Resources
PySpark solution to the KDDCup99
link
Logs
Public PCAP files for PCAP-based evaluation of network-based intrusion detection system (NIDS) evaluation.
The Cyber Research Center - DataSets - ITOC CDX (2009)
Labelled datasets
UNB ISCX (2012-) datasets contain a range of "sophisticated" intrusion attacks, botnets and DoS attacks.
CSIC 2010 HTTP Dataset in CSV format (for Weka Analysis) dataset is from a web penetration testing testbed for anomaly detection training.
Attack Challenge - ECML/PKDD Workshop (2007) dataset contains web penetration testing data.
NSL-KDD Data Set (2007) intended to replace the DARPA KDDCup99 dataset for IDS.
gureKddcup data base (2008) intended to replace the DARPA KDDCup99 dataset for IDS.
CTU-13 dataset - pcap files (Stratosphere IPS).
Where to go from here
Seeking more labelled datasets and determining the potential for other non-labelled datasets. | Python Code:
from array import array
import matplotlib.pyplot as plt
import pandas as pd
import sklearn
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.cluster import KMeans
%matplotlib inline
dataset_part = fetch_kddcup99(percent10=True) # Over 600 MB in memory.
# dataset_full = fetch_kddcup99(percent10=False) # Crashed my computer with 16 GB of RAM.
dataset_part.data[0] # Sample of TCP record.
len(set(dataset_part.target)) # Number of unique classifications.
Explanation: Machine Learning in Cyber Security
Learn to spot an attacker through monitoring TCP logs.
Dataset: KDD Cup 1999
Source [1]
TCP data dump - array of 41 variables (TCP features)
Basic Features (9)
Content Features (13)
Traffic Features (19)
Consists of 23 types of attacks:
DOS: back, land, neptune, pod, smurf, teardrop
R2L: ftp_write, guess_passwd, imap, multihop, phf, spy, warezclient, warezmaster
U2R: buffer_overflow, loadmodule, perl, rootkit
Probing: ipsweep, nmap, portsweep, satan
Basic Features of TCP
duration - connection time in seconds.
protocol_type - i.e. TCP, UDP
service - Network service on destination (i.e. HTTP, Telnet)
source_bytes - amount of data from source to destination.
flag - normal or error status of connection.
land - Connection is from/to the same host/port: "1", else "0".
wrong_fragment - number of "wrong" fragments.
urgent - number of urgent packets.
Content Features of TCP
host - number of "hot" indicators.
num_failed_logins - number of attempts.
logged_in - success: "1", else "0".
compromised - number of "compromised" conditions.
root_shell - if root shell obtained: "1", else "0".
su_attempted - if "su" command attempted: "1", else "0".
num_root - number of root accesses.
num_file_creations - number of creation operations.
num_shells - number of shell prompts.
num_access_files - number of operations on access control files.
num_outbound_cmds - number per session.
is_hot_login - if login on "hot" list: "1", else "0".
is_guest_login - if guest: "1", else "0".
Traffic Features of TCP (2 sec window)
count - number of connections to the same host.
serror_rate - % of connections with "SYN" errors.
rerror_rate - % of connections with "REJ" errors.
same_srv_rate - % of connections to the same service.
diff_srv_rate - % of connections to different services.
srv_count - number of connections to the same service.
srv_serror_rate - % of connections with "SYN" errors.
srv_rerror_rate - % of connections with "REJ" errors.
srv_diff_host_rate - % of connections to different hosts.
Models (from scikit-learn)
Generative:
Naive Bayes
Discriminative:
kNN
kMeans
Logistic Regression
Decision Tree
Random Forest
Others to consider:
LinearDiscriminantAnalysis
GaussianNB
NBTree, Random Tree, Multilayer Perceptron (additional models from [2])
SVC (Not used for this problem. "The fit time complexity is more than quadratic with the number of samples which makes it hard to scale to dataset with more than a couple of 10000 samples.")
End of explanation
df = pd.DataFrame(dataset_part.data)
df.head(1)
df = df.apply(pd.to_numeric, errors='ignore')
# Example from http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
'''
le = preprocessing.LabelEncoder()
le.fit(list(names))
# le.classes_ # Shows all labels.
print(le.transform([b'icmpeco_iSF', b'icmpecr_iSF', b'icmpred_iSF']) )
print(le.inverse_transform([0, 0, 1, 2]))
'''
# https://datascience.stackexchange.com/questions/16728/could-not-convert-string-to-float-error-on-kddcup99-dataset
for column in df.columns:
if df[column].dtype == object:
le = preprocessing.LabelEncoder()
df[column] = le.fit_transform(df[column])
df.head(1) # All strings removed.
Explanation: Transform Data
Dataset values are all of type "object" => convert to numeric types.
Label Encoder - replaces strings with an incrementing integer.
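A minimal illustration of what LabelEncoder does inside the per-column loop above (toy byte-string values, not taken from the dataset):
le = preprocessing.LabelEncoder()
le.fit_transform([b'tcp', b'udp', b'tcp', b'icmp'])  # -> array([1, 2, 1, 0]); le.classes_ is [b'icmp', b'tcp', b'udp']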
End of explanation
X = df.values
le = preprocessing.LabelEncoder()
y = le.fit_transform(dataset_part.target)
y_dict = dict(zip(y,le.classes_)) # Saved for later lookup.
# Test options and evaluation metric
N_SPLITS = 7
SCORING = 'accuracy'
# Split-out validation dataset
test_size=0.33
SEED = 42
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, random_state=SEED)
Explanation: Preprocessing Data
End of explanation
# Algorithms
models = [
#('LR', LogisticRegression()),
('LDA', LinearDiscriminantAnalysis()),
#('KNN', KNeighborsClassifier()),
#('KMN', KMeans()),
#('CART', DecisionTreeClassifier()),
#('NB', GaussianNB()),
]
# evaluate each model in turn
results = []
names = []
print('{:8}{:^8}{:^8}'.format('Model','mean','std'))
print('-' * 23)
for name, model in models:
kfold = KFold(n_splits=N_SPLITS, random_state=SEED)
    cv_results = cross_val_score(model, X_train, y_train, cv=kfold, scoring=SCORING)  # plain call: %timeit would not leave cv_results defined
results.append(cv_results)
names.append(name)
print('{:8}{:^8.2%}{:^8.2%}'.format(name, cv_results.mean(), cv_results.std()))
print(*cv_results)
previous_results = '''
LR: 98.87% (0.10%)
LDA: 99.49% (0.05%)
KNN: 99.84% (0.01%) <-- slow
CART: 99.94% (0.00%)
NB: 93.96% (0.96%)
SVM: <-- very slow
'''
# Compare Algorithms
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
Explanation: Train Model
End of explanation
test = [0, 1, 22, 9, 181, 5450, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 8, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 9, 9, 1.0, 0.0, 0.11, 0.0, 0.0, 0.0, 0.0, 0.0]
# Fit a k-nearest-neighbours classifier first (no fitted `neigh` model exists above)
neigh = KNeighborsClassifier()
neigh.fit(X_train, y_train)
print(neigh.predict([test]))
print(neigh.predict_proba([test])) # TODO: research this.
Explanation: Use model to make predictions:
End of explanation
print('{:10}{:10}{:10}'.format('Model','mean','std'))
print('LDA: 99.49% (0.05%)')
print('{:8}{:^8}{:^8}'.format('Model','mean','std'))
print('-' * 23)
print('{:8}{:^8.2%}{:^8.2%}'.format('LDA', .9949, .0005))
Explanation: Sources
[1] - KDD Cup 99 dataset
[2] - M. Tavallaee, E. Bagheri, W. Lu, and A. Ghorbani, “A Detailed Analysis of the KDD CUP 99 Data Set,” Submitted to Second IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), 2009. link
Other Resources
PySpark solution to the KDDCup99
link
Logs
Public PCAP files for PCAP-based evaluation of network-based intrusion detection system (NIDS) evaluation.
The Cyber Research Center - DataSets - ITOC CDX (2009)
Labelled datasets
UNB ISCX (2012-) datasets contain a range of "sophisticated" intrusion attacks, botnets and DoS attacks.
CSIC 2010 HTTP Dataset in CSV format (for Weka Analysis) dataset is from a web penetration testing testbed for anomaly detection training.
Attack Challenge - ECML/PKDD Workshop (2007) dataset contains web penetration testing data.
NSL-KDD Data Set (2007) intended to replace the DARPA KDDCup99 dataset for IDS.
gureKddcup data base (2008) intended to replace the DARPA KDDCup99 dataset for IDS.
CTU-13 dataset - pcap files (Stratosphere IPS).
Where to go from here
Seeking more labelled datasets and determining the potential for other non-labelled datasets.
End of explanation |
4,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We start by importing NumPy which you should be familiar with from the previous tutorial. The next library introduced is called MatPlotLib, which is roughly the Python equivalent of Matlab's plotting functionality. Think of it as a Mathematical Plotting Library.
Let's use NumPy to create a Gaussian distribution and then plot it.
Step1: We make a figure object that allows us to draw things inside of it. This is our canvas which lets us save the entire thing as an image or a PDF to our computer.
We also split up this canvas to a 2x2 grid and tell matplotlib that we want 4 axes object. Each axes object is a separate plot that we can draw into. For the purposes of the exercise, we'll demonstrate the different linestyles in each subplot. The ordering is by setting [0,0] to the top-left and [n,m] to the bottom-right. As this returns a 2D array, you access each axis by ax[i,j] notation.
Step2: Now, for each axes, we want to draw one of the four different example linestyles so you can get an idea of how this works.
Step3: You can see that we use ax.reshape(-1) which flattens our axes object, so we can just loop over all 4 entries without nested loops, and we combine this with the different linestyles we want to look at
Step4: But as a perfectionist, I dislike that things look like they overlap... let's fix this using matplotlib.tight_layout()
Step5: Sharing Axes
A nice example to demonstrate another feature of NumPy and Matplotlib together for analysis and visualization is to make one of my favorite kinds of plots
Step6: Draw size=1000000 random samples from a multivariate normal distribution. We first specify the means
Step7: Oh, that looks weird, maybe we should increase the binning.
Step8: And we can understand the underlying histograms that lie alone each axis.
Step9: Now let's combine the these plots in a way that teaches someone what a 2D histogram represents along each dimension. In order to get our histogram for the y-axis "rotated", we just need to specify a orientiation='horizontal' when drawing the histogram.
Step10: But again, I am not a huge fan of the whitespace between subplots, so I run the following | Python Code:
# imports used throughout this notebook (referenced in the text but missing from the code cells)
import numpy as np
import matplotlib.pyplot as pl

fig, ax = pl.subplots(2,2, figsize=(8,6))
fig
ax
ax[0,0]
Explanation: We start by importing NumPy which you should be familiar with from the previous tutorial. The next library introduced is called MatPlotLib, which is roughly the Python equivalent of Matlab's plotting functionality. Think of it as a Mathematical Plotting Library.
Let's use NumPy to create a Gaussian distribution and then plot it.
End of explanation
# create x values from [0,99)
x = np.arange(100)
x
# generate y values based on a Gaussian PDF
y1 = np.random.normal(loc=0.0, scale=1.0, size=x.size) # mu=0.0, sigma=1.0
y2 = np.random.normal(loc=2.0, scale=2.0, size=x.size) # mu=1.0, sigma=2.0
y3 = np.random.normal(loc=-2.0, scale=0.5, size=x.size)# mu=-1.0, sigma=0.5
y1[:20] # just show the first 20 as an example
Explanation: We make a figure object that allows us to draw things inside of it. This is our canvas which lets us save the entire thing as an image or a PDF to our computer.
We also split up this canvas to a 2x2 grid and tell matplotlib that we want 4 axes object. Each axes object is a separate plot that we can draw into. For the purposes of the exercise, we'll demonstrate the different linestyles in each subplot. The ordering is by setting [0,0] to the top-left and [n,m] to the bottom-right. As this returns a 2D array, you access each axis by ax[i,j] notation.
End of explanation
for axis, linestyle in zip(ax.reshape(-1), ['-', '--', '-.', ':']):
axis.plot(x, y1, color="red", linewidth=1.0, linestyle=linestyle)
axis.plot(x, y2, color="blue", linewidth=1.0, linestyle=linestyle)
axis.plot(x, y3, color="green", linewidth=1.0, linestyle=linestyle)
axis.set_title('line style: '+linestyle)
axis.set_xlabel("$x$")
axis.set_ylabel("$e^{-\\frac{(x-\\mu)^2}{2\\sigma}}$")
Explanation: Now, for each axes, we want to draw one of the four different example linestyles so you can get an idea of how this works.
End of explanation
fig
Explanation: You can see that we use ax.reshape(-1) which flattens our axes object, so we can just loop over all 4 entries without nested loops, and we combine this with the different linestyles we want to look at: ['-', '--', '-.', ':'].
So for each axis, we plot y1, y2, and y3 with different colors for the same linestyle and then set the title. Let's look at the plots we just made:
End of explanation
pl.tight_layout() # a nice command that just fixes overlaps
fig
pl.clf() # clear current figure
Explanation: But as a perfectionist, I dislike that things look like they overlap... let's fix this using matplotlib.tight_layout()
End of explanation
data_2d = np.random.multivariate_normal([10, 5], [[9,3],[3,18]], size=1000000)
Explanation: Sharing Axes
A nice example to demonstrate another feature of NumPy and Matplotlib together for analysis and visualization is to make one of my favorite kinds of plots
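As a small worked check on the numbers involved: with the covariance matrix [[9, 3], [3, 18]] used to generate data_2d above, the implied correlation between the two coordinates is $3/\sqrt{9 \times 18} \approx 0.24$, so the joint distribution plotted below is only mildly tilted.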
End of explanation
pl.hist2d(data_2d[:, 0], data_2d[:,1])
pl.show()
Explanation: Draw size=1000000 random samples from a multivariate normal distribution. We first specify the means: [10, 5], then the covariance matrix of the distribution [[3,2],[2,3]]. What does this look like?
End of explanation
pl.hist2d(data_2d[:, 0], data_2d[:, 1], bins=100)
pl.show()
Explanation: Oh, that looks weird, maybe we should increase the binning.
End of explanation
fig, ax = pl.subplots()
ax.hist(data_2d[:,0], bins=100, color="red", alpha=0.5) # draw x-histogram
ax.hist(data_2d[:,1], bins=100, color="blue", alpha=0.5) # draw y-histogram
pl.show()
pl.clf()
Explanation: And we can understand the underlying histograms that lie alone each axis.
End of explanation
fig, ax = pl.subplots(2,2, sharex='col', sharey='row', figsize=(10,10))
# draw x-histogram at top-left
ax[0,0].hist(data_2d[:,0], bins=100, color="red") # draw x-histogram
# draw y-histogram at bottom-right
ax[1,1].hist(data_2d[:,1], bins=100, color="blue",orientation="horizontal")
# draw 2d histogram at bottom-left
ax[1,0].hist2d(data_2d[:, 0], data_2d[:, 1], bins=100)
# delete top-right
fig.delaxes(ax[0,1])
fig
Explanation: Now let's combine the these plots in a way that teaches someone what a 2D histogram represents along each dimension. In order to get our histogram for the y-axis "rotated", we just need to specify a orientiation='horizontal' when drawing the histogram.
End of explanation
pl.subplots_adjust(wspace=0, hspace=0)
fig
Explanation: But again, I am not a huge fan of the whitespace between subplots, so I run the following
End of explanation |
4,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualising modes
In this example, we use a simple approach to search for the modes of a split ring resonator, and visualise the corresponding current and charge distribution. These modes exist at complex frequencies (ie. values of $s$ with nonzero real parts), and are found using iterative search techniques
Step1: Setup simulation
First we load the geometry of a split ring from a file. Note how the geometric parameter inner_radius has been overriden to make the ring is slightly wider than in the previous example, for nicer plots.
Step2: Search for modes
Now we ask OpenModes to find the values of the complex frequency parameter s for which the system becomes singular. This is how we find the modes of the system, using an iterative search. Note that we need to specify a frequency at which to perform some intial estimations. The choice of this frequency is not too critical, but it should be somewhere in the frequency range of interest. Here we will calculate the 4 lowest order modes.
Notice that this is a 3 step process. First estimates are given for the location of the modes. Then they are iteratively refined. Finally, the complex conjugate modes are added (at negative $\omega$), as required for any physically realisable resonator.
Step3: Let us now plot the location of the modes in the complex s plane. The frequency of each mode is represented by their position on the $j\omega$ axis, while the $\Omega$ axis gives the damping, which is related to the width of the resonance.
Step4: We can find the computed values of the resonant frequencies in Hz by looking at the imaginary parts of the singular points. For this particular geometry, we see that these split rings resonate in the GHz frequency range. Note that the lowest order modes will be most accurately represented, whereas for higher order modes the mesh cells are larger relative to the wavelength. If in doubt, repeat the calculation for smaller mesh tolerance and see how much the values change.
Step5: Plot the mode currents and charges
As well as calculating the frequencies of the modes, we can also plot the corresponding surface currents and charges. The easiest way to view this calculated solution is with the the 3D interactive web-based plots that openmodes produces.
Use the mouse to navigate the plots. The left button rotates the view, the right button pans, and the scroll wheel zooms in and out. If you have problems viewing the output, please make sure that your web browser and graphics drivers are up to date.
As current and charge are complex quantities, you can view their real and imaginary parts, and for the charge also the magnitude and phase. | Python Code:
# the numpy library contains useful mathematical functions
import numpy as np
# import useful python libraries
import os.path as osp
# import the openmodes packages
import openmodes
# setup 2D plotting
%matplotlib inline
from openmodes.ipython import matplotlib_defaults
matplotlib_defaults()
import matplotlib.pyplot as plt
Explanation: Visualising modes
In this example, we use a simple approach to search for the modes of a split ring resonator, and visualise the corresponding current and charge distribution. These modes exist at complex frequencies (ie. values of $s$ with nonzero real parts), and are found using iterative search techniques
End of explanation
sim = openmodes.Simulation(notebook=True)
mesh = sim.load_mesh(osp.join(openmodes.geometry_dir, "SRR.geo"), parameters={'inner_radius': 2.5e-3}, mesh_tol=0.5e-3)
ring = sim.place_part(mesh)
Explanation: Setup simulation
First we load the geometry of a split ring from a file. Note how the geometric parameter inner_radius has been overriden to make the ring is slightly wider than in the previous example, for nicer plots.
End of explanation
start_freq = 2e9
start_s = 2j*np.pi*start_freq
num_modes = 4
estimates = sim.estimate_poles(start_s, modes=num_modes, cauchy_integral=False)
refined = sim.refine_poles(estimates)
modes = refined.add_conjugates()
Explanation: Search for modes
Now we ask OpenModes to find the values of the complex frequency parameter s for which the system becomes singular. This is how we find the modes of the system, using an iterative search. Note that we need to specify a frequency at which to perform some intial estimations. The choice of this frequency is not too critical, but it should be somewhere in the frequency range of interest. Here we will calculate the 4 lowest order modes.
Notice that this is a 3 step process. First estimates are given for the location of the modes. Then they are iteratively refined. Finally, the complex conjugate modes are added (at negative $\omega$), as required for any physically realisable resonator.
End of explanation
plt.figure()
plt.plot(estimates.s.imag, np.abs(estimates.s.real), 'x')
plt.xlabel('Frequency $j\omega$')
plt.ylabel('Damping rate $|\Omega|$')
plt.title('Complex eigenfrequencies of modes')
plt.tight_layout()
plt.show()
Explanation: Let us now plot the location of the modes in the complex s plane. The frequency of each mode is represented by their position on the $j\omega$ axis, while the $\Omega$ axis gives the damping, which is related to the width of the resonance.
End of explanation
print(refined.s.imag/2/np.pi)
Explanation: We can find the computed values of the resonant frequencies in Hz by looking at the imaginary parts of the singular points. For this particular geometry, we see that these split rings resonate in the GHz frequency range. Note that the lowest order modes will be most accurately represented, whereas for higher order modes the mesh cells are larger relative to the wavelength. If in doubt, repeat the calculation for smaller mesh tolerance and see how much the values change.
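Writing each pole as $s = \Omega + j\omega$ (the two axes of the plot above), the conversion used here is simply $f = \omega/2\pi = \operatorname{Im}(s)/2\pi$.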
End of explanation
for mode in range(num_modes):
sim.plot_3d(solution=refined.vr["J", :, 'modes', mode], width=400, height=400)
Explanation: Plot the mode currents and charges
As well as calculating the frequencies of the modes, we can also plot the corresponding surface currents and charges. The easiest way to view this calculated solution is with the the 3D interactive web-based plots that openmodes produces.
Use the mouse to navigate the plots. The left button rotates the view, the right button pans, and the scroll wheel zooms in and out. If you have problems viewing the output, please make sure that your web browser and graphics drivers are up to date.
As current and charge are complex quantities, you can view their real and imaginary parts, and for the charge also the magnitude and phase.
End of explanation |
4,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The brythonmagic extension has been tested on
Step1: brythonmagic installation
Just type the following
Step2: And load the brython js lib in the notebook
Step3: Warning
In order to load javascript libraries in a safe way you should try to use https instead of http when possible (read more here). If you don't trust the source and/or the source cannot be loaded using https then you could download the javascript library and load it from a local location.
Usage
Step4: -c, --container option
In the following example can be seen the use of the -c, --container. The -p is also used to show you the result. See the id attribute of the div tag created
Step6: -i, --input option
In this example you can see how the data are passed to brython from python using the -i or --input option. First, we create some data in a regular Python cell.
Step7: And now, the created data are passed to Brython and used in the Brython code cell. Remember that only Python lists, tuples, dicts and strings are allowed as inputs.
Step9: -h, --html option
In this example you can see how to create some HTML code in a cell and then use that HTML code in the brython cell. In this way you do not need to create the HTML code via scripting with Brython.
Step10: -s, --script option
With this option you are creating a reference of the code in the Brython cell (e.g., an id of the HTML script tag created to run the Brython code). So, if you need to use the code of the Brython cell in a future Brython cell you could reference it by its id. Let's see this on an example (the -p option is used to show you the generated code and how the id of the script tag is created)
Step11: -S, --scripts option
This option could be used to call code created in a previous Brython code cell using its id (see the -s option above). In the following code cell we will use the dummy_function created in another Brython code cell. The dummy_function was created in a script tag with an id="my_dummy_function".
[HINT] The result of the Brython code cell below is shown in the javascript console of your browser.
Step12: -f, --fiddle option
With this option, the code in the cell will be automatically uploaded to gist.github.com/ as an anonymous gist with several files in it. This files will be used to create an anonymous 'fiddle' on jsfiddle.net. Finally, some links will be printed in the output linking to the gist and the fiddle.
Step13: -e, --embedfiddle option
With this option, the code in the cell will be automatically uploaded to gist.github.com/ as an anonymous gist with several files in it. These files will be used to create an anonymous 'fiddle' on jsfiddle.net. Finally, some links will be printed in the output linking to the gist and the fiddle and an iframe will be created showing the fiddle on jsfiddle.net.
Step14: How to use Brython in the IPython notebook
First step should be to read the brython documentation. You can find the docs here
Step15: Simple example, writing some numbers in the div container
In this example we just write inside a <div> ten numbers using a <P> tag for each number.
[HINT] To see the line numbers in the code cell just go to the cell and press <CTRL>-m and then l.
Line 2
Step16: A more useful example
Step17: Let's add some animation using HTML5 canvas technology...
In the following example we draw a shape using the HTML5 canvas. Also, we add some controls to stop and animate the shape. The example has been adapted from the javascript example available here.
Step18: Interaction with other javascript libraries
Step19: Now, we can access D3 objects(see example below). In the result you can see how the circle change its color when the mouse is over the circle.
Step20: Manipulating the IPython notebook
An example to hide or show the code cells using a button.
Step21: A more complete d3 example calculating things in Python and drawing results in Brython using D3.js
A more complete D3 example. In this case, first we create some data in Python.
Step22: And now, the data is passed to Brython to be used in a D3 plot. In this case, the D3.js library is already loaded so it is not necessary to load it.
Step23: Mapping with Python in the IPython notebook using OpenLayers?
In the following example we will use OpenLayers to center a map in a specific location, with a zoom and a projection and then we will draw some vector points around the location.
As before, first we should load the OpenLayers.js library.
Step24: And now we can create a map.
Step25: Using Raphaël.js
A dummy example using raphaël.js library.
As usual, first we should include the library
Step26: And now let's make a dumb example using JSObject.
Step27: Include the cell number for each cell
Cells start at 0 and every cell (markdown, headings, code, ...) has a number. If we want to re-run some cells programmatically it is useful to know the cell numbers to identify them. You can remove the cell numbers using show_cell_number(on = False)
Step28: Running Python cells as a loop
Imagine you have several cells of code and you want just to modify some data and run again these cells as a loop not having to create a big cell with the code of the cells together.
Step29: Get the code of all the cells and create a new cell with the code
If you want to compile all the code used in a notebook you can use this recipe (<span style="color
Step31: Styling the nb
Lets modify a little bit the look of the notebook. Warning | Python Code:
import IPython
IPython.version_info
Explanation: The brythonmagic extension has been tested on:
End of explanation
%install_ext https://raw.github.com/kikocorreoso/brythonmagic/master/brythonmagic.py
%load_ext brythonmagic
Explanation: brythonmagic installation
Just type the following:
End of explanation
from brythonmagic import load_brython_dev
load_brython_dev()
Explanation: And load the brython js lib in the notebook:
End of explanation
%%brython -p
print('hello world!')
Explanation: Warning
In order to load javascript libraries in a safe way you should try to use https instead of http when possible (read more here). If you don't trust the source and/or the source cannot be loaded over https, then you can download the javascript library and load it from a local location.
Usage:
The brythonmagic provides you a cell magic, %%brython, to run brython code and show the results in a html div tag below the code cell.
You can use several options:
-p, --print: will show you the generated html code below the results obtained from the brython code.
-c, --container: you can define de name of the div container in case you want to 'play' with it in other cell. If you don't define an output the div will have and id with the following format 'brython-container-[random number between 0 and 999999]'
-i, --input: you can pass variables defined in the Python namespace separated by commas. If you pass a python list it will be converted to a brython list, a python tuple will be converted to a brython tuple, a python dict will be converted to a brython dict, a python string will be converted to a brython string.
-h, --html: you can pass a string with html markup code. This html code will be inserted inside the div container. In this way you can avoid the generation of HTML markup code via a Brython script so you can separate the layout from the 'action'.
-s, --script: Use this option to provide and id to the script defined in the Brython code cell. Also, this value could be used to run the code of this cell in other brython cells.
-S, --scripts: Use this option to run code previously defined in other Brython code cells. The values should be the provided values in the -s/--script option in other Brython code cells.
-f, --fiddle: With this option, the code in the cell will be automatically uploaded to gist.github.com/ as an anonymous gist with several files in it. These files will be used to create an anonymous 'fiddle' on jsfiddle.net. Finally, some links will be printed in the output linking to the gist and the fiddle. See an example here (https://gist.github.com/anonymous/b664e8b4617afc09db6c and http://jsfiddle.net/gh/gist/library/pure/b664e8b4617afc09db6c/)
-e, --embedfiddle: With this option, the code in the cell will be automatically uploaded to gist.github.com/ as an anonymous gist with several files in it. This files will be used to create an anonymous 'fiddle' on jsfiddle.net. Finally, some links will be printed in the output linking to the gist and the fiddle and an iframe will be created showing the fiddle on jsfiddle.net.
[WARNING] This options may change as the brythonmagic is in active development.
-p, --print option
The following example shows the use of the -p, --print option.
[HINT] The result of the print is shown in the javascript console of your browser.
End of explanation
%%brython -c my_container -p
from browser import document, html
# This will be printed in the js console of your browser
print('Hello world!')
# This will be printed in the container div on the output below
document["my_container"] <= html.P("This text is inside the div",
style = {"backgroundColor": "cyan"})
Explanation: -c, --container option
The following example shows the use of the -c, --container option. The -p option is also used to show you the result. Note the id attribute of the div tag created:
End of explanation
data_list = [1,2,3,4]
data_tuple = (1,2,3,4)
data_dict = {'one': 1, 'two': 2}
data_str = """
Hello
GoodBye
"""
# A numpy array can be converted to a list and you will obtain a brython list
import numpy as np
data_arr = np.empty((3,2))
data_arr = data_arr.tolist()
Explanation: -i, --input option
In this example you can see how the data are passed to brython from python using the -i or --input option. First, we create some data in a regular Python cell.
End of explanation
%%brython -c p2b_data_example -i data_list data_tuple data_dict data_str data_arr
from browser import document, html
document["p2b_data_example"] <= html.P(str(data_list))
document["p2b_data_example"] <= html.P(str(type(data_list)))
document["p2b_data_example"] <= html.P(str(data_tuple))
document["p2b_data_example"] <= html.P(str(type(data_tuple)))
document["p2b_data_example"] <= html.P(str(data_dict))
document["p2b_data_example"] <= html.P(str(type(data_dict)))
document["p2b_data_example"] <= html.P(data_str.replace('Hello', 'Hi'))
document["p2b_data_example"] <= html.P(str(type(data_str)))
document["p2b_data_example"] <= html.P(str(data_arr))
document["p2b_data_example"] <= html.P(str(type(data_arr)))
Explanation: And now, the created data are passed to Brython and used in the Brython code cell. Remember that only Python lists, tuples, dicts and strings are allowed as inputs.
End of explanation
html = """
<div id="paragraph">Hi</div>
"""
%%brython -c html_ex -h html
from browser import document
document["paragraph"].style = {
"color": "yellow",
"fontSize": "100px",
"lineHeight": "150px",
"textAlign": "center",
"backgroundColor": "black"
}
Explanation: -h, --html option
In this example you can see how to create some HTML code in a cell and then use that HTML code in the brython cell. In this way you do not need to create the HTML code via scripting with Brython.
End of explanation
%%brython -s my_dummy_function
def dummy_function(some_text):
print(some_text)
Explanation: -s, --script option
With this option you are creating a reference of the code in the Brython cell (e.g., an id of the HTML script tag created to run the Brython code). So, if you need to use the code of the Brython cell in a future Brython cell you could reference it by its id. Let's see this on an example (the -p option is used to show you the generated code and how the id of the script tag is created):
End of explanation
%%brython -S my_dummy_function
dummy_function('Hi')
Explanation: -S, --scripts option
This option could be used to call code created in a previous Brython code cell using its id (see the -s option above). In the following code cell we will use the dummy_function created in another Brython code cell. The dummy_function was created in a script tag with an id="my_dummy_function".
[HINT] The result of the Brython code cell below is shown in the javascript console of your browser.
End of explanation
%%brython -f
from browser import alert
alert('hello world from jsfiddle!')
Explanation: -f, --fiddle option
With this option, the code in the cell will be automatically uploaded to gist.github.com/ as an anonymous gist with several files in it. These files will be used to create an anonymous 'fiddle' on jsfiddle.net. Finally, some links will be printed in the output linking to the gist and the fiddle.
End of explanation
%%brython -e
from browser import alert
alert('hello world from jsfiddle!')
Explanation: -e, --embedfiddle option
With this option, the code in the cell will be automatically uploaded to gist.github.com/ as an anonymous gist with several files in it. These files will be used to create an anonymous 'fiddle' on jsfiddle.net. Finally, some links will be printed in the output linking to the gist and the fiddle and an iframe will be created showing the fiddle on jsfiddle.net.
End of explanation
%%brython
from browser import alert
alert('Hello world!, Welcome to the brythonmagic!')
Explanation: How to use Brython in the IPython notebook
First step should be to read the brython documentation. You can find the docs here:
http://brython.info/doc/en/index.html?lang=en
In the following section I will show you some dummy examples.
Hello world example
In this example let's see how to pop up an alert window. This could be an standard 'Hello world!' example in the Brython world.
End of explanation
%%brython -c simple_example
from browser import document, html
for i in range(10):
document["simple_example"] <= html.P(i)
Explanation: Simple example, writing some numbers in the div container
In this example we just write inside a <div> ten numbers using a <P> tag for each number.
[HINT] To see the line numbers in the code cell just go to the cell and press <CTRL>-m and then l.
Line 2: We import the libraries to use
Line 4: A for loop :-P
Line 10: We create a P tag and write the value of i inside. Finally, add the P element to the selected div, in this case the div with "simple_example" id attribute.
End of explanation
%%brython -c table
from browser import document, html
table = html.TABLE()
for i in range(10):
color = ['cyan','#dddddd'] * 5
table <= html.TR(
html.TD(str(i+1) + ' x 2 =', style = {'backgroundColor':color[i]}) +
html.TD((i+1)*2, style = {'backgroundColor':color[i]}))
document['table'] <= table
Explanation: A more useful example: A multiplication table
In the following cell we create a multiplication table. First, we create a table tag. We append the table rows and cells (TR and TD tags) and, finally, we append the final table to the div with "table" id attribute.
End of explanation
%%brython -c canvas_example
from browser.timer import request_animation_frame as raf
from browser.timer import cancel_animation_frame as caf
from browser import document, html
from time import time
import math
# First we create a table to insert the elements
table = html.TABLE(cellpadding = 10)
btn_anim = html.BUTTON('Animate', Id="btn-anim", type="button")
btn_stop = html.BUTTON('Stop', Id="btn-stop", type="button")
cnvs = html.CANVAS(Id="raf-canvas", width=256, height=256)
table <= html.TR(html.TD(btn_anim + btn_stop) +
html.TD(cnvs))
document['canvas_example'] <= table
# Now we access the canvas context
ctx = document['raf-canvas'].getContext( '2d' )
# And we create several functions in charge to animate and stop the draw animation
toggle = True
def draw():
t = time() * 3
x = math.sin(t) * 96 + 128
y = math.cos(t * 0.9) * 96 + 128
global toggle
if toggle:
toggle = False
else:
toggle = True
ctx.fillStyle = 'rgb(200,200,20)' if toggle else 'rgb(20,20,200)'
ctx.beginPath()
ctx.arc( x, y, 6, 0, math.pi * 2, True)
ctx.closePath()
ctx.fill()
def animate(i):
global id
id = raf(animate)
draw()
def stop(i):
global id
print(id)
caf(id)
document["btn-anim"].bind("click", animate)
document["btn-stop"].bind("click", stop)
Explanation: Let's add some animation using HTML5 canvas technology...
In the following example we draw a shape using the HTML5 canvas. Also, we add some controls to stop and animate the shape. The example has been adapted from the javascript example available here.
End of explanation
from brythonmagic import load_js_lib
load_js_lib("http://d3js.org/d3.v3.js")
Explanation: Interaction with other javascript libraries: D3.js
In Brython there is a javascript library that allows to access objects available in the javascript namespace. In this example we are using a javascript object (D3.js library) from Brython.
So, in order to allow Brython to access to D3 first you should load the D3 library.
End of explanation
%%brython -c simple_d3
from browser import window, document, html
d3 = window.d3
container = d3.select("#simple_d3")
svg = container.append("svg").attr("width", 100).attr("height", 100)
circle1 = svg.append("circle").style("stroke", "gray").style("fill", "gray").attr("r", 40)
circle1.attr("cx", 50).attr("cy", 50).attr("id", "mycircle")
circle2 = svg.append("circle").style("stroke", "gray").style("fill", "white").attr("r", 20)
circle2.attr("cx", 50).attr("cy", 50)
def over(ev):
document["mycircle"].style.fill = "blue"
def out(ev):
document["mycircle"].style.fill = "gray"
document["mycircle"].bind("mouseover", over)
document["mycircle"].bind("mouseout", out)
Explanation: Now, we can access D3 objects(see example below). In the result you can see how the circle change its color when the mouse is over the circle.
End of explanation
%%brython -c manipulating
from browser import document, html
def hide(ev):
divs = document.get(selector = 'div.input')
for div in divs:
div.style.display = "none"
def show(ev):
divs = document.get(selector = 'div.input')
for div in divs:
div.style.display = "inherit"
document["manipulating"] <= html.BUTTON('Hide code cells', Id="btn-hide")
document["btn-hide"].bind("click", hide)
document["manipulating"] <= html.BUTTON('Show code cells', Id="btn-show")
document["btn-show"].bind("click", show)
Explanation: Manipulating the IPython notebook
An example to hide or show the code cells using a button.
End of explanation
from random import randint
n = 100
x = [randint(0,800) for i in range(n)]
y = [randint(0,600) for i in range(n)]
r = [randint(25,50) for i in range(n)]
red = [randint(0,255) for i in range(n)]
green = [randint(0,255) for i in range(n)]
blue = [randint(0,255) for i in range(n)]
Explanation: A more complete d3 example calculating things in Python and drawing results in Brython using D3.js
A more complete D3 example. In this case, first we create some data in Python.
End of explanation
%%brython -c other_d3 -i x y r red green blue
from browser import window, document, html
d3 = window.d3
WIDTH = 800
HEIGHT = 600
container = d3.select("#other_d3")
svg = container.append("svg").attr("width", WIDTH).attr("height", HEIGHT)
class AddShapes:
def __init__(self, x, y, r, red, green, blue, shape = "circle", interactive = True):
self.shape = shape
self.interactive = interactive
self._color = "gray"
self.add(x, y, r, red, green, blue)
def over(self, ev):
self._color = ev.target.style.fill
document[ev.target.id].style.fill = "white"
def out(self, ev):
document[ev.target.id].style.fill = self._color
def add(self, x, y, r, red, green, blue):
for i in range(len(x)):
self.idx = self.shape + '_' + str(i)
self._color = "rgb(%s,%s,%s)" % (red[i], green[i], blue[i])
shaped = svg.append(self.shape).style("stroke", "gray").style("fill", self._color).attr("r", r[i])
shaped.attr("cx", x[i]).attr("cy", y[i]).attr("id", self.idx)
if self.interactive:
document[self.idx].bind("mouseover", self.over)
document[self.idx].bind("mouseout", self.out)
plot = AddShapes(x, y, r, red, green, blue, interactive = True)
Explanation: And now, the data is passed to Brython to be used in a D3 plot. In this case, the D3.js library is already loaded so it is not necessary to load it.
End of explanation
from brythonmagic import load_js_lib
load_js_lib("http://cdnjs.cloudflare.com/ajax/libs/openlayers/2.11/OpenLayers.js")
Explanation: Mapping with Python in the IPython notebook using OpenLayers?
In the following example we will use OpenLayers to center a map in a specific location, with a zoom and a projection and then we will draw some vector points around the location.
As before, first we should load the OpenLayers.js library.
End of explanation
%%brython -c ol_map
from browser import document, window
## Div layout
document['ol_map'].style.width = "800px"
document['ol_map'].style.height = "400px"
document['ol_map'].style.border = "1px solid black"
OpenLayers = window.OpenLayers
## Map
_map = OpenLayers.Map.new('ol_map')
## Addition of an OpenStreetMap layer
_layer = OpenLayers.Layer.OSM.new('Simple OSM map')
_map.addLayer(_layer)
## Map centered on Lon, Lat = (-3.671416, 40.435897) and a zoom = 14
## with a projection = "EPSG:4326" (Lat-Lon WGS84)
_proj = OpenLayers.Projection.new("EPSG:4326")
_center = OpenLayers.LonLat.new(-3.671416, 40.435897)
_center.transform(_proj, _map.getProjectionObject())
_map.setCenter(_center, 10)
## Addition of some points around the defined location
lons = [-3.670, -3.671, -3.672, -3.672, -3.672,
-3.671, -3.670, -3.670]
lats = [40.435, 40.435, 40.435, 40.436, 40.437,
40.437, 40.437, 40.436]
points_layer = OpenLayers.Layer.Vector.new("Point Layer")
for lon, lat in zip(lons, lats):
point = OpenLayers.Geometry.Point.new(lon, lat)
point.transform(_proj, _map.getProjectionObject())
_feat = OpenLayers.Feature.Vector.new(point)
points_layer.addFeatures([_feat])
_map.addLayer(points_layer)
# Add a control for the layers
layer_switcher= OpenLayers.Control.LayerSwitcher.new({})
_map.addControl(layer_switcher)
Explanation: And now we can create a map.
End of explanation
load_js_lib("http://cdnjs.cloudflare.com/ajax/libs/raphael/2.1.2/raphael-min.js")
Explanation: Using Raphaël.js
A dummy example using raphaël.js library.
As usual, first we should include the library:
End of explanation
%%brython -c raphael_ex
from browser import window
from javascript import JSObject
Raphael = window.Raphael
paper = JSObject(Raphael("raphael_ex", 400, 400))
#Draw rectagle
rect = paper.rect(1,1,398,398)
rect.attr("stroke", "black")
#Draw orbits
for rot in range(90,280,60):
ellipse = paper.ellipse(200, 200, 180, 50)
ellipse.attr("stroke", "gray")
ellipse.rotate(rot)
#Draw nucleus
nucleus = paper.circle(200,200,40)
nucleus.attr("fill", "black")
# Draw electrons
electron = paper.circle(200, 20, 10)
electron.attr("fill", "red")
electron = paper.circle(44, 290, 10)
electron.attr("fill", "yellow")
electron = paper.circle(356, 290, 10)
electron.attr("fill", "blue")
Explanation: And now let's make a dumb example using JSObject.
End of explanation
%%brython
from browser import doc, html
def show_cell_number(on = True):
cells = doc.get(selector = '.input_prompt')
for i, cell in enumerate(cells):
if on:
if 'In' in cell.html and '<br>' not in cell.html:
cell.html += "<br>cell #" + str(i)
else:
if 'In' in cell.text:
cell.html = cell.html.split('<br>')[0]
show_cell_number(on = True)
Explanation: Include the cell number for each cell
Cells start at 0 and every cell (markdown, headings, code, ...) has a number. If we want to re-run some cells programmatically it is useful to know the cell numbers to identify them. You can remove the cell numbers using show_cell_number(on = False):
End of explanation
%%brython
from javascript import JSObject
from browser import window
IPython = window.IPython
nb = IPython.notebook
# This is used to prevent an infinite loop
this_cell = nb.get_selected_index()
for i in range(1,10): # Ths will run cells 1 to 9 (the beginning of the nb)
cell = nb.get_cell(i)
if cell.cell_type == "code" and i != this_cell:
cell.execute()
Explanation: Running Python cells as a loop
Imagine you have several code cells and you just want to modify some data and re-run those cells in a loop, without having to merge all their code into one big cell.
End of explanation
%%brython
from javascript import JSObject
from browser import window
IPython = window.IPython
nb = IPython.notebook
this_cell = nb.get_selected_index()
total_cells = nb.ncells()
code = ""
first_cell = True
for i in range(total_cells):
cell = nb.get_cell(i)
if cell.cell_type == "code" and i != this_cell:
if first_cell:
code += "# This cell has been generated automatically using a brython script\n\n"
code += "# code from cell " + str(i) + '\n'
first_cell = False
else:
code += "\n\n\n# code from cell " + str(i) + '\n'
code += cell.get_text() + '\n'
nb.insert_cell_below('code')
new_cell = nb.get_cell(this_cell + 1)
new_cell.set_text(code)
Explanation: Get the code of all the cells and create a new cell with the code
If you want to compile all the code used in a notebook you can use this recipe (<span style="color: red; background-color: yellow;">use ctrl + Enter to run the cell if you want to avoid unexpected behaviour</span>):
End of explanation
%%brython -s styling
from browser import doc, html
# Changing the background color
body = doc[html.BODY][0]
body.style = {"backgroundColor": "#99EEFF"}
# Changing the color of the imput prompt
inps = body.get(selector = ".input_prompt")
for inp in inps:
inp.style = {"color": "blue"}
# Changin the color of the output cells
outs = body.get(selector = ".output_wrapper")
for out in outs:
out.style = {"backgroundColor": "#E0E0E0"}
# Changing the font of the text cells
text_cells = body.get(selector = ".text_cell")
for cell in text_cells:
cell.style = {"fontFamily": "Courier New", Courier, monospace,
"fontSize": "20px"}
# Changing the color of the code cells.
code_cells = body.get(selector = ".CodeMirror")
for cell in code_cells:
cell.style = {"backgroundColor": "#D0D0D0"}
Explanation: Styling the nb
Let's modify the look of the notebook a little bit. Warning: The result will be very ugly...
End of explanation |
4,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(NVM)=
1.3 Normas vectoriales y matriciales
```{admonition} Notas para contenedor de docker
Step1: Norma $2$
Step2: Norma $1$
Step3: Norma $\infty$
Step4: ```{admonition} Observación
Step5: en este caso $D=\left[\begin{array}{cc} \frac{1}{25} &0\ 0 &\frac{1}{9} \end{array}\right ] = \left[\begin{array}{cc} \frac{1}{d_1} &0\ 0 &\frac{1}{d_2} \end{array}\right ]$
```{admonition} Definiciones
Una matriz $A$ es simétrica si $A = A^T$, con $A^T$ la transpuesta de $A$.
Una matriz $A$ es semidefinida positiva si $x^TAx \geq 0, \forall x \in \mathbb{R}^n - {0}$. Si se cumple de forma estricta la desigualdad entonces $A$ es definida positiva.
La norma cuadrática de $z$ con matriz $A$ se define como $||z||_A = \sqrt{z^TAz}$ con $A$ matriz simétrica definida positiva.
```
(NMAT)=
Normas matriciales
Inducidas
De las normas matriciales más importantes se encuentran las inducidas por normas vectoriales. Estas normas matriciales se definen en términos de los vectores en $\mathbb{R}^n$ a los que se les aplica la multiplicación $Ax$
Step6: ```{admonition} Ejercicio | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: (NVM)=
1.3 Vector and matrix norms
```{admonition} Notes for the docker container:
Docker command to run this note locally:
note: replace <ruta a mi directorio> with the directory path you want to map to /datos inside the docker container, and <versión imagen de docker> with the most recent version listed in the documentation.
docker run --rm -v <ruta a mi directorio>:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:<versión imagen de docker>
password for jupyterlab: qwerty
Stop the docker container:
docker stop jupyterlab_optimizacion
Documentation of the docker image palmoreck/jupyterlab_optimizacion:<versión imagen de docker> at this link.
```
Note generated from this link
```{admonition} By the end of this note the reader:
:class: tip
Will know the definitions of some of the vector and matrix norms most used in mathematics for measuring errors, residuals, and in general closeness to quantities of interest.
Will understand the interpretation of a matrix norm.
```
A norm defines a measure of distance on a set and gives notions of size, neighborhood, convergence and continuity.
Vector norms
Let $\mathbb{R}^n$ be the set of $n$-tuples, column vectors, or arrays of order $1$, that is:
$$x \in \mathbb{R}^n \iff x = \left[\begin{array}{c} x_1\\ x_2\\ \vdots\\ x_n \end{array} \right] \text{ with } x_i \in \mathbb{R}$$
A vector norm on $\mathbb{R}^n$ is a function $g: \mathbb{R}^n \rightarrow \mathbb{R}$ that satisfies the following properties:
$g$ is nonnegative: $g(x) \geq 0 \forall x \in \mathbb{R}^n$.
$g$ is definite: $g(x) = 0 \iff x = 0$.
$g$ satisfies the triangle inequality:
$$g(x+y) \leq g(x) + g(y) \forall x,y \in \mathbb{R}^n.$$
$g$ is homogeneous: $g(\alpha x)=|\alpha|g(x), \forall \alpha \in \mathbb{R}, \forall x \in \mathbb{R}^n$.
Notation: $g(x) = ||x||$.
```{admonition} Definition
A set $V \neq \emptyset$ on which the operations $(+, \cdot)$ have been defined is called a vector space over $\mathbb{R}$ if it satisfies the following properties $\forall x, y, z \in V$, $\forall a,b \in \mathbb{R}$:
x + (y + z) = (x + y) + z
x + y = y + x
$\exists 0 \in V$ such that $x + 0 = 0 + x = x$ $\forall x \in V$.
$\forall x \in V$ $\exists -x \in V$ such that $x + (-x) = 0$.
a(bx) = (ab)x.
$1x = x$ with $1 \in \mathbb{R}$.
$a(x + y) = ax + ay$.
$(a+b)x = ax + bx$.
```
```{admonition} Comments and properties
A norm is a generalization of the absolute value on $\mathbb{R}$: $|x|, x \in \mathbb{R}.$
A vector space with a norm defined on it is called a normed vector space.
A norm is a measure of the length of a vector.
With a norm we can define concepts such as the distance between vectors: $x,y \in \mathbb{R}^n: \text{dist}(x,y) = ||x-y||$.
There are several norms on $\mathbb{R}^n$, the most common being:
The $\mathcal{l}_2$, Euclidean, or $2$-norm: $||x||_2$.
The $\mathcal{l}_1$ norm or $1$-norm: $||x||_1$.
The $\infty$ norm, also called the Chebyshev or infinity norm: $||x||_\infty$.
The norms above belong to a family parameterized by a constant $p, p \geq 1$, called the $\mathcal{l}_p$ norm:
$$ ||x||_p = \left(\displaystyle \sum_{i=1}^n|x_i|^p \right )^{1/p}.$$
A result for $x \in \mathbb{R}^n$ is the equivalence of norms:
$$\exists \alpha, \beta > 0 \text{ such that }: \alpha||x||_a \leq ||x||_b \leq \beta ||x||_a \forall x \in \mathbb{R}^n$$
where $||\cdot||_a, ||\cdot||_b$ are any norms on $\mathbb{R}^n$. By this property, if convergence holds in the norm $||\cdot||_a$ then it also holds in the norm $||\cdot||_b$.
```
(EGNP)=
Examples of plots of norms in the plane.
End of explanation
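Before plotting, here is a small numerical sketch of the $\mathcal{l}_p$ norms and of the equivalence of norms. This cell is an addition for illustration only; the vector chosen and the $1$ and $\sqrt{n}$ constants (the well-known bounds relating the infinity and 2 norms) are assumptions of the example, not part of the original note.
# Illustrative check of the lp norms and a norm-equivalence bound
import numpy as np
x = np.array([3.0, -4.0, 1.0])
for p in [1, 2, 3, np.inf]:
    # np.linalg.norm with a vector computes (sum |x_i|^p)^(1/p)
    print("p =", p, " ||x||_p =", np.linalg.norm(x, p))
# ||x||_inf <= ||x||_2 <= sqrt(n) * ||x||_inf
n = x.size
print(np.linalg.norm(x, np.inf) <= np.linalg.norm(x, 2) <= np.sqrt(n) * np.linalg.norm(x, np.inf))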
f=lambda x: np.sqrt(x[:,0]**2 + x[:,1]**2) #definición de norma2
density=1e-5
density_p=int(2.5*10**3)
x=np.arange(-1,1,density)
y1=np.sqrt(1-x**2)
y2=-np.sqrt(1-x**2)
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<1
x_p_subset=x_p[ind]
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.title('Puntos en el plano que cumplen $||x||_2 < 1$')
plt.grid()
plt.show()
Explanation: Norm $2$: $\{ x \in \mathbb{R}^2 \text{ such that } ||x||_2 < 1\}$
End of explanation
f=lambda x:np.abs(x[:,0]) + np.abs(x[:,1]) #definición de norma1
density=1e-5
density_p=int(2.5*10**3)
x1=np.arange(0,1,density)
x2=np.arange(-1,0,density)
y1=1-x1
y2=1+x2
y3=x1-1
y4=-1-x2
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
plt.plot(x1,y1,'b',x2,y2,'b',x1,y3,'b',x2,y4,'b')
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.title('Puntos en el plano que cumplen $||x||_1 \leq 1$')
plt.grid()
plt.show()
Explanation: Norm $1$: $\{ x \in \mathbb{R}^2 \text{ such that } ||x||_1 \leq 1\}$
End of explanation
f=lambda x:np.max(np.abs(x),axis=1) #definición de norma infinito
point1 = (-1, -1)
point2 = (-1, 1)
point3 = (1, 1)
point4 = (1, -1)
point5 = point1
arr = np.row_stack((point1, point2,
point3, point4,
point5))
density_p=int(2.5*10**3)
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.plot(arr[:,0], arr[:,1])
plt.title('Puntos en el plano que cumplen $||x||_{\infty} \leq 1$')
plt.grid()
plt.show()
Explanation: Norm $\infty$: $\{ x \in \mathbb{R}^2 \text{ such that } ||x||_\infty \leq 1\}$
End of explanation
d1_inv=1/5
d2_inv=1/3
f=lambda x: np.sqrt((d1_inv*x[:,0])**2 + (d2_inv*x[:,1])**2) #definición de norma2
density=1e-5
density_p=int(2.5*10**3)
x=np.arange(-1/d1_inv,1/d1_inv,density)
y1=1.0/d2_inv*np.sqrt(1-(d1_inv*x)**2)
y2=-1.0/d2_inv*np.sqrt(1-(d1_inv*x)**2)
x_p=np.random.uniform(-1/d1_inv,1/d1_inv,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
plt.plot(x,y1,'b',x,y2,'b')
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.')
plt.title('Puntos en el plano que cumplen $||x||_D \leq 1$')
plt.grid()
plt.show()
Explanation: ```{admonition} Observation
:class: tip
The $\infty$ norm arises in the family of p-norms as a limit:
$$||x||_\infty = \displaystyle \lim_{p \rightarrow \infty} ||x||_p.$$
```
```{admonition} Comment
For the $\mathcal{l}_2$ or Euclidean norm $||x||_2$ we have a very important inequality, the Cauchy-Schwarz inequality:
$$|x^Ty| \leq ||x||_2||y||_2 \forall x,y \in \mathbb{R}^n$$
which relates the standard inner product for $x,y \in \mathbb{R}^n$: $<x,y> = x^Ty = \displaystyle \sum_{i=1}^nx_iy_i$ to the $\mathcal{l}_2$ norm of $x$ and the $\mathcal{l}_2$ norm of $y$. The above is also used to define the angle (without sign, because of the interval in which $\cos^{-1}$ lies) between $x,y$:
$$\measuredangle x,y = \cos ^{-1}\left(\frac{x^Ty}{||x||_2||y||_2} \right )$$
for $\cos^{-1}(u) \in [0,\pi]$, and $x,y$ are called orthogonal if $x^Ty=0$. Note that $||x||_2 = \sqrt{x^Tx}$.
```
Example
Matrices are also used to define norms.
```{admonition} Definition
Recall that a matrix is a $2$-dimensional array of data, or an array of order $2$. The notation $A \in \mathbb{R}^{m\times n}$ is used to denote:
$$A = \left[\begin{array}{cccc}
a_{11} &a_{12}&\dots&a_{1n}\\
a_{21} &a_{22}&\dots&a_{2n}\\
\vdots &\vdots& \vdots&\vdots\\
a_{n1} &a_{n2}&\dots&a_{nn}\\
\vdots &\vdots& \vdots&\vdots\\
a_{m-11} &a_{m-12}&\dots&a_{m-1n}\\
a_{m1} &a_{m2}&\dots&a_{mn}
\end{array}
\right]
$$
with $a_{ij} \in \mathbb{R} \forall i=1,\dots,m, j=1,\dots,n$. The following notations are used to describe the matrix $A$:
$A=(a_1,\dots a_n), a_j \in \mathbb{R}^m (=\mathbb{R}^{m\times1}) \forall j=1,\dots,n$.
$A=\left ( \begin{array}{c} a_1^T\\ \vdots\\ a_m^T \end{array} \right ), a_i \in \mathbb{R}^n (=\mathbb{R}^{n\times1}) \forall i=1,\dots,m$.
The multiplication of an $m\times n$ matrix by a vector is defined as:
$$y=Ax=\displaystyle \sum_{j=1}^n a_jx_j$$
with $a_j \in \mathbb{R}^m, x \in \mathbb{R}^n$. Note that $x \in \mathbb{R}^n, Ax \in \mathbb{R}^m$.
```
An example of a weighted $2$-norm is: $\{x \in \mathbb{R}^2 \text{ such that } ||x||_D \leq 1, ||x||_D = ||Dx||_2, \text{ with } D \text{ a diagonal matrix with positive entries}\}$:
End of explanation
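A brief sketch checking the Cauchy-Schwarz inequality, the angle formula and the quadratic norm $||z||_A$ defined above. This cell is an addition for illustration; the vectors and the symmetric positive definite matrix used here are arbitrary choices, not part of the original note.
# Illustrative check of Cauchy-Schwarz, the angle formula and ||z||_A
import numpy as np
x = np.array([1.0, 2.0, -1.0])
y = np.array([2.0, 0.5, 3.0])
lhs = np.abs(x @ y)
rhs = np.linalg.norm(x, 2) * np.linalg.norm(y, 2)
print("|x^T y| <= ||x||_2 ||y||_2 :", lhs <= rhs)
print("angle between x and y (radians):", np.arccos((x @ y) / rhs))
# quadratic norm with an assumed symmetric positive definite matrix A
A_spd = np.array([[2.0, 0.5], [0.5, 1.0]])
z = np.array([1.0, -1.0])
print("||z||_A =", np.sqrt(z @ A_spd @ z))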
A=np.array([[1,2],[0,2]])
density=1e-5
x1=np.arange(0,1,density)
x2=np.arange(-1,0,density)
x1_y1 = np.column_stack((x1,1-x1))
x2_y2 = np.column_stack((x2,1+x2))
x1_y3 = np.column_stack((x1,x1-1))
x2_y4 = np.column_stack((x2,-1-x2))
apply_A = lambda vec : np.transpose([email protected](vec))
A_to_vector_1 = apply_A(x1_y1)
A_to_vector_2 = apply_A(x2_y2)
A_to_vector_3 = apply_A(x1_y3)
A_to_vector_4 = apply_A(x2_y4)
plt.subplot(1,2,1)
plt.plot(x1_y1[:,0],x1_y1[:,1],'b',
x2_y2[:,0],x2_y2[:,1],'b',
x1_y3[:,0],x1_y3[:,1],'b',
x2_y4[:,0],x2_y4[:,1],'b')
e1 = np.array([[0,0],
[1, 0]])
e2 = np.array([[0, 0],
[0, 1]])
plt.plot(e2[:,0], e2[:,1],'g',
e1[:,0], e1[:,1],'b')
plt.xlabel('Vectores con norma 1 menor o igual a 1')
plt.grid()
plt.subplot(1,2,2)
plt.plot(A_to_vector_1[:,0],A_to_vector_1[:,1],'b',
A_to_vector_2[:,0],A_to_vector_2[:,1],'b',
A_to_vector_3[:,0],A_to_vector_3[:,1],'b',
A_to_vector_4[:,0],A_to_vector_4[:,1],'b')
A_to_vector_e2 = apply_A(e2)
plt.plot(A_to_vector_e2[:,0],A_to_vector_e2[:,1],'g')
plt.grid()
plt.title('Efecto de la matriz A sobre los vectores con norma 1 menor o igual a 1')
plt.show()
print(np.linalg.norm(A,1))
Explanation: in this case $D=\left[\begin{array}{cc} \frac{1}{25} &0\\ 0 &\frac{1}{9} \end{array}\right ] = \left[\begin{array}{cc} \frac{1}{d_1} &0\\ 0 &\frac{1}{d_2} \end{array}\right ]$
```{admonition} Definitions
A matrix $A$ is symmetric if $A = A^T$, with $A^T$ the transpose of $A$.
A matrix $A$ is positive semidefinite if $x^TAx \geq 0, \forall x \in \mathbb{R}^n - \{0\}$. If the inequality holds strictly then $A$ is positive definite.
The quadratic norm of $z$ with matrix $A$ is defined as $||z||_A = \sqrt{z^TAz}$ with $A$ a symmetric positive definite matrix.
```
(NMAT)=
Matrix norms
Induced norms
Among the most important matrix norms are those induced by vector norms. These matrix norms are defined in terms of the vectors in $\mathbb{R}^n$ to which the multiplication $Ax$ is applied:
Given vector norms $||\cdot||_{(n)}, ||\cdot||_{(m)}$ on $\mathbb{R}^n$ and $\mathbb{R}^m$ respectively, the induced matrix norm $||A||_{(m,n)}$ for $A \in \mathbb{R}^{m \times n}$ is the smallest number $C$ for which the inequality:
$$||Ax||_{(m)} \leq C||x||_{(n)}$$
holds $\forall x \in \mathbb{R}^n$. That is:
$$||A||_{(m,n)} = \displaystyle \sup_{x \in \mathbb{R}^n-\{0\}} \frac{||Ax||_{(m)}}{||x||_{(n)}}$$
See {ref}Nota sobre sup e inf <SI> for the definition of $\sup$.
```{admonition} Comments
$||A||_{(m,n)}$ represents the maximum factor by which $A$ can change the size of $x$ over all vectors $x \in \mathbb{R}^n$; it is a measure of a kind of worst case stretch factor.
Defined this way, the norm $||\cdot||_{(m,n)}$ is the matrix norm induced by the vector norms $||\cdot||_{(m)}, ||\cdot||_{(n)}$.
The following definitions are equivalent:
$$||A||_{(m,n)} = \displaystyle \sup_{x \in \mathbb{R}^n-\{0\}} \frac{||Ax||_{(m)}}{||x||_{(n)}} = \displaystyle \sup_{||x||_{(n)} \leq 1} \frac{||Ax||_{(m)}}{||x||_{(n)}} = \displaystyle \sup_{||x||_{(n)}=1} ||Ax||_{(m)}$$
```
Example
The matrix $A=\left[\begin{array}{cc} 1 &2\\ 0 &2 \end{array}\right ]$ maps $\mathbb{R}^2$ to $\mathbb{R}^2$; in particular:
$A$ maps $e_1 = \left[\begin{array}{c} 1 \\ 0 \end{array}\right ]$ to the column $a_1 = \left[\begin{array}{c} 1 \\ 0 \end{array}\right ]$ of $A$.
$A$ maps $e_2 = \left[\begin{array}{c} 0 \\ 1 \end{array}\right ]$ to the column $a_2 = \left[\begin{array}{c} 2 \\ 2 \end{array}\right ]$ of $A$.
Considering $||A||_p := ||A||_{(p,p)}$ with $p=1, p=2, p=\infty$ we have:
<img src="https://dl.dropboxusercontent.com/s/3fqz9uspfwdurjf/normas_matriciales.png?dl=0" height="500" width="500">
```{admonition} Observation
:class: tip
Looking at the second plot we have the following statement: the action of a matrix on a circle is an ellipse with semi-axis lengths equal to $|d_i|$. In general the action of a matrix on a hypersphere is a hyperellipse. Therefore the unit vectors in $\mathbb{R}^n$ that are most amplified by the action of a diagonal matrix $D \in \mathbb{R}^{m\times n}$ with entries $d_i$ are those mapped to the semi-axes of a hyperellipse in $\mathbb{R}^m$ of length equal to $\max{|d_i|}$, and thus: if $D$ is a diagonal matrix with entries $d_i$ then $||D||_2 = \displaystyle \max_{i=1,\dots,m}{|d_i|}$.
```
Example
End of explanation
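The sup definition above can also be illustrated numerically. The sketch below is an addition for illustration only: it samples random unit vectors, so it only approximates the supremum from below; the exact induced 2-norm of the same matrix is computed in the next cell with np.linalg.norm.
# Approximate the induced 2-norm as a sup of ||Ax||_2 over sampled unit vectors
import numpy as np
A = np.array([[1, 2], [0, 2]])
rng = np.random.default_rng(0)
X = rng.normal(size=(100000, 2))
X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit vectors in the 2-norm
estimate = np.max(np.linalg.norm(X @ A.T, axis=1))  # each row of X @ A.T is A x
print("sampled estimate of ||A||_2:", estimate)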
print(np.linalg.norm(A,2))
_,s,_ = np.linalg.svd(A)
print(np.max(s))
Explanation: ```{admonition} Exercise
:class: tip
Obtain the other two plots with Python using the $2$-norm and the $\infty$-norm. For the $2$-norm case the vector in blue is given by the singular value decomposition (SVD) of A. Specifically, it is the first column of the matrix $U$ multiplied by the first singular value. In the example this results in:
$$\sigma_1U[:,0] \approx 2.9208*\left[ \begin{array}{c} 0.74967 \\ 0.66180 \end{array} \right] \approx \left[\begin{array}{c} 2.189\\ 1.932 \end{array} \right] $$
and the vector $v$ that will be multiplied by the matrix $A$ is the first column of $V$, given by:
$$V[:,0] \approx \left[ \begin{array}{c} 0.2566\\ 0.9664 \end{array} \right] $$
```
Computational results that can be proved
1. $||A||_1 = \displaystyle \max_{j=1,\dots,n}\sum_{i=1}^n|a_{ij}|$.
2. $||A||_\infty = \displaystyle \max_{i=1,\dots,n}\sum_{j=1}^n|a_{ij}|$.
3. $\begin{eqnarray}||A||_2 = \sqrt{\lambda_{\text{max}}(A^TA)} &=& \max \left \{\sqrt{\lambda}\in \mathbb{R} | \lambda \text{ is an eigenvalue of } A^TA \right \} \nonumber \\ &=& \max \left \{ \sigma \in \mathbb{R} | \sigma \text{ is a singular value of A } \right \} \nonumber \\ &=& \sigma_{\text{max}}(A) \end{eqnarray}$.
for example, for the matrix above we have:
End of explanation |
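A quick numerical check of results 1 and 2 for the same matrix A. This cell is an addition for illustration: the 1-norm should equal the maximum absolute column sum and the infinity-norm the maximum absolute row sum.
# Verify the column-sum and row-sum characterizations of ||A||_1 and ||A||_inf
import numpy as np
A = np.array([[1, 2], [0, 2]])
print(np.max(np.sum(np.abs(A), axis=0)), np.linalg.norm(A, 1))       # max column sum
print(np.max(np.sum(np.abs(A), axis=1)), np.linalg.norm(A, np.inf))  # max row sum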
4,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Queries
Marvin Queries are a tool designed to remotely query the MaNGA dataset in global and local galaxy properties, and retrieve only the results you want. Let's learn the basics of how to construct a query and also test drive some of the more advanced features that are unique to the Marvin-tools version of querying.
Step1: The Marvin Query object allows you to specify a string search condition with which you want to look for results. It will construct the necessary SQL syntax for you, send it to the database at Utah using the Marvin API, and return the results. The Query accepts as a keyword argument search_filter.
Let's try searching for all galaxies with a redshift < 0.1.
Step2: The above string search condition is a pseudo-natural language format. Natural language in that you type what you mean to say, and pseudo because it still must be formatted in the standard SQL where condition syntax. This syntax generally takes the form of parameter_name operand value.
Marvin is smart enough to figure out which database table a parameter_name belongs to if and only if that name is a unique parameter name. If not you must specify the database table name along with the parameter name, in the form of table.parameter_name. Most MaNGA global properties come from the NASA-Sloan Atlas (NSA) catalog used for target selection. The database table name thus is nsa. So the full parameter_name for redshift is nsa.z.
If a parameter name is not unique, then Marvin will return an error asking you to fine-tune your parameter name by using the full parameter table.parameter_name
Step3: Running the query produces a Marvin Results object (r)
Step4: For number of results < 1000, Marvin will return the entire set of results. For queries that return > 1000, Marvin will paginate the results and only return the first 100, by default. (This can be modified with the limit keyword).
Step5: It can be useful for informational and debugging purposes to see the raw SQL of your query, and your query runtime. If your query times out or crashes, the Marvin team will need these pieces of info to assess anything.
Step6: Query results are stored in r.results. This is a Python list object, and be indexed like an array. Since we have 100 results, let's only look at 10 for brevity.
Step7: We will learn how to use the features of our Results object a little bit later, but first let's revise our search to see how more complex search queries work.
Multiple Search Criteria
Let's add to our previous search to find only galaxies with M$\star$ > 3 $\times$ 10$^{11}$ M$\odot$.
Let's use the Sersic profile determination for stellar mass, which is the sersic_mass parameter of the nsa table, so its full search parameter designation will be nsa.sersic_mass. Since it's unique, you can also just use sersic_mass.
Adding multiple search criteria is as easy as writing it how you want it. In this case, we want to AND the two criteria. You can also OR, and NOT criteria.
Step8: Compound Search Statements
Let's say we are interested in galaxies with redshift < 0.1 and stellar mass > 3e11 or 19-fiber IFUs with an NSA sersic index < 2. We can compound multiple criteria together using parantheses. Use parantheses to help set the order of precedence. Without parantheses, the order is NOT > AND > OR.
To find 19 fiber IFUs, we'll use the name parameter of the ifu table, which means the full search parameter is ifu.name. However, ifu.name returns the IFU design name, such as 1901, so we need to to set the value to 19*, which acts as a wildcard.
Step9: Returning Additional Parameters
Often you want to run a query and return parameters that you didn't explicitly search on. For instance, you want to find galaxies below a redshift of 0.1 and would like to know their RA and DECs.
This is as easy as specifying the return_params keyword option in Query with either a string (for a single parameter) or a list of strings (for multiple parameters).
Step10: Local (Sub-Spaxel) Queries (... or DAP Zonal Queries)
So far we have seen queries on global galaxy properties. These queries returned a list of galaxies satisfying the search criteria. We can also perform queries on spaxel regions within galaxies.
Let's find all spaxels from galaxies with a redshift < 0.1 that have H-alpha emission line flux > 30.
DAP properties are in a table called spaxelprop. The DAP-derived H-alpha emission line gaussian flux is called emline_gflux_ha_6564. Since this parameter is unique, you can either specify emline_gflux_ha_6564 or spaxelprop.emline_gflux_ha_6564
Step11: Spaxel queries will return a list of all spaxels satisfying your criteria. By default spaxel queries will return the galaxy information, and spaxel x and y.
Step12: Once you have a set of query Results, you can easily convert your results into Marvin objects in your workflow. Depending on your result parameters, you can convert to Marvin Cubes, Maps, Spaxels, ModelCubes, or RSS. Let's convert our Results to Marvin Cubes. Note
Step13: or since our results are from a spaxel query, we can convert to Marvin Spaxels
Step14: You can also convert your query results into other formats like an Astropy Table, or FITS
Step15: A note on Table and Name shortcuts
In Queries you must specify a parameter_name or table.parameter_name. However to make it a bit easier, we have created table shortcuts and parameter name shortcuts for a few parameters. (more to be added..)
ifu.name = ifudesign.name
haflux = emline_gflux_ha_6564
g_r = nsa.elpetro_mag_g_r
Retrieving Available Search Parameters
There are many parameters to search with. You can retrieve a list of available parameters to query. Please note that while currently many parameters in the list can technically be queried on, they have not been thoroughly tested to work, nor may they make any sense to query on. We cannot guarantee what will happen. If you find a parameter that should be queryable and does not work, please let us know. | Python Code:
# Python 2/3 compatibility
from __future__ import print_function, division, absolute_import
# import matplolib just in case
import matplotlib.pyplot as plt
# this line tells the notebook to plot matplotlib static plots in the notebook itself
%matplotlib inline
# this line does the same thing but makes the plots interactive
#%matplotlib notebook
# Import the config and set to remote. Let's query MPL-5 data
from marvin import config
# by default the mode is set to 'auto', but let's set it explicitly to remote.
config.mode = 'remote'
# by default, Marvin uses the latest MPL but let's set it explicitly to MPL-5
config.setRelease('MPL-5')
# By default the API will query using the Utah server, at api.sdss.org/marvin2. See the config.sasurl attribute.
config.sasurl
# If you are using one of the two local ngrok Marvins, you need to switch the SAS Url to one of our ngrok ids.
# Uncomment out the following lines and replace the ngrokid with the provided string
#ngrokid = 'ngrok_number_string'
#config.switchSasUrl('local', ngrokid=ngrokid)
#print(config.sasurl)
# this is the Query tool
from marvin.tools.query import Query
Explanation: Queries
Marvin Queries are a tool designed to remotely query the MaNGA dataset in global and local galaxy properties, and retrieve only the results you want. Let's learn the basics of how to construct a query and also test drive some of the more advanced features that are unique to the Marvin-tools version of querying.
End of explanation
# the string search condition
my_search = 'z < 0.1'
Explanation: The Marvin Query object allows you to specify a string search condition with which you want to look for results. It will construct the necessary SQL syntax for you, send it to the database at Utah using the Marvin API, and return the results. The Query accepts as a keyword argument search_filter.
Let's try searching for all galaxies with a redshift < 0.1.
End of explanation
# the search condition using the full parameter name
my_search = 'nsa.z < 0.1'
# Let's setup the query. This will not run it automatically.
q = Query(search_filter=my_search)
print(q)
Explanation: The above string search condition is a pseudo-natural language format. Natural language in that you type what you mean to say, and pseudo because it still must be formatted in the standard SQL where condition syntax. This syntax generally takes the form of parameter_name operand value.
Marvin is smart enough to figure out which database table a parameter_name belongs to if and only if that name is a unique parameter name. If not you must specify the database table name along with the parameter name, in the form of table.parameter_name. Most MaNGA global properties come from the NASA-Sloan Atlas (NSA) catalog used for target selection. The database table name thus is nsa. So the full parameter_name for redshift is nsa.z.
If a parameter name is not unique, then Marvin will return an error asking you to fine-tune your parameter name by using the full parameter table.parameter_name
End of explanation
# To run the query
r = q.run()
Explanation: Running the query produces a Marvin Results object (r):
End of explanation
# Print result counts
print('total', r.totalcount)
print('returned', r.count)
Explanation: For number of results < 1000, Marvin will return the entire set of results. For queries that return > 1000, Marvin will paginate the results and only return the first 100, by default. (This can be modified with the limit keyword).
End of explanation
# See the raw SQL
print(r.showQuery())
# See the runtime of your query. This produces a Python datetime.timedelta object showing days, seconds, microseconds
print('timedelta', r.query_runtime)
# See the total time in seconds
print('query time in seconds:', r.query_runtime.total_seconds())
Explanation: It can be useful for informational and debugging purposes to see the raw SQL of your query, and your query runtime. If your query times out or crashes, the Marvin team will need these pieces of info to assess anything.
End of explanation
# Show the results.
r.results[0:10]
Explanation: Query results are stored in r.results. This is a Python list object and can be indexed like an array. Since we have 100 results, let's only look at 10 for brevity.
End of explanation
# my new search
new_search = 'nsa.z < 0.1 and nsa.sersic_mass > 3e11'
config.setRelease('MPL-5')
q2 = Query(search_filter=new_search)
r2 = q2.run()
print(r2.totalcount)
r2.results
Explanation: We will learn how to use the features of our Results object a little bit later, but first let's revise our search to see how more complex search queries work.
Multiple Search Criteria
Let's add to our previous search to find only galaxies with M$\star$ > 3 $\times$ 10$^{11}$ M$\odot$.
Let's use the Sersic profile determination for stellar mass, which is the sersic_mass parameter of the nsa table, so its full search parameter designation will be nsa.sersic_mass. Since it's unique, you can also just use sersic_mass.
Adding multiple search criteria is as easy as writing it how you want it. In this case, we want to AND the two criteria. You can also OR, and NOT criteria.
End of explanation
# new search
new_search = '(z<0.1 and nsa.sersic_logmass > 11.47) or (ifu.name=19* and nsa.sersic_n < 2)'
q3 = Query(search_filter=new_search)
r3 = q3.run()
r3.results[0:5]
Explanation: Compound Search Statements
Let's say we are interested in galaxies with redshift < 0.1 and stellar mass > 3e11 or 19-fiber IFUs with an NSA sersic index < 2. We can compound multiple criteria together using parentheses. Use parentheses to help set the order of precedence. Without parentheses, the order is NOT > AND > OR.
To find 19 fiber IFUs, we'll use the name parameter of the ifu table, which means the full search parameter is ifu.name. However, ifu.name returns the IFU design name, such as 1901, so we need to set the value to 19*, which acts as a wildcard.
End of explanation
my_search = 'nsa.z < 0.1'
q = Query(search_filter=my_search, return_params=['cube.ra', 'cube.dec'])
r = q.run()
r.results[0:5]
Explanation: Returning Additional Parameters
Often you want to run a query and return parameters that you didn't explicitly search on. For instance, you want to find galaxies below a redshift of 0.1 and would like to know their RA and DECs.
This is as easy as specifying the return_params keyword option in Query with either a string (for a single parameter) or a list of strings (for multiple parameters).
End of explanation
spax_search = 'nsa.z < 0.1 and emline_gflux_ha_6564 > 30'
q4 = Query(search_filter=spax_search, return_params=['emline_sew_ha_6564', 'emline_gflux_hb_4862', 'stellar_vel'])
r4 = q4.run()
r4.totalcount
r4.query_runtime.total_seconds()
Explanation: Local (Sub-Spaxel) Queries (... or DAP Zonal Queries)
So far we have seen queries on global galaxy properties. These queries returned a list of galaxies satisfying the search criteria. We can also perform queries on spaxel regions within galaxies.
Let's find all spaxels from galaxies with a redshift < 0.1 that have H-alpha emission line flux > 30.
DAP properties are in a table called spaxelprop. The DAP-derived H-alpha emission line gaussian flux is called emline_gflux_ha_6564. Since this parameter is unique, you can either specify emline_gflux_ha_6564 or spaxelprop.emline_gflux_ha_6564
End of explanation
r4.results[0:5]
# We have a large number of spaxel query results, but from how many actual galaxies?
plateifu = r4.getListOf('plateifu')
print('# unique galaxies', len(set(plateifu)))
print(set(plateifu))
Explanation: Spaxel queries will return a list of all spaxels satisfying your criteria. By default spaxel queries will return the galaxy information, and spaxel x and y.
End of explanation
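As a further illustration we can pull one of the returned columns out of the Results object and look at its distribution. This cell is an addition and a hedged sketch: it assumes getListOf accepts the returned DAP parameter name used in the filter, just as it does for 'plateifu' above; adjust the name if your Marvin version labels the returned column differently.
# Sketch: distribution of the returned H-alpha flux values (assumed column name)
haflux = r4.getListOf('emline_gflux_ha_6564')
plt.hist(haflux, bins=50)
plt.xlabel('H-alpha flux')
plt.ylabel('number of spaxels')
plt.show()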
# Convert to Cubes. For brevity, let's only convert only the first object.
r4.convertToTool('cube', limit=1)
print(r4.objects)
cube = r4.objects[0]
# From a cube, now we can do all things from Marvin Tools, like get a MaNGA MAPS object
maps = cube.getMaps()
print(maps)
# get a emission line sew map
em=maps.getMap('emline_sew', channel='ha_6564')
# plot it
em.plot()
# .. and a stellar velocity map
st=maps.getMap('stellar_vel')
# plot it
st.plot()
Explanation: Once you have a set of query Results, you can easily convert your results into Marvin objects in your workflow. Depending on your result parameters, you can convert to Marvin Cubes, Maps, Spaxels, ModelCubes, or RSS. Let's convert our Results to Marvin Cubes. Note: Depending on the number of results, this conversion step may take a long time. Be careful!
End of explanation
# let's convert to Marvin Spaxels. Again, for brevity, let's only convert the first two.
r4.convertToTool('spaxel', limit=2)
print(r4.objects)
# Now we can do all the Spaxel things, like plot
spaxel = r4.objects[0]
spaxel.spectrum.plot()
Explanation: or since our results are from a spaxel query, we can convert to Marvin Spaxels
End of explanation
r4.toTable()
r4.toFits('my_r4_results_2.fits')
Explanation: You can also convert your query results into other formats like an Astropy Table, or FITS
End of explanation
# retrieve the list
allparams = q.get_available_params()
allparams
Explanation: A note on Table and Name shortcuts
In Queries you must specify a parameter_name or table.parameter_name. However to make it a bit easier, we have created table shortcuts and parameter name shortcuts for a few parameters. (more to be added..)
ifu.name = ifudesign.name
haflux = emline_gflux_ha_6564
g_r = nsa.elpetro_mag_g_r
Retrieving Available Search Parameters
There are many parameters to search with. You can retrieve a list of available parameters to query. Please note that while currently many parameters in the list can technically be queried on, they have not been thoroughly tested to work, nor may they make any sense to query on. We cannot guarantee what will happen. If you find a parameter that should be queryable and does not work, please let us know.
End of explanation |
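Since get_available_params returns a plain Python list, ordinary list operations can be used to search it. The sketch below is an addition for illustration and assumes the items can be represented as strings; the exact structure of the returned entries may differ between Marvin versions.
# Sketch: look for every available parameter mentioning 'flux'
flux_params = [p for p in allparams if 'flux' in str(p)]
print(flux_params[:10])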
4,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigQuery Anonymize Dataset
Copies tables and views from one dataset to another and anonymizes all rows. Used to create sample datasets for dashboards.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter BigQuery Anonymize Dataset Recipe Parameters
Ensure you have user access to both datasets.
Provide the source project and dataset.
Provide the destination project and dataset.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute BigQuery Anonymize Dataset
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: BigQuery Anonymize Dataset
Copies tables and view from one dataset to another and anynonamizes all rows. Used to create sample datasets for dashboards.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
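For reference, here is a hedged sketch of the same cell configured with service credentials instead of user credentials. The project ID and JSON path below are placeholders, and the arguments simply mirror the Configuration constructor shown above.
# Sketch only: adapt and uncomment if you are running with service credentials.
# CONFIG = Configuration(
#     project='my-project-id',          # placeholder project identifier
#     service='/content/service.json',  # placeholder path to downloaded service credentials
#     verbose=True
# )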
End of explanation
FIELDS = {
'auth_read':'service', # Credentials used.
'from_project':'', # Original project to read from.
'from_dataset':'', # Original dataset to read from.
'to_project':None, # Anonymous data will be written to.
'to_dataset':'', # Anonymous data will be written to.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter BigQuery Anonymize Dataset Recipe Parameters
Ensure you have user access to both datasets.
Provide the source project and dataset.
Provide the destination project and dataset.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'anonymize':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'service','description':'Credentials used.'}},
'bigquery':{
'from':{
'project':{'field':{'name':'from_project','kind':'string','order':1,'description':'Original project to read from.'}},
'dataset':{'field':{'name':'from_dataset','kind':'string','order':2,'description':'Original dataset to read from.'}}
},
'to':{
'project':{'field':{'name':'to_project','kind':'string','order':3,'default':None,'description':'Anonymous data will be written to.'}},
'dataset':{'field':{'name':'to_dataset','kind':'string','order':4,'description':'Anonymous data will be written to.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute BigQuery Anonymize Dataset
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
4,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Restart the kernel
After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from Kernel -> Restart Kernel, or by running the following
Step2: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step5: Import libraries and define constants
Step6: Create Feature Store Resources
Create Featurestore
The method to create a Featurestore returns a
long-running operation (LRO). An LRO starts an asynchronous job. LROs are returned for other API
methods too, such as updating or deleting a featurestore. Running the code cell will create a featurestore and print the process log.
Step7: Create Entity Types
Entity types can be created within the Featurestore class. Below, create the Users entity type and Movies entity type. A process log will be printed out.
Step8: Create Features
Features can be created within each entity type. Add defining features to the Users entity type and Movies entity type by using the following methods.
Step9: Ingest Feature Values into Entity Type from a Pandas DataFrame
You need to ingest feature values into your entity type containing the features, so you can later read (online) or batch serve (offline) the feature values from the entity type. In this step, you will learn how to ingest feature values from a Pandas DataFrame into an entity type. We can also import feature values from BigQuery or Google Cloud Storage.
Entity Type Source Files
Step10: Load Avro Files into Pandas DataFrames
Step11: Ingest Feature Values into Users Entity Type
Step12: Ingest Feature Values into Movies Entity Type
Step13: Read/Online Serve Entity's Feature Values from Vertex AI Online Feature Store
Feature Store allows online serving
which lets you read feature values for small batches of entities. It works well when you want to read values of selected features from an entity or multiple entities in an entity type.
Step14: Batch Serve Featurestore's Feature Values from Vertex AI Feature Store
Batch Serving is used to fetch a large batch of feature values at high throughput, and is typically used for training a model or batch prediction. In this section, you will learn how to prepare training examples by using the Featurestore's batch serve function.
Read Instances Source File
Step15: Load Csv File into a Pandas DataFrame
Step16: Change the Dtype of Timestamp to Datetime64
Step17: Batch Serve Feature Values from Movie Predictions Featurestore
Step18: Read the Updated Feature Values
Recall Read from the Entity Type Shows Feature Values from the Last Ingestion
Step19: Ingest Updated Feature Values
Step20: Read from the Entity Type Shows Updated Feature Values from the Latest Ingestion
Step21: Point-in-Time Correctness
Recall Batch Serve From the Last Ingestion Has Missing Data
Step22: Backfill/Correct Point-in-Time Data
Step23: Ingest Backfill/Correct Point-in-Time Data
Step24: Batch Serve From the Latest Ingestion with Backfill/Correction Has Reduced Missing Data
Step25: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also keep the project but delete the featurestore by running the code below | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip uninstall {USER_FLAG} -y google-cloud-aiplatform
! pip uninstall {USER_FLAG} -y google-cloud-bigquery
! pip uninstall {USER_FLAG} -y google-cloud-bigquery-storage
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
! pip install {USER_FLAG} --upgrade google-cloud-bigquery
! pip install {USER_FLAG} --upgrade google-cloud-bigquery-storage
! pip install {USER_FLAG} avro
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/feature_store/sdk-feature-store-pandas.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/feature_store/sdk-feature-store-pandas.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This Colab introduces the Pandas support in the Vertex AI SDK Feature Store. For prerequisites and an introduction to the Vertex AI SDK Feature Store native support, please see this Colab.
Dataset
This Colab uses a movie recommendation dataset as an example throughout all the sections. The task is to train a model to predict if a user is going to watch a movie and to serve this model online.
Objective
In this notebook, you will learn how to:
* Ingest Feature Values from Pandas DataFrame into featurestore's entity types.
* Read Entity Feature Values from Online Feature Store into Pandas DataFrame.
* Batch Serve Feature Values from your featurestore to Pandas DataFrame.
We will also discuss how Vertex AI Feature Store can be useful in the below scenarios:
* online serving with updated feature values
* point-in-time correctness to fetch feature values for training
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud BigQuery
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Before you begin
Install additional packages
For this Colab, you need the Vertex SDK for Python.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from Kernel -> Restart Kernel, or by running the following:
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
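If the detected project is not the one you want, a quick fix (assuming the Cloud SDK is available in your environment) is to point gcloud at the intended project and re-run the cell above:
# Point the Cloud SDK at the chosen project (PROJECT_ID must already be set).
! gcloud config set project $PROJECT_ID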
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "" # @param {type:"string"}
print("Project ID: ", PROJECT_ID)
Explanation: Otherwise, set your project ID here.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import datetime
import pandas as pd
from google.cloud import aiplatform
REGION = "" # @param {type:"string"}
aiplatform.init(project=PROJECT_ID, location=REGION)
Explanation: Import libraries and define constants
End of explanation
movie_predictions_feature_store = aiplatform.Featurestore.create(
featurestore_id="movie_predictions",
online_store_fixed_node_count=1,
)
Explanation: Create Feature Store Resources
Create Featurestore
The method to create a Featurestore returns a
long-running operation (LRO). An LRO starts an asynchronous job. LROs are returned for other API
methods too, such as updating or deleting a featurestore. Running the code cell will create a featurestore and print the process log.
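As a hedged aside (the constructor usage below is my assumption about this SDK version), an existing featurestore can later be retrieved by its ID instead of being recreated:
# Sketch: fetch the featurestore created above in a later session.
# movie_predictions_feature_store = aiplatform.Featurestore(
#     featurestore_name="movie_predictions"
# )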
End of explanation
users_entity_type = movie_predictions_feature_store.create_entity_type(
entity_type_id="users",
description="Users entity",
)
movies_entity_type = movie_predictions_feature_store.create_entity_type(
entity_type_id="movies",
description="Movies entity",
)
Explanation: Create Entity Types
Entity types can be created within the Featurestore class. Below, create the Users entity type and Movies entity type. A process log will be printed out.
End of explanation
users_feature_age = users_entity_type.create_feature(
feature_id="age",
value_type="INT64",
description="User age",
)
users_feature_gender = users_entity_type.create_feature(
feature_id="gender",
value_type="STRING",
description="User gender",
)
users_feature_liked_genres = users_entity_type.create_feature(
feature_id="liked_genres",
value_type="STRING_ARRAY",
description="An array of genres this user liked",
)
movies_feature_configs = {
"title": {
"value_type": "STRING",
"description": "The title of the movie",
},
"genres": {
"value_type": "STRING",
"description": "The genre of the movie",
},
"average_rating": {
"value_type": "DOUBLE",
"description": "The average rating for the movie, range is [1.0-5.0]",
},
}
movie_features = movies_entity_type.batch_create_features(
feature_configs=movies_feature_configs,
)
Explanation: Create Features
Features can be created within each entity type. Add defining features to the Users entity type and Movies entity type by using the following methods.
End of explanation
GCS_USERS_AVRO_URI = (
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/users.avro"
)
GCS_MOVIES_AVRO_URI = (
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movies.avro"
)
USERS_AVRO_FN = "users.avro"
MOVIES_AVRO_FN = "movies.avro"
! gsutil cp $GCS_USERS_AVRO_URI $USERS_AVRO_FN
! gsutil cp $GCS_MOVIES_AVRO_URI $MOVIES_AVRO_FN
Explanation: Ingest Feature Values into Entity Type from a Pandas DataFrame
You need to ingest feature values into your entity type containing the features, so you can later read (online) or batch serve (offline) the feature values from the entity type. In this step, you will learn how to ingest feature values from a Pandas DataFrame into an entity type. We can also import feature values from BigQuery or Google Cloud Storage.
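As a hedged sketch of the Cloud Storage path (the method and argument names reflect my understanding of the aiplatform SDK, and the bucket URI is a placeholder, so verify against your SDK version):
# Sketch: ingest the same features from an Avro file in Cloud Storage instead of a DataFrame.
# users_entity_type.ingest_from_gcs(
#     feature_ids=["age", "gender", "liked_genres"],
#     feature_time="update_time",
#     gcs_source_uris=["gs://my-bucket/users.avro"],  # placeholder bucket
#     gcs_source_type="avro",
#     entity_id_field="user_id",
# )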
Entity Type Source Files
End of explanation
from avro.datafile import DataFileReader
from avro.io import DatumReader
class AvroReader:
def __init__(self, data_file):
self.avro_reader = DataFileReader(open(data_file, "rb"), DatumReader())
def to_dataframe(self):
records = [record for record in self.avro_reader]
return pd.DataFrame.from_records(data=records)
users_avro_reader = AvroReader(data_file=USERS_AVRO_FN)
users_source_df = users_avro_reader.to_dataframe()
print(users_source_df)
movies_avro_reader = AvroReader(data_file=MOVIES_AVRO_FN)
movies_source_df = movies_avro_reader.to_dataframe()
print(movies_source_df)
Explanation: Load Avro Files into Pandas DataFrames
End of explanation
users_entity_type.ingest_from_df(
feature_ids=["age", "gender", "liked_genres"],
feature_time="update_time",
df_source=users_source_df,
entity_id_field="user_id",
)
Explanation: Ingest Feature Values into Users Entity Type
End of explanation
movies_entity_type.ingest_from_df(
feature_ids=["average_rating", "title", "genres"],
feature_time="update_time",
df_source=movies_source_df,
entity_id_field="movie_id",
)
Explanation: Ingest Feature Values into Movies Entity Type
End of explanation
users_read_df = users_entity_type.read(
entity_ids=["dave", "alice", "charlie", "bob", "eve"],
)
print(users_read_df)
movies_read_df = movies_entity_type.read(
entity_ids=["movie_01", "movie_02", "movie_03", "movie_04"],
feature_ids=["title", "genres", "average_rating"],
)
print(movies_read_df)
Explanation: Read/Online Serve Entity's Feature Values from Vertex AI Online Feature Store
Feature Store allows online serving
which lets you read feature values for small batches of entities. It works well when you want to read values of selected features from an entity or multiple entities in an entity type.
End of explanation
GCS_READ_INSTANCES_CSV_URI = "gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movie_prediction.csv"
READ_INSTANCES_CSV_FN = "movie_prediction.csv"  # local filename for the copied read-instances CSV
! gsutil cp $GCS_READ_INSTANCES_CSV_URI $READ_INSTANCES_CSV_FN
Explanation: Batch Serve Featurestore's Feature Values from Vertex AI Feature Store
Batch Serving is used to fetch a large batch of feature values at high throughput, and is typically used for training a model or batch prediction. In this section, you will learn how to prepare training examples by using the Featurestore's batch serve function.
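As a hedged aside, batch serving can also write straight to a BigQuery table instead of returning a DataFrame. The method and parameter names below are my assumption about the SDK and the destination table is a placeholder, so verify before use.
# Sketch: batch serve into BigQuery for very large training sets.
# movie_predictions_feature_store.batch_serve_to_bq(
#     bq_destination_output_uri="bq://my-project.my_dataset.training_data",
#     serving_feature_ids={"users": ["age", "gender", "liked_genres"],
#                          "movies": ["title", "average_rating", "genres"]},
#     read_instances_uri=GCS_READ_INSTANCES_CSV_URI,
# )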
Read Instances Source File
End of explanation
read_instances_df = pd.read_csv(READ_INSTANCES_CSV_FN)
print(read_instances_df)
Explanation: Load Csv File into a Pandas DataFrame
End of explanation
print("before: ", read_instances_df["timestamp"].dtype)
read_instances_df = read_instances_df.astype({"timestamp": "datetime64"})
print("after: ", read_instances_df["timestamp"].dtype)
Explanation: Change the Dtype of Timestamp to Datetime64
End of explanation
movie_predictions_df = movie_predictions_feature_store.batch_serve_to_df(
serving_feature_ids={
"users": ["age", "gender", "liked_genres"],
"movies": ["title", "average_rating", "genres"],
},
read_instances_df=read_instances_df,
)
movie_predictions_df
Explanation: Batch Serve Feature Values from Movie Predictions Featurestore
End of explanation
print(movies_read_df)
Explanation: Read the Updated Feature Values
Recall Read from the Entity Type Shows Feature Values from the Last Ingestion
End of explanation
update_movies_df = pd.DataFrame(
data=[["movie_03", 4.3], ["movie_04", 4.8]],
columns=["movie_id", "average_rating"],
)
print(update_movies_df)
movies_entity_type.ingest_from_df(
feature_ids=["average_rating"],
feature_time=datetime.datetime.now(),
df_source=update_movies_df,
entity_id_field="movie_id",
)
Explanation: Ingest Updated Feature Values
End of explanation
update_movies_read_df = movies_entity_type.read(
entity_ids=["movie_01", "movie_02", "movie_03", "movie_04"],
feature_ids=["title", "genres", "average_rating"],
)
print(update_movies_read_df)
Explanation: Read from the Entity Type Shows Updated Feature Values from the Latest Ingestion
End of explanation
print(movie_predictions_df)
Explanation: Point-in-Time Correctness
Recall Batch Serve From the Last Ingestion Has Missing Data
End of explanation
backfill_users_df = pd.DataFrame(
data=[["bob", 34, "Male", ["Drama"], "2020-02-13 09:35:15"]],
columns=["user_id", "age", "gender", "liked_genres", "update_time"],
)
backfill_users_df = backfill_users_df.astype({"update_time": "datetime64"})
print(backfill_users_df)
backfill_movies_df = pd.DataFrame(
data=[["movie_04", 4.2, "The Dark Knight", "Action", "2020-02-13 09:35:15"]],
columns=["movie_id", "average_rating", "title", "genres", "update_time"],
)
backfill_movies_df = backfill_movies_df.astype({"update_time": "datetime64"})
print(backfill_movies_df)
Explanation: Backfill/Correct Point-in-Time Data
End of explanation
users_entity_type.ingest_from_df(
feature_ids=["age", "gender", "liked_genres"],
feature_time="update_time",
df_source=backfill_users_df,
entity_id_field="user_id",
)
movies_entity_type.ingest_from_df(
feature_ids=["average_rating", "title", "genres"],
feature_time="update_time",
df_source=backfill_movies_df,
entity_id_field="movie_id",
)
Explanation: Ingest Backfill/Correct Point-in-Time Data
End of explanation
backfill_movie_predictions_df = movie_predictions_feature_store.batch_serve_to_df(
serving_feature_ids={
"users": ["age", "gender", "liked_genres"],
"movies": ["title", "average_rating", "genres"],
},
read_instances_df=read_instances_df,
)
print(backfill_movie_predictions_df)
Explanation: Batch Serve From the Latest Ingestion with Backfill/Correction Has Reduced Missing Data
End of explanation
movie_predictions_feature_store.delete(force=True)
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
You can also keep the project but delete the featurestore by running the code below:
End of explanation |
4,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Formulas
Step1: Import convention
You can import explicitly from statsmodels.formula.api
Step2: Alternatively, you can just use the formula namespace of the main statsmodels.api.
Step3: Or you can use the following convention
Step4: These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance
Step5: All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables.
dir(sm.formula) will print a list of available models.
Formula-compatible models have the following generic call signature
Step6: Fit the model
Step7: Categorical variables
Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories.
If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C() operator
Step8: Patsy's more advanced features for categorical variables are discussed in
Step9: Multiplicative interactions
"
Step10: Many other things are possible with operators. Please consult the patsy docs to learn more.
Functions
You can apply vectorized functions to the variables in your model
Step11: Define a custom function
Step12: Any function that is in the calling namespace is available to the formula.
Using formulas with models that do not (yet) support them
Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices
can then be fed to the fitting function as endog and exog arguments.
To generate numpy arrays
Step13: To generate pandas data frames | Python Code:
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
Explanation: Formulas: Fitting models using R-style formulas
Since version 0.5.0, statsmodels allows users to fit statistical models using R-style formulas. Internally, statsmodels uses the patsy package to convert formulas and data to the matrices that are used in model fitting. The formula framework is quite powerful; this tutorial only scratches the surface. A full description of the formula language can be found in the patsy docs:
Patsy formula language description
Loading modules and functions
End of explanation
from statsmodels.formula.api import ols
Explanation: Import convention
You can import explicitly from statsmodels.formula.api
End of explanation
sm.formula.ols
Explanation: Alternatively, you can just use the formula namespace of the main statsmodels.api.
End of explanation
import statsmodels.formula.api as smf
Explanation: Or you can use the following convention
End of explanation
sm.OLS.from_formula
Explanation: These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance
End of explanation
dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna()
df.head()
Explanation: All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables.
dir(sm.formula) will print a list of available models.
Formula-compatible models have the following generic call signature: (formula, data, subset=None, *args, **kwargs)
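For instance, subset accepts any array-like that pandas can index with, so you can fit on a slice of the data without building a new frame (a minimal sketch; the boolean mask below is purely illustrative):
res_subset = smf.ols('Lottery ~ Literacy + Wealth', data=df, subset=df['Wealth'] > 50).fit()
print(res_subset.params)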
OLS regression using formulas
To begin, we fit the linear model described on the Getting Started page. Download the data, subset columns, and list-wise delete to remove missing observations:
End of explanation
mod = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df)
res = mod.fit()
print(res.summary())
Explanation: Fit the model:
End of explanation
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit()
print(res.params)
Explanation: Categorical variables
Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories.
If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C() operator:
End of explanation
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) -1 ', data=df).fit()
print(res.params)
Explanation: Patsy's more advanced features for categorical variables are discussed in: Patsy: Contrast Coding Systems for categorical variables
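For instance, patsy lets you pick the reference (dropped) level explicitly with Treatment coding. A minimal sketch, assuming 'E' is one of the Region labels in this dataset:
res_treat = ols(formula="Lottery ~ Literacy + Wealth + C(Region, Treatment(reference='E'))", data=df).fit()
print(res_treat.params)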
Operators
We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix.
Removing variables
The "-" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by:
End of explanation
res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit()
res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit()
print(res1.params, '\n')
print(res2.params)
Explanation: Multiplicative interactions
":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together:
End of explanation
res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit()
print(res.params)
Explanation: Many other things are possible with operators. Please consult the patsy docs to learn more.
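Two more operator examples worth knowing (a short sketch): "**" expands all interactions up to a given degree, and "I()" protects arithmetic so that "+" means numeric addition instead of column union.
res_poly = ols(formula='Lottery ~ (Literacy + Wealth)**2', data=df).fit()
res_sum = ols(formula='Lottery ~ I(Literacy + Wealth)', data=df).fit()
print(res_poly.params, '\n')
print(res_sum.params)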
Functions
You can apply vectorized functions to the variables in your model:
End of explanation
def log_plus_1(x):
return np.log(x) + 1.
res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit()
print(res.params)
Explanation: Define a custom function:
End of explanation
import patsy
f = 'Lottery ~ Literacy * Wealth'
y, X = patsy.dmatrices(f, df, return_type='matrix')
print(y[:5])
print(X[:5])
Explanation: Any function that is in the calling namespace is available to the formula.
Using formulas with models that do not (yet) support them
Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices
can then be fed to the fitting function as endog and exog arguments.
To generate numpy arrays:
End of explanation
f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='dataframe')
print(y[:5])
print(X[:5])
print(sm.OLS(y, X).fit().summary())
Explanation: To generate pandas data frames:
End of explanation |
4,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the library (odd number of qubits)
This code tutorial shows how to estimate a 1-RDM and perform variational optimization
Step1: Generate the input files, set up quantum resources, and set up the OpdmFunctional to make measurements.
Step2: The displayed text is the output of the gradient based restricted Hartree-Fock. We define the gradient in rhf_objective and use the conjugate-gradient optimizer to optimize the basis rotation parameters. This is equivalent to doing Hartree-Fock theory from the canonical transformation perspective.
Next, we will do the following
Step3: This should print out the various energies estimated from the 1-RDM along with error bars, generated from resampling the 1-RDM based on the estimated covariance.
Optimization
We use the sampling functionality to variationally relax the parameters of
the ansatz so that the energy decreases.
For this we will need the augmented Hessian optimizer
The optimizer code we have takes
Step4: Each iteration prints out a variety of information that the user might find useful. Watching energies go down is known to be one of the best forms of entertainment during a shelter-in-place order.
After the optimization we can print the energy as a function of iteration number to see how close the energy gets to the true minimum. | Python Code:
# Import library functions and define a helper function
import numpy as np
import cirq
from openfermioncirq.experiments.hfvqe.gradient_hf import rhf_func_generator
from openfermioncirq.experiments.hfvqe.opdm_functionals import OpdmFunctional
from openfermioncirq.experiments.hfvqe.analysis import (compute_opdm,
mcweeny_purification,
resample_opdm,
fidelity_witness,
fidelity)
from openfermioncirq.experiments.hfvqe.third_party.higham import fixed_trace_positive_projection
from openfermioncirq.experiments.hfvqe.molecular_example_odd_qubits import make_h3_2_5
Explanation: Using the library (odd number of qubits)
This code tutorial shows how to estimate a 1-RDM and perform variational optimization
End of explanation
rhf_objective, molecule, parameters, obi, tbi = make_h3_2_5()
ansatz, energy, gradient = rhf_func_generator(rhf_objective)
# settings for quantum resources
qubits = [cirq.GridQubit(0, x) for x in range(molecule.n_orbitals)]
sampler = cirq.Simulator(dtype=np.complex128) # this can be a QuantumEngine
# OpdmFunctional contains an interface for running experiments
opdm_func = OpdmFunctional(qubits=qubits,
sampler=sampler,
constant=molecule.nuclear_repulsion,
one_body_integrals=obi,
two_body_integrals=tbi,
num_electrons=molecule.n_electrons // 2, # only simulate spin-up electrons
clean_xxyy=True,
purification=True
)
Explanation: Generate the input files, set up quantum resources, and set up the OpdmFunctional to make measurements.
End of explanation
# 1.
# default to 250_000 shots for each circuit.
# 7 circuits total, printed for your viewing pleasure
# return value is a dictionary with circuit results for each permutation
measurement_data = opdm_func.calculate_data(parameters)
# 2.
opdm, var_dict = compute_opdm(measurement_data,
return_variance=True)
opdm_pure = mcweeny_purification(opdm)
# 3.
raw_energies = []
raw_fidelity_witness = []
purified_energies = []
purified_fidelity_witness = []
purified_fidelity = []
true_unitary = ansatz(parameters)
nocc = molecule.n_electrons // 2
nvirt = molecule.n_orbitals - nocc
initial_fock_state = [1] * nocc + [0] * nvirt
for _ in range(1000): # 1000 repetitions of the measurement
new_opdm = resample_opdm(opdm, var_dict)
raw_energies.append(opdm_func.energy_from_opdm(new_opdm))
raw_fidelity_witness.append(
fidelity_witness(target_unitary=true_unitary,
omega=initial_fock_state,
measured_opdm=new_opdm)
)
# fix positivity and trace of sampled 1-RDM if strictly outside
# feasible set
w, v = np.linalg.eigh(new_opdm)
if len(np.where(w < 0)[0]) > 0:
new_opdm = fixed_trace_positive_projection(new_opdm, nocc)
new_opdm_pure = mcweeny_purification(new_opdm)
purified_energies.append(opdm_func.energy_from_opdm(new_opdm_pure))
purified_fidelity_witness.append(
fidelity_witness(target_unitary=true_unitary,
omega=initial_fock_state,
measured_opdm=new_opdm_pure)
)
purified_fidelity.append(
fidelity(target_unitary=true_unitary,
measured_opdm=new_opdm_pure)
)
print('\n\n\n\n')
print("Canonical Hartree-Fock energy ", molecule.hf_energy)
print("True energy ", energy(parameters))
print("Raw energy ", opdm_func.energy_from_opdm(opdm),
"+- ", np.std(raw_energies))
print("Raw fidelity witness ", np.mean(raw_fidelity_witness).real,
"+- ", np.std(raw_fidelity_witness))
print("purified energy ", opdm_func.energy_from_opdm(opdm_pure),
"+- ", np.std(purified_eneriges))
print("Purified fidelity witness ", np.mean(purified_fidelity_witness).real,
"+- ", np.std(purified_fidelity_witness))
print("Purified fidelity ", np.mean(purified_fidelity).real,
"+- ", np.std(purified_fidelity))
Explanation: The displayed text is the output of the gradient based restricted Hartree-Fock. We define the gradient in rhf_objective and use the conjugate-gradient optimizer to optimize the basis rotation parameters. This is equivalent to doing Hartree-Fock theory from the canonical transformation perspective.
Next, we will do the following:
Do measurements for a given set of parameters
Compute 1-RDM, variances, and purification
Compute energy, fidelities, and errorbars
End of explanation
from openfermioncirq.experiments.hfvqe.mfopt import moving_frame_augmented_hessian_optimizer
from openfermioncirq.experiments.hfvqe.opdm_functionals import RDMGenerator
import matplotlib.pyplot as plt
rdm_generator = RDMGenerator(opdm_func, purification=True)
opdm_generator = rdm_generator.opdm_generator
result = moving_frame_augmented_hessian_optimizer(
rhf_objective=rhf_objective,
initial_parameters=parameters + 5.0E-1,
opdm_aa_measurement_func=opdm_generator,
verbose=True, delta=0.03,
max_iter=120,
hessian_update='diagonal',
rtol=0.050E-2)
Explanation: This should print out the various energies estimated from the 1-RDM along with error bars, generated from resampling the 1-RDM based on the estimated covariance.
Optimization
We use the sampling functionality to variationally relax the parameters of
the ansatz so that the energy decreases.
For this we will need the augmented Hessian optimizer
The optimizer code we have takes:
rhf_objective object, initial parameters,
a function that takes an n x n unitary and returns an opdm
maximum iterations,
hessian_update which indicates how much of the Hessian to use
rtol which is the gradient stopping condition.
A natural thing that we will want to save is the variance dictionary of
the non-purified 1-RDM. This is accomplished by wrapping the 1-RDM
estimation code in another object that keeps track of the variance
dictionaries.
End of explanation
plt.semilogy(range(len(result.func_vals)),
np.abs(np.array(result.func_vals) - energy(parameters)),
'C0o-')
plt.xlabel("Optimization Iterations", fontsize=18)
plt.ylabel(r"$|E - E^{*}|$", fontsize=18)
plt.tight_layout()
plt.show()
Explanation: Each iteration prints out a variety of information that the user might find useful. Watching energies go down is known to be one of the best forms of entertainment during a shelter-in-place order.
After the optimization we can print the energy as a function of iteration number to see how close the energy gets to the true minimum.
End of explanation |